1 change: 1 addition & 0 deletions tidb-cloud/changefeed-overview.md
@@ -8,5 +8,6 @@ TiDB Cloud provides the following changefeeds to help you stream data from TiDB

- [Sink to Apache Kafka](/tidb-cloud/changefeed-sink-to-apache-kafka.md)
- [Sink to MySQL](/tidb-cloud/changefeed-sink-to-mysql.md)
- [Secondary Replication](/tidb-cloud/changefeed-replication.md)


To learn about the billing for changefeeds in TiDB Cloud, see [Changefeed billing](/tidb-cloud/tidb-cloud-billing-tcu.md).
44 changes: 13 additions & 31 deletions tidb-cloud/changefeed-replication.md
@@ -9,15 +9,6 @@ TiDB Cloud Replication is a feature that allows you to create a continuously rep

With TiDB Cloud Replication, you can perform quick disaster recovery of a database in the event of a regional disaster or large-scale failure, which helps achieve business continuity. Once a secondary cluster is set up, you can manually initiate geographic failover to the secondary cluster in a different region.

To support application replication, you must deploy your applications in both primary and secondary regions, and ensure that each application is connected to the TiDB cluster in the same region. The applications in the secondary region are on standby. When the primary region fails, you can initiate a "Detach" operation to make the TiDB cluster in the secondary region active, and then transfer all data traffic to the applications in the secondary region.

The following diagram illustrates a typical deployment of a geo-redundant cloud application using TiDB Cloud Replication:
@@ -30,6 +21,19 @@ Creating a secondary TiDB cluster is only a part of the business continuity solu
- Check whether each component of the application is resilient to the same failures and becomes available within the recovery time objective (RTO) of your application. The typical components of an application include client software (such as browsers with custom JavaScript), web front ends, storage, and DNS.
- Identify all dependent services, check the guarantees and capabilities of these services, and ensure that your application is operational during a failover of these services.

## Limitations

Currently, the **TiDB Cloud Replication** feature is in **Public Preview** with the following limitations:

* One primary cluster can have only one replication.
* You cannot use a secondary cluster as the source of **TiDB Cloud Replication** to another cluster.
* **TiDB Cloud Replication** is incompatible with [**Sink to Apache Kafka**](/tidb-cloud/changefeed-sink-to-apache-kafka.md) and [**Sink to MySQL**](/tidb-cloud/changefeed-sink-to-mysql.md). When **TiDB Cloud Replication** is enabled, neither the primary nor the secondary cluster can use the **Sink to Apache Kafka** or **Sink to MySQL** changefeed, and vice versa.
* Because TiDB Cloud uses TiCDC to establish replication, it has the same [restrictions as TiCDC](https://docs.pingcap.com/tidb/stable/ticdc-overview#restrictions), such as requiring each replicated table to have a valid index (see the sketch after this list).
* Depending on the network latency between the primary and secondary clusters, the performance and stability of **TiDB Cloud Replication** are affected as follows:

    * To achieve the best performance and stability of **TiDB Cloud Replication**, a network latency within 120 milliseconds is recommended.

    * When the network latency is between 120 milliseconds and 180 milliseconds, the performance of **TiDB Cloud Replication** might be affected.

    * When the network latency exceeds 180 milliseconds, **TiDB Cloud Replication** cannot provide service normally.
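
Because the underlying TiCDC replication handles only tables with a valid index, you might want to confirm that your schema is eligible before enabling **TiDB Cloud Replication**. The following SQL is a minimal sketch of that check; the `orders` and `order_log` tables are illustrative examples, not objects that exist in TiDB Cloud.

{{< copyable "sql" >}}

```sql
-- A table with a primary key (or a unique index on NOT NULL columns)
-- has a valid index, so TiCDC-based replication can handle it.
CREATE TABLE orders (
    id BIGINT PRIMARY KEY,
    customer_id BIGINT NOT NULL,
    amount DECIMAL(10, 2)
);

-- A table with no primary key and no unique NOT NULL index has no
-- valid index, so TiCDC-based replication skips it.
CREATE TABLE order_log (
    order_id BIGINT,
    note VARCHAR(255)
);

-- As a rough starting point, list tables in the current schema that
-- lack a primary key. (This does not cover unique NOT NULL indexes.)
SELECT t.TABLE_NAME
FROM information_schema.TABLES t
LEFT JOIN information_schema.TABLE_CONSTRAINTS c
    ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
    AND c.TABLE_NAME = t.TABLE_NAME
    AND c.CONSTRAINT_TYPE = 'PRIMARY KEY'
WHERE t.TABLE_SCHEMA = DATABASE()
    AND t.TABLE_TYPE = 'BASE TABLE'
    AND c.CONSTRAINT_NAME IS NULL;
```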


## Terminology and capabilities of TiDB Cloud Replication

### Automatic asynchronous replication
@@ -42,28 +46,6 @@ The secondary cluster is in the read-only mode. If you have any read-only worklo

To handle read-intensive workloads in the same region, you can use **TiDB Cloud Replication** to create a readable secondary cluster in the same region as the primary cluster. However, because a secondary cluster in the same region does not provide additional resiliency against large-scale outages or catastrophic failures, do not use it as a failover target for regional disaster recovery purposes.
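
Before routing read workloads to the secondary cluster, you can verify that you are connected to the read-only cluster. The following is a minimal sketch, assuming the secondary's read-only mode is exposed through the same `tidb_super_read_only` variable that this document uses for a detached cluster, and reusing the illustrative `orders` table from the earlier sketch:

{{< copyable "sql" >}}

```sql
-- On the secondary cluster, this returns 1 (ON): writes are rejected.
SELECT @@global.tidb_super_read_only;

-- Read-only workloads, such as reports, run normally on the secondary.
SELECT customer_id, SUM(amount) AS total
FROM orders
GROUP BY customer_id;

-- A write such as the following fails with an error while the cluster
-- is read-only:
-- INSERT INTO orders (id, customer_id, amount) VALUES (1, 1, 9.99);
```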

### Planned Detach

You can trigger **Planned Detach** manually. In most cases, it is used for planned maintenance, such as disaster recovery drills. **Planned Detach** makes sure that all data changes are replicated to the secondary cluster without data loss (RPO=0). The RTO depends on the replication lag between the primary and secondary clusters; in most cases, it is at the level of minutes.

**Planned Detach** detaches the secondary cluster from the primary cluster and turns it into an individual cluster. When **Planned Detach** is triggered, it performs the following steps:

1. Sets the primary cluster as read-only to prevent new transactions from being committed to it.
2. Waits until the secondary cluster is fully synced with the primary cluster.
3. Stops the replication from the primary cluster to the secondary cluster.
4. Sets the original secondary cluster as writable, which makes it available to serve your business.

After **Planned Detach** is finished, the original primary cluster is set as read-only. If you still need to write to the original primary cluster, you can do one of the following to set the cluster as writable explicitly:

- Go to the cluster details page, click **Settings**, and then click the **Make Writable** drop-down button.
- Connect to the SQL port of the original primary cluster and execute the following statement:

{{< copyable "sql" >}}

```sql
SET GLOBAL tidb_super_read_only = OFF;
```
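
To confirm that the cluster accepts writes again, a quick check might look like the following; the `maintenance_log` table is illustrative:

{{< copyable "sql" >}}

```sql
-- Returns 0 (OFF) once the cluster is writable again.
SELECT @@global.tidb_super_read_only;

-- A test write now succeeds.
CREATE TABLE IF NOT EXISTS maintenance_log (
    id BIGINT PRIMARY KEY AUTO_INCREMENT,
    note VARCHAR(255)
);
INSERT INTO maintenance_log (note) VALUES ('writable after Planned Detach');
```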

### Force Detach

To recover from an unplanned outage, use **Force Detach**. In the event of a catastrophic failure in the region where the primary cluster is located, use **Force Detach** so that the secondary cluster can serve your business as quickly as possible and ensure business continuity. Because this operation immediately makes the secondary cluster serve as an individual cluster and does not wait for any unreplicated data, the RPO depends on the primary-secondary replication lag, while the RTO depends on how quickly you trigger **Force Detach**.