From 37c296786dfce9556fa398419eea090ada92a9cf Mon Sep 17 00:00:00 2001
From: Peach Leach
Date: Tue, 16 Dec 2025 15:54:40 -0500
Subject: [PATCH 1/5] Added section on multi region behavior

Added section on multi region behavior
---
 .../physical-cluster-replication-technical-overview.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/src/current/v26.1/physical-cluster-replication-technical-overview.md b/src/current/v26.1/physical-cluster-replication-technical-overview.md
index 8a465f0175c..f569d57b469 100644
--- a/src/current/v26.1/physical-cluster-replication-technical-overview.md
+++ b/src/current/v26.1/physical-cluster-replication-technical-overview.md
@@ -68,3 +68,9 @@ When a PCR stream is started with a `readonly` virtual cluster, the job will del
 After reverting any necessary data, the standby virtual cluster is promoted as available to serve traffic and the replication job ends.
 
 For details on failing back to the primary cluster following a failover, refer to [Fail back to the primary cluster]({% link {{ page.version.version }}/failover-replication.md %}#failback).
+
+## Multi-region behavior
+
+You can use PCR to replicate between clusters with different cluster regions, database regions, and table localities. Mismatched regions and localities do not impact failover or ability to access clusters after failover, but they do impact leaseholders and locality-dependent settings. Clusters replicating across different regions may also experience a slight decrease in performance due to longer replication times.
+
+After a failover event involving clusters in different regions, do not change any configurations on your standby cluster if you plan to fail back to the original primary cluster. If you plan to start using the standby cluster for long-running production traffic rather than performing a failback, adjust the configurations on the standby cluster to optimize for your traffic. When adjusting configurations, ensure that the new settings can be satisfied on the standby cluster. In particular, ensure that the cluster does not have pinned leaseholders for a region that does not exist on the cluster.
\ No newline at end of file

From d65b1004611db8f608765d051e49091619e559a4 Mon Sep 17 00:00:00 2001
From: Peach Leach
Date: Thu, 18 Dec 2025 11:19:13 -0500
Subject: [PATCH 2/5] Added links

Added links
---
 .../v26.1/physical-cluster-replication-technical-overview.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/current/v26.1/physical-cluster-replication-technical-overview.md b/src/current/v26.1/physical-cluster-replication-technical-overview.md
index f569d57b469..f7d9120ec23 100644
--- a/src/current/v26.1/physical-cluster-replication-technical-overview.md
+++ b/src/current/v26.1/physical-cluster-replication-technical-overview.md
@@ -71,6 +71,6 @@ For details on failing back to the primary cluster following a failover, refer t
 
 ## Multi-region behavior
 
-You can use PCR to replicate between clusters with different cluster regions, database regions, and table localities. Mismatched regions and localities do not impact failover or ability to access clusters after failover, but they do impact leaseholders and locality-dependent settings. Clusters replicating across different regions may also experience a slight decrease in performance due to longer replication times.
+You can use PCR to replicate between clusters with different [cluster regions]({% link {{ page.version.version }}/multiregion-overview.md %}#cluster-regions), [database regions]({% link {{ page.version.version }}/multiregion-overview.md %}#database-regions), and [table localities]({% link {{ page.version.version }}/table-localities.md %}). Mismatched regions and localities do not impact the [failover process]({% link {{ page.version.version }}/failover-replication.md %}) or ability to access clusters after failover, but they do impact [leaseholders]({% link {{ page.version.version }}/architecture/glossary.md %}#leaseholder) and locality-dependent settings. Clusters replicating across different regions may also experience a slight decrease in performance due to longer replication times.
 
-After a failover event involving clusters in different regions, do not change any configurations on your standby cluster if you plan to fail back to the original primary cluster. If you plan to start using the standby cluster for long-running production traffic rather than performing a failback, adjust the configurations on the standby cluster to optimize for your traffic. When adjusting configurations, ensure that the new settings can be satisfied on the standby cluster. In particular, ensure that the cluster does not have pinned leaseholders for a region that does not exist on the cluster.
\ No newline at end of file
+After a failover event involving clusters in different regions, do not change any configurations on your standby cluster if you plan to [fail back to the original primary cluster]({% link {{ page.version.version }}/failover-replication.md %}#failback). If you plan to start using the standby cluster for long-running production traffic rather than performing a failback, adjust the configurations on the standby cluster to optimize for your traffic. When adjusting configurations, ensure that the new settings can be satisfied on the standby cluster. In particular, ensure that the cluster does not have pinned leaseholders for a region that does not exist on the cluster.
\ No newline at end of file

From 642ca4dd865e8f06ad1392669abd925e2a129df6 Mon Sep 17 00:00:00 2001
From: Peach Leach
Date: Thu, 18 Dec 2025 14:52:59 -0500
Subject: [PATCH 3/5] Added clarifying paragraph

Added another paragraph explaining what settings are affected, also fixed header type
---
 .../v26.1/physical-cluster-replication-technical-overview.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/src/current/v26.1/physical-cluster-replication-technical-overview.md b/src/current/v26.1/physical-cluster-replication-technical-overview.md
index f7d9120ec23..a82961f9c10 100644
--- a/src/current/v26.1/physical-cluster-replication-technical-overview.md
+++ b/src/current/v26.1/physical-cluster-replication-technical-overview.md
@@ -69,8 +69,10 @@ After reverting any necessary data, the standby virtual cluster is promoted as a
 For details on failing back to the primary cluster following a failover, refer to [Fail back to the primary cluster]({% link {{ page.version.version }}/failover-replication.md %}#failback).
 
-## Multi-region behavior
+### Multi-region behavior
 
 You can use PCR to replicate between clusters with different [cluster regions]({% link {{ page.version.version }}/multiregion-overview.md %}#cluster-regions), [database regions]({% link {{ page.version.version }}/multiregion-overview.md %}#database-regions), and [table localities]({% link {{ page.version.version }}/table-localities.md %}). Mismatched regions and localities do not impact the [failover process]({% link {{ page.version.version }}/failover-replication.md %}) or ability to access clusters after failover, but they do impact [leaseholders]({% link {{ page.version.version }}/architecture/glossary.md %}#leaseholder) and locality-dependent settings. Clusters replicating across different regions may also experience a slight decrease in performance due to longer replication times.
 
+If the localities on the primary cluster do not match the localities on the standby cluster, the standby cluster may be unable to satisfy replicating locality constraints. For example, if a replicated regional by row table has partitions in `us-east`, `us-central`, and `us-west`, and the standby cluster only has nodes with the locality tags `us-east` and `us-central`, the standby cluster cannot satisfy the regional by row `us-west` partition constraint. Data with unsatisfiable partition constraints is placed in an arbitrary location on the standby cluster, which can cause performance issues in the case of a failover event due to latency between regions.
+
 After a failover event involving clusters in different regions, do not change any configurations on your standby cluster if you plan to [fail back to the original primary cluster]({% link {{ page.version.version }}/failover-replication.md %}#failback). If you plan to start using the standby cluster for long-running production traffic rather than performing a failback, adjust the configurations on the standby cluster to optimize for your traffic. When adjusting configurations, ensure that the new settings can be satisfied on the standby cluster. In particular, ensure that the cluster does not have pinned leaseholders for a region that does not exist on the cluster.
\ No newline at end of file

From cd859fdad88b257cfabdb8c4b15c5b258ed3995d Mon Sep 17 00:00:00 2001
From: Peach Leach
Date: Tue, 23 Dec 2025 11:24:36 -0500
Subject: [PATCH 4/5] Removed sentence

removed one sentence based on tech review
---
 .../v26.1/physical-cluster-replication-technical-overview.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/current/v26.1/physical-cluster-replication-technical-overview.md b/src/current/v26.1/physical-cluster-replication-technical-overview.md
index a82961f9c10..1ef94211a9a 100644
--- a/src/current/v26.1/physical-cluster-replication-technical-overview.md
+++ b/src/current/v26.1/physical-cluster-replication-technical-overview.md
@@ -71,7 +71,7 @@ For details on failing back to the primary cluster following a failover, refer t
 
 ### Multi-region behavior
 
-You can use PCR to replicate between clusters with different [cluster regions]({% link {{ page.version.version }}/multiregion-overview.md %}#cluster-regions), [database regions]({% link {{ page.version.version }}/multiregion-overview.md %}#database-regions), and [table localities]({% link {{ page.version.version }}/table-localities.md %}). Mismatched regions and localities do not impact the [failover process]({% link {{ page.version.version }}/failover-replication.md %}) or ability to access clusters after failover, but they do impact [leaseholders]({% link {{ page.version.version }}/architecture/glossary.md %}#leaseholder) and locality-dependent settings. Clusters replicating across different regions may also experience a slight decrease in performance due to longer replication times.
+You can use PCR to replicate between clusters with different [cluster regions]({% link {{ page.version.version }}/multiregion-overview.md %}#cluster-regions), [database regions]({% link {{ page.version.version }}/multiregion-overview.md %}#database-regions), and [table localities]({% link {{ page.version.version }}/table-localities.md %}). Mismatched regions and localities do not impact the [failover process]({% link {{ page.version.version }}/failover-replication.md %}) or ability to access clusters after failover, but they do impact [leaseholders]({% link {{ page.version.version }}/architecture/glossary.md %}#leaseholder) and locality-dependent settings.
 
 If the localities on the primary cluster do not match the localities on the standby cluster, the standby cluster may be unable to satisfy replicating locality constraints. For example, if a replicated regional by row table has partitions in `us-east`, `us-central`, and `us-west`, and the standby cluster only has nodes with the locality tags `us-east` and `us-central`, the standby cluster cannot satisfy the regional by row `us-west` partition constraint. Data with unsatisfiable partition constraints is placed in an arbitrary location on the standby cluster, which can cause performance issues in the case of a failover event due to latency between regions.
 

From 3ad1188af05d713f4cedfcebd028e7800b7bc4b1 Mon Sep 17 00:00:00 2001
From: Peach Leach
Date: Mon, 29 Dec 2025 14:23:53 -0500
Subject: [PATCH 5/5] Changes from docs review

Small changes based on docs review- also backported since this should be the final version
---
 src/current/v24.1/set-up-physical-cluster-replication.md | 2 +-
 .../physical-cluster-replication-technical-overview.md | 8 ++++++++
 src/current/v24.3/set-up-physical-cluster-replication.md | 2 +-
 .../physical-cluster-replication-technical-overview.md | 8 ++++++++
 src/current/v25.2/set-up-physical-cluster-replication.md | 2 +-
 .../physical-cluster-replication-technical-overview.md | 8 ++++++++
 src/current/v25.3/set-up-physical-cluster-replication.md | 2 +-
 .../physical-cluster-replication-technical-overview.md | 8 ++++++++
 src/current/v25.4/set-up-physical-cluster-replication.md | 2 +-
 .../physical-cluster-replication-technical-overview.md | 4 ++--
 src/current/v26.1/set-up-physical-cluster-replication.md | 2 +-
 11 files changed, 40 insertions(+), 8 deletions(-)

diff --git a/src/current/v24.1/set-up-physical-cluster-replication.md b/src/current/v24.1/set-up-physical-cluster-replication.md
index 6c678ec3c51..c125eb647be 100644
--- a/src/current/v24.1/set-up-physical-cluster-replication.md
+++ b/src/current/v24.1/set-up-physical-cluster-replication.md
@@ -39,7 +39,7 @@ The high-level steps in this tutorial are:
 
 - To set up each cluster, you can follow [Deploy CockroachDB on Premises]({% link {{ page.version.version }}/deploy-cockroachdb-on-premises.md %}). When you initialize the cluster with the [`cockroach init`]({% link {{ page.version.version }}/cockroach-init.md %}) command, you **must** pass the `--virtualized` or `--virtualized-empty` flag. Refer to the cluster creation steps for the [primary cluster](#initialize-the-primary-cluster) and for the [standby cluster](#initialize-the-standby-cluster) for details.
 - The [Deploy CockroachDB on Premises]({% link {{ page.version.version }}/deploy-cockroachdb-on-premises.md %}) tutorial creates a self-signed certificate for each {{ site.data.products.core }} cluster. To create certificates signed by an external certificate authority, refer to [Create Security Certificates using OpenSSL]({% link {{ page.version.version }}/create-security-certificates-openssl.md %}).
 - All nodes in each cluster will need access to the Certificate Authority for the other cluster. Refer to [Manage cluster certificates](#step-3-manage-cluster-certificates-and-generate-connection-strings).
-- The primary and standby clusters can have different [region topologies]({% link {{ page.version.version }}/topology-patterns.md %}). However, behavior for features that rely on multi-region primitives, such as Region by Row and Region by Table, may be affected.
+- The primary and standby clusters can have different [region topologies]({% link {{ page.version.version }}/topology-patterns.md %}). However, behavior for features that rely on multi-region primitives, such as Region by Row and Region by Table, may be affected. For more information, refer to [Multi-region behavior and best practices]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %}#multi-region-behavior-and-best-practices).
 
 ## Step 1. Create the primary cluster
diff --git a/src/current/v24.3/physical-cluster-replication-technical-overview.md b/src/current/v24.3/physical-cluster-replication-technical-overview.md
index 1b779db9c25..1c503d81673 100644
--- a/src/current/v24.3/physical-cluster-replication-technical-overview.md
+++ b/src/current/v24.3/physical-cluster-replication-technical-overview.md
@@ -68,3 +68,11 @@ When a PCR stream is started with a `readonly` virtual cluster, the job will del
 After reverting any necessary data, the standby virtual cluster is promoted as available to serve traffic and the replication job ends.
 
 For details on failing back to the primary cluster following a failover, refer to [Fail back to the primary cluster]({% link {{ page.version.version }}/failover-replication.md %}#failback).
+
+### Multi-region behavior and best practices
+
+You can use PCR to replicate between clusters with different [cluster regions]({% link {{ page.version.version }}/multiregion-overview.md %}#cluster-regions), [database regions]({% link {{ page.version.version }}/multiregion-overview.md %}#database-regions), and [table localities]({% link {{ page.version.version }}/table-localities.md %}). Mismatched regions and localities do not impact the [failover process]({% link {{ page.version.version }}/failover-replication.md %}) or ability to access clusters after failover, but they do impact [leaseholders]({% link {{ page.version.version }}/architecture/glossary.md %}#leaseholder) and locality-dependent settings.
+
+If the localities on the primary cluster do not match the localities on the standby cluster, the standby cluster may be unable to satisfy replicating locality constraints. For example, if a replicated `REGIONAL BY ROW` table has partitions in `us-east`, `us-central`, and `us-west`, and the standby cluster only has nodes with the locality tags `us-east` and `us-central`, the standby cluster cannot satisfy the `REGIONAL BY ROW` `us-west` partition constraint. Data with unsatisfiable partition constraints is placed in an arbitrary location on the standby cluster, which can cause performance issues in the case of a failover event due to latency between regions.
+
+After a failover event involving clusters in different regions, do not change any configurations on your standby cluster if you plan to [fail back to the original primary cluster]({% link {{ page.version.version }}/failover-replication.md %}#failback). If you plan to start using the standby cluster for long-running production traffic rather than performing a failback, adjust the configurations on the standby cluster to optimize for your traffic. When adjusting configurations, ensure that the new settings can be satisfied on the standby cluster. In particular, ensure that the cluster does not have pinned leaseholders for a region that does not exist on the cluster.
diff --git a/src/current/v24.3/set-up-physical-cluster-replication.md b/src/current/v24.3/set-up-physical-cluster-replication.md
index 1902fc7d837..1ae382402e3 100644
--- a/src/current/v24.3/set-up-physical-cluster-replication.md
+++ b/src/current/v24.3/set-up-physical-cluster-replication.md
@@ -36,7 +36,7 @@ The high-level steps in this tutorial are:
 
 - The [Deploy CockroachDB on Premises]({% link {{ page.version.version }}/deploy-cockroachdb-on-premises.md %}) tutorial creates a self-signed certificate for each {{ site.data.products.core }} cluster. To create certificates signed by an external certificate authority, refer to [Create Security Certificates using OpenSSL]({% link {{ page.version.version }}/create-security-certificates-openssl.md %}).
 - All nodes in each cluster will need access to the Certificate Authority for the other cluster. Refer to [Manage cluster certificates](#step-3-manage-cluster-certificates-and-generate-connection-strings).
 - An [{{ site.data.products.enterprise }} license]({% link {{ page.version.version }}/licensing-faqs.md %}#types-of-licenses) on the primary **and** standby clusters. You must use the system virtual cluster on the primary and standby clusters to enable your {{ site.data.products.enterprise }} license.
-- The primary and standby clusters can have different [region topologies]({% link {{ page.version.version }}/topology-patterns.md %}). However, behavior for features that rely on multi-region primitives, such as Region by Row and Region by Table, may be affected.
+- The primary and standby clusters can have different [region topologies]({% link {{ page.version.version }}/topology-patterns.md %}). However, behavior for features that rely on multi-region primitives, such as Region by Row and Region by Table, may be affected. For more information, refer to [Multi-region behavior and best practices]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %}#multi-region-behavior-and-best-practices).
 
 {{site.data.alerts.callout_info}}
 To set up PCR from an existing CockroachDB cluster, which will serve as the primary cluster, refer to [Set up PCR from an existing cluster](#set-up-pcr-from-an-existing-cluster).
diff --git a/src/current/v25.2/physical-cluster-replication-technical-overview.md b/src/current/v25.2/physical-cluster-replication-technical-overview.md
index 1f7fb41af43..897b4ad6205 100644
--- a/src/current/v25.2/physical-cluster-replication-technical-overview.md
+++ b/src/current/v25.2/physical-cluster-replication-technical-overview.md
@@ -68,3 +68,11 @@ When a PCR stream is started with a `readonly` virtual cluster, the job will del
 After reverting any necessary data, the standby virtual cluster is promoted as available to serve traffic and the replication job ends.
 
 For details on failing back to the primary cluster following a failover, refer to [Fail back to the primary cluster]({% link {{ page.version.version }}/failover-replication.md %}#failback).
+
+### Multi-region behavior and best practices
+
+You can use PCR to replicate between clusters with different [cluster regions]({% link {{ page.version.version }}/multiregion-overview.md %}#cluster-regions), [database regions]({% link {{ page.version.version }}/multiregion-overview.md %}#database-regions), and [table localities]({% link {{ page.version.version }}/table-localities.md %}). Mismatched regions and localities do not impact the [failover process]({% link {{ page.version.version }}/failover-replication.md %}) or ability to access clusters after failover, but they do impact [leaseholders]({% link {{ page.version.version }}/architecture/glossary.md %}#leaseholder) and locality-dependent settings.
+
+If the localities on the primary cluster do not match the localities on the standby cluster, the standby cluster may be unable to satisfy replicating locality constraints. For example, if a replicated `REGIONAL BY ROW` table has partitions in `us-east`, `us-central`, and `us-west`, and the standby cluster only has nodes with the locality tags `us-east` and `us-central`, the standby cluster cannot satisfy the `REGIONAL BY ROW` `us-west` partition constraint. Data with unsatisfiable partition constraints is placed in an arbitrary location on the standby cluster, which can cause performance issues in the case of a failover event due to latency between regions.
+
+After a failover event involving clusters in different regions, do not change any configurations on your standby cluster if you plan to [fail back to the original primary cluster]({% link {{ page.version.version }}/failover-replication.md %}#failback). If you plan to start using the standby cluster for long-running production traffic rather than performing a failback, adjust the configurations on the standby cluster to optimize for your traffic. When adjusting configurations, ensure that the new settings can be satisfied on the standby cluster. In particular, ensure that the cluster does not have pinned leaseholders for a region that does not exist on the cluster.
diff --git a/src/current/v25.2/set-up-physical-cluster-replication.md b/src/current/v25.2/set-up-physical-cluster-replication.md
index 427a06a745e..db501867bd9 100644
--- a/src/current/v25.2/set-up-physical-cluster-replication.md
+++ b/src/current/v25.2/set-up-physical-cluster-replication.md
@@ -39,7 +39,7 @@ To set up PCR from an existing CockroachDB cluster, which will serve as the prim
 
 - To set up each cluster, you can follow [Deploy CockroachDB on Premises]({% link {{ page.version.version }}/deploy-cockroachdb-on-premises.md %}). When you initialize the cluster with the [`cockroach init`]({% link {{ page.version.version }}/cockroach-init.md %}) command, you **must** pass the `--virtualized` or `--virtualized-empty` flag. Refer to the cluster creation steps for the [primary cluster](#initialize-the-primary-cluster) and for the [standby cluster](#initialize-the-standby-cluster) for details.
 - The [Deploy CockroachDB on Premises]({% link {{ page.version.version }}/deploy-cockroachdb-on-premises.md %}) tutorial creates a self-signed certificate for each {{ site.data.products.core }} cluster. To create certificates signed by an external certificate authority, refer to [Create Security Certificates using OpenSSL]({% link {{ page.version.version }}/create-security-certificates-openssl.md %}).
 - All nodes in each cluster will need access to the Certificate Authority for the other cluster. Refer to [Manage cluster certificates](#step-3-manage-cluster-certificates-and-generate-connection-strings).
-- The primary and standby clusters can have different [region topologies]({% link {{ page.version.version }}/topology-patterns.md %}). However, behavior for features that rely on multi-region primitives, such as Region by Row and Region by Table, may be affected.
+- The primary and standby clusters can have different [region topologies]({% link {{ page.version.version }}/topology-patterns.md %}). However, behavior for features that rely on multi-region primitives, such as Region by Row and Region by Table, may be affected. For more information, refer to [Multi-region behavior and best practices]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %}#multi-region-behavior-and-best-practices).
 
 ## Step 1. Create the primary cluster
diff --git a/src/current/v25.3/physical-cluster-replication-technical-overview.md b/src/current/v25.3/physical-cluster-replication-technical-overview.md
index ba0a7211b8a..cec24c3edb8 100644
--- a/src/current/v25.3/physical-cluster-replication-technical-overview.md
+++ b/src/current/v25.3/physical-cluster-replication-technical-overview.md
@@ -68,3 +68,11 @@ When a PCR stream is started with a `readonly` virtual cluster, the job will del
 After reverting any necessary data, the standby virtual cluster is promoted as available to serve traffic and the replication job ends.
 
 For details on failing back to the primary cluster following a failover, refer to [Fail back to the primary cluster]({% link {{ page.version.version }}/failover-replication.md %}#failback).
+
+### Multi-region behavior and best practices
+
+You can use PCR to replicate between clusters with different [cluster regions]({% link {{ page.version.version }}/multiregion-overview.md %}#cluster-regions), [database regions]({% link {{ page.version.version }}/multiregion-overview.md %}#database-regions), and [table localities]({% link {{ page.version.version }}/table-localities.md %}). Mismatched regions and localities do not impact the [failover process]({% link {{ page.version.version }}/failover-replication.md %}) or ability to access clusters after failover, but they do impact [leaseholders]({% link {{ page.version.version }}/architecture/glossary.md %}#leaseholder) and locality-dependent settings.
+
+If the localities on the primary cluster do not match the localities on the standby cluster, the standby cluster may be unable to satisfy replicating locality constraints. For example, if a replicated `REGIONAL BY ROW` table has partitions in `us-east`, `us-central`, and `us-west`, and the standby cluster only has nodes with the locality tags `us-east` and `us-central`, the standby cluster cannot satisfy the `REGIONAL BY ROW` `us-west` partition constraint. Data with unsatisfiable partition constraints is placed in an arbitrary location on the standby cluster, which can cause performance issues in the case of a failover event due to latency between regions.
+
+After a failover event involving clusters in different regions, do not change any configurations on your standby cluster if you plan to [fail back to the original primary cluster]({% link {{ page.version.version }}/failover-replication.md %}#failback). If you plan to start using the standby cluster for long-running production traffic rather than performing a failback, adjust the configurations on the standby cluster to optimize for your traffic. When adjusting configurations, ensure that the new settings can be satisfied on the standby cluster. In particular, ensure that the cluster does not have pinned leaseholders for a region that does not exist on the cluster.
diff --git a/src/current/v25.3/set-up-physical-cluster-replication.md b/src/current/v25.3/set-up-physical-cluster-replication.md
index fdb01f48d74..9db23dae9aa 100644
--- a/src/current/v25.3/set-up-physical-cluster-replication.md
+++ b/src/current/v25.3/set-up-physical-cluster-replication.md
@@ -39,7 +39,7 @@ To set up PCR from an existing CockroachDB cluster, which will serve as the prim
 
 - To set up each cluster, you can follow [Deploy CockroachDB on Premises]({% link {{ page.version.version }}/deploy-cockroachdb-on-premises.md %}). When you initialize the cluster with the [`cockroach init`]({% link {{ page.version.version }}/cockroach-init.md %}) command, you **must** pass the `--virtualized` or `--virtualized-empty` flag. Refer to the cluster creation steps for the [primary cluster](#initialize-the-primary-cluster) and for the [standby cluster](#initialize-the-standby-cluster) for details.
 - The [Deploy CockroachDB on Premises]({% link {{ page.version.version }}/deploy-cockroachdb-on-premises.md %}) tutorial creates a self-signed certificate for each {{ site.data.products.core }} cluster. To create certificates signed by an external certificate authority, refer to [Create Security Certificates using OpenSSL]({% link {{ page.version.version }}/create-security-certificates-openssl.md %}).
 - All nodes in each cluster will need access to the Certificate Authority for the other cluster. Refer to [Manage cluster certificates](#step-3-manage-cluster-certificates-and-generate-connection-strings).
-- The primary and standby clusters can have different [region topologies]({% link {{ page.version.version }}/topology-patterns.md %}). However, behavior for features that rely on multi-region primitives, such as Region by Row and Region by Table, may be affected.
+- The primary and standby clusters can have different [region topologies]({% link {{ page.version.version }}/topology-patterns.md %}). However, behavior for features that rely on multi-region primitives, such as Region by Row and Region by Table, may be affected. For more information, refer to [Multi-region behavior and best practices]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %}#multi-region-behavior-and-best-practices).
 
 ## Step 1. Create the primary cluster
diff --git a/src/current/v25.4/physical-cluster-replication-technical-overview.md b/src/current/v25.4/physical-cluster-replication-technical-overview.md
index fdcbc96f4cf..b222ed4a26d 100644
--- a/src/current/v25.4/physical-cluster-replication-technical-overview.md
+++ b/src/current/v25.4/physical-cluster-replication-technical-overview.md
@@ -68,3 +68,11 @@ When a PCR stream is started with a `readonly` virtual cluster, the job will del
 After reverting any necessary data, the standby virtual cluster is promoted as available to serve traffic and the replication job ends.
 
 For details on failing back to the primary cluster following a failover, refer to [Fail back to the primary cluster]({% link {{ page.version.version }}/failover-replication.md %}#failback).
+
+### Multi-region behavior and best practices
+
+You can use PCR to replicate between clusters with different [cluster regions]({% link {{ page.version.version }}/multiregion-overview.md %}#cluster-regions), [database regions]({% link {{ page.version.version }}/multiregion-overview.md %}#database-regions), and [table localities]({% link {{ page.version.version }}/table-localities.md %}). Mismatched regions and localities do not impact the [failover process]({% link {{ page.version.version }}/failover-replication.md %}) or ability to access clusters after failover, but they do impact [leaseholders]({% link {{ page.version.version }}/architecture/glossary.md %}#leaseholder) and locality-dependent settings.
+
+If the localities on the primary cluster do not match the localities on the standby cluster, the standby cluster may be unable to satisfy replicating locality constraints. For example, if a replicated `REGIONAL BY ROW` table has partitions in `us-east`, `us-central`, and `us-west`, and the standby cluster only has nodes with the locality tags `us-east` and `us-central`, the standby cluster cannot satisfy the `REGIONAL BY ROW` `us-west` partition constraint. Data with unsatisfiable partition constraints is placed in an arbitrary location on the standby cluster, which can cause performance issues in the case of a failover event due to latency between regions.
+
+After a failover event involving clusters in different regions, do not change any configurations on your standby cluster if you plan to [fail back to the original primary cluster]({% link {{ page.version.version }}/failover-replication.md %}#failback). If you plan to start using the standby cluster for long-running production traffic rather than performing a failback, adjust the configurations on the standby cluster to optimize for your traffic. When adjusting configurations, ensure that the new settings can be satisfied on the standby cluster. In particular, ensure that the cluster does not have pinned leaseholders for a region that does not exist on the cluster.
diff --git a/src/current/v25.4/set-up-physical-cluster-replication.md b/src/current/v25.4/set-up-physical-cluster-replication.md
index c7ea8a871cf..b1f23e4c8ed 100644
--- a/src/current/v25.4/set-up-physical-cluster-replication.md
+++ b/src/current/v25.4/set-up-physical-cluster-replication.md
@@ -39,7 +39,7 @@ To set up PCR from an existing CockroachDB cluster, which will serve as the prim
 
 - To set up each cluster, you can follow [Deploy CockroachDB on Premises]({% link {{ page.version.version }}/deploy-cockroachdb-on-premises.md %}). When you initialize the cluster with the [`cockroach init`]({% link {{ page.version.version }}/cockroach-init.md %}) command, you **must** pass the `--virtualized` or `--virtualized-empty` flag. Refer to the cluster creation steps for the [primary cluster](#initialize-the-primary-cluster) and for the [standby cluster](#initialize-the-standby-cluster) for details.
 - The [Deploy CockroachDB on Premises]({% link {{ page.version.version }}/deploy-cockroachdb-on-premises.md %}) tutorial creates a self-signed certificate for each {{ site.data.products.core }} cluster. To create certificates signed by an external certificate authority, refer to [Create Security Certificates using OpenSSL]({% link {{ page.version.version }}/create-security-certificates-openssl.md %}).
 - All nodes in each cluster will need access to the Certificate Authority for the other cluster. Refer to [Manage cluster certificates](#step-3-manage-cluster-certificates-and-generate-connection-strings).
-- The primary and standby clusters can have different [region topologies]({% link {{ page.version.version }}/topology-patterns.md %}). However, behavior for features that rely on multi-region primitives, such as Region by Row and Region by Table, may be affected.
+- The primary and standby clusters can have different [region topologies]({% link {{ page.version.version }}/topology-patterns.md %}). However, behavior for features that rely on multi-region primitives, such as Region by Row and Region by Table, may be affected. For more information, refer to [Multi-region behavior and best practices]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %}#multi-region-behavior-and-best-practices).
 
 ## Step 1. Create the primary cluster
diff --git a/src/current/v26.1/physical-cluster-replication-technical-overview.md b/src/current/v26.1/physical-cluster-replication-technical-overview.md
index 1ef94211a9a..1c0cbd99a62 100644
--- a/src/current/v26.1/physical-cluster-replication-technical-overview.md
+++ b/src/current/v26.1/physical-cluster-replication-technical-overview.md
@@ -69,10 +69,10 @@ After reverting any necessary data, the standby virtual cluster is promoted as a
 For details on failing back to the primary cluster following a failover, refer to [Fail back to the primary cluster]({% link {{ page.version.version }}/failover-replication.md %}#failback).
 
-### Multi-region behavior
+### Multi-region behavior and best practices
 
 You can use PCR to replicate between clusters with different [cluster regions]({% link {{ page.version.version }}/multiregion-overview.md %}#cluster-regions), [database regions]({% link {{ page.version.version }}/multiregion-overview.md %}#database-regions), and [table localities]({% link {{ page.version.version }}/table-localities.md %}). Mismatched regions and localities do not impact the [failover process]({% link {{ page.version.version }}/failover-replication.md %}) or ability to access clusters after failover, but they do impact [leaseholders]({% link {{ page.version.version }}/architecture/glossary.md %}#leaseholder) and locality-dependent settings.
 
-If the localities on the primary cluster do not match the localities on the standby cluster, the standby cluster may be unable to satisfy replicating locality constraints. For example, if a replicated regional by row table has partitions in `us-east`, `us-central`, and `us-west`, and the standby cluster only has nodes with the locality tags `us-east` and `us-central`, the standby cluster cannot satisfy the regional by row `us-west` partition constraint. Data with unsatisfiable partition constraints is placed in an arbitrary location on the standby cluster, which can cause performance issues in the case of a failover event due to latency between regions.
+If the localities on the primary cluster do not match the localities on the standby cluster, the standby cluster may be unable to satisfy replicating locality constraints. For example, if a replicated `REGIONAL BY ROW` table has partitions in `us-east`, `us-central`, and `us-west`, and the standby cluster only has nodes with the locality tags `us-east` and `us-central`, the standby cluster cannot satisfy the `REGIONAL BY ROW` `us-west` partition constraint. Data with unsatisfiable partition constraints is placed in an arbitrary location on the standby cluster, which can cause performance issues in the case of a failover event due to latency between regions.
 
 After a failover event involving clusters in different regions, do not change any configurations on your standby cluster if you plan to [fail back to the original primary cluster]({% link {{ page.version.version }}/failover-replication.md %}#failback). If you plan to start using the standby cluster for long-running production traffic rather than performing a failback, adjust the configurations on the standby cluster to optimize for your traffic. When adjusting configurations, ensure that the new settings can be satisfied on the standby cluster. In particular, ensure that the cluster does not have pinned leaseholders for a region that does not exist on the cluster.
\ No newline at end of file
diff --git a/src/current/v26.1/set-up-physical-cluster-replication.md b/src/current/v26.1/set-up-physical-cluster-replication.md
index c7ea8a871cf..b1f23e4c8ed 100644
--- a/src/current/v26.1/set-up-physical-cluster-replication.md
+++ b/src/current/v26.1/set-up-physical-cluster-replication.md
@@ -39,7 +39,7 @@ To set up PCR from an existing CockroachDB cluster, which will serve as the prim
 
 - To set up each cluster, you can follow [Deploy CockroachDB on Premises]({% link {{ page.version.version }}/deploy-cockroachdb-on-premises.md %}). When you initialize the cluster with the [`cockroach init`]({% link {{ page.version.version }}/cockroach-init.md %}) command, you **must** pass the `--virtualized` or `--virtualized-empty` flag. Refer to the cluster creation steps for the [primary cluster](#initialize-the-primary-cluster) and for the [standby cluster](#initialize-the-standby-cluster) for details.
 - The [Deploy CockroachDB on Premises]({% link {{ page.version.version }}/deploy-cockroachdb-on-premises.md %}) tutorial creates a self-signed certificate for each {{ site.data.products.core }} cluster. To create certificates signed by an external certificate authority, refer to [Create Security Certificates using OpenSSL]({% link {{ page.version.version }}/create-security-certificates-openssl.md %}).
 - All nodes in each cluster will need access to the Certificate Authority for the other cluster. Refer to [Manage cluster certificates](#step-3-manage-cluster-certificates-and-generate-connection-strings).
-- The primary and standby clusters can have different [region topologies]({% link {{ page.version.version }}/topology-patterns.md %}). However, behavior for features that rely on multi-region primitives, such as Region by Row and Region by Table, may be affected.
+- The primary and standby clusters can have different [region topologies]({% link {{ page.version.version }}/topology-patterns.md %}). However, behavior for features that rely on multi-region primitives, such as Region by Row and Region by Table, may be affected. For more information, refer to [Multi-region behavior and best practices]({% link {{ page.version.version }}/physical-cluster-replication-technical-overview.md %}#multi-region-behavior-and-best-practices).
 
 ## Step 1. Create the primary cluster
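The locality-mismatch scenario that the new section describes can be sketched in SQL. The following is a minimal, hypothetical example, not part of the patch: the `movr` database, `users` table, and region names are illustrative only. It builds the three-region `REGIONAL BY ROW` table from the example, then shows how, after failing over to a standby whose nodes lack `us-west` locality tags, you might surface unsatisfiable constraints and find or remove a leaseholder pin on the missing region.

```sql
-- On the primary cluster (hypothetical schema): a multi-region database
-- with a REGIONAL BY ROW table partitioned across three regions.
ALTER DATABASE movr SET PRIMARY REGION "us-east";
ALTER DATABASE movr ADD REGION "us-central";
ALTER DATABASE movr ADD REGION "us-west";

CREATE TABLE movr.users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name STRING
) LOCALITY REGIONAL BY ROW;

-- After failover to a standby whose nodes carry only us-east and us-central
-- locality tags, list constraints the cluster cannot currently satisfy.
SELECT * FROM crdb_internal.replication_constraint_stats
WHERE violating_ranges > 0;

-- Find zone configurations that still reference the missing region,
-- including any lease preferences pinned to it.
SELECT target, raw_config_sql
FROM [SHOW ALL ZONE CONFIGURATIONS]
WHERE raw_config_sql LIKE '%us-west%';

-- Only if you are staying on the standby permanently (no failback planned):
-- drop the region that has no nodes so leaseholders are not pinned to it.
ALTER DATABASE movr DROP REGION "us-west";
```

Dropping the missing region follows the section's guidance against pinned leaseholders for a region that does not exist on the cluster; in practice, rows homed in that region would first need to be rehomed before `DROP REGION` succeeds.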