You can connect a local cluster to other Elasticsearch clusters, known as remote clusters. Remote clusters can be located in different datacenters or geographic regions, and contain indices or data streams that can be replicated with cross-cluster replication or searched by a local cluster using cross-cluster search.
With cross-cluster replication, you ingest data to an index on a remote cluster. This leader index is replicated to one or more read-only follower indices on your local cluster. Creating a multi-cluster architecture with cross-cluster replication enables you to configure disaster recovery, bring data closer to your users, or establish a centralized reporting cluster to process reports locally.
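As a sketch of the replication setup (the `follower-index` and `leader-index` names and the `cluster_b` alias are placeholders, and `cluster_b` must already be registered as a remote cluster), a follower index on the local cluster can be created with the create follower API:

```console
PUT /follower-index/_ccr/follow
{
  "remote_cluster": "cluster_b",
  "leader_index": "leader-index"
}
```

Once created, `follower-index` is read-only on the local cluster and tracks writes made to `leader-index` on the remote cluster.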
Cross-cluster search enables you to run a search request against one or more remote clusters. This capability provides each region with a global view of all clusters, allowing you to send a search request from a local cluster and return results from all connected remote clusters.
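For example, assuming remote clusters registered under the placeholder aliases `cluster_one` and `cluster_two`, a single search request can span remote and local indices by prefixing each index name with its cluster alias:

```console
GET /cluster_one:my-index,cluster_two:my-index,my-index/_search
{
  "query": {
    "match_all": {}
  }
}
```

Results from all three indices are merged into a single response, with each hit identifying the cluster it came from.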
Enabling and configuring security is important on both local and remote clusters. When connecting a local cluster to remote clusters, an Elasticsearch superuser (such as the elastic user) on the local cluster gains total read access to the remote clusters. To use cross-cluster replication and cross-cluster search safely, enable security on all connected clusters and configure Transport Layer Security (TLS) on at least the transport level on every node.
Furthermore, a local administrator at the operating system level with sufficient access to Elasticsearch configuration files and private keys can potentially take over a remote cluster. Ensure that your security strategy includes securing local and remote clusters at the operating system level.
To register a remote cluster, connect the local cluster to nodes in the remote cluster using sniff mode (default) or proxy mode. After registering remote clusters, configure privileges for cross-cluster replication and cross-cluster search.
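Remote clusters can be registered dynamically with the cluster settings API. A minimal sketch, assuming a remote cluster whose seed node is reachable at `127.0.0.1:9300` and using the placeholder alias `cluster_one` (sniff mode applies because no mode is specified):

```console
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster_one": {
          "seeds": ["127.0.0.1:9300"]
        }
      }
    }
  }
}
```

Settings applied this way persist across cluster restarts; the same `cluster.remote.*` settings can alternatively be defined statically in elasticsearch.yml.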
In sniff mode, a remote cluster is registered using a name and a list of seed nodes. When the remote cluster is registered, its cluster state is retrieved from one of the seed nodes, and up to three gateway nodes are selected to handle remote cluster requests. This mode requires that the gateway nodes' publish addresses be accessible from the local cluster.
Sniff mode is the default connection mode. Gateway node selection depends on the following criteria:
- version: Remote nodes must be compatible with the cluster they are registered to, similar to the rules for rolling upgrades:
- Any node can communicate with another node on the same major version. For example, 7.0 can talk to any 7.x node.
- Only nodes on the last minor version of a certain major version can communicate with nodes on the following major version. In the 6.x series, 6.8 can communicate with any 7.x node, while 6.7 can only communicate with 7.0.
Version compatibility is symmetric, meaning that if 6.7 can communicate with 7.0, 7.0 can also communicate with 6.7. The following table depicts version compatibility between local and remote nodes.
Version compatibility table
- role: Dedicated master nodes are never selected as gateway nodes.
- attributes: You can tag which nodes should be selected (see remote cluster settings), though tagged nodes must still satisfy the two requirements above.
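As a sketch of attribute-based gateway selection (the attribute name `gateway` is a placeholder), the connecting cluster can be configured to select only remote nodes that carry a given node attribute:

```yaml
# elasticsearch.yml on the local (connecting) cluster:
# only consider remote nodes with the "gateway" attribute as gateway nodes
cluster.remote.node.attr: gateway

# elasticsearch.yml on remote nodes that are eligible gateways:
node.attr.gateway: true
```

Remote nodes without the attribute are then never selected as gateway nodes, even if they satisfy the version and role requirements.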
In proxy mode, a remote cluster is registered using a name and a single proxy address. When you register the remote cluster, a configurable number of socket connections are opened to the proxy address. The proxy is required to route those connections to the remote cluster. Proxy mode does not require remote cluster nodes to have accessible publish addresses.
Proxy mode is not the default connection mode and must be configured explicitly. As with sniff mode gateway nodes, remote connections in proxy mode are subject to the same version compatibility rules as rolling upgrades.
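A minimal proxy-mode registration sketch (the `cluster_two` alias and the proxy address are placeholders):

```console
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster_two": {
          "mode": "proxy",
          "proxy_address": "my-proxy.example.com:9400"
        }
      }
    }
  }
}
```

The local cluster opens its socket connections to `my-proxy.example.com:9400` and relies on the proxy to route them to nodes in the remote cluster.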