The first host you install ECE on requires the ports for all roles to be open, including the ports for the coordinator, allocator, director, and proxy roles. After your initial ECE installation is up, only the ports for the roles that the host continues to hold need to remain open.
Open these ports for outbound traffic:

|Host role|Outbound ports|Purpose|
|---|---|---|
|All|80|Installation script and docker.elastic.co Docker registry access (HTTP)|
|All|443|Installation script and docker.elastic.co Docker registry access (HTTPS)|
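Before running the installer, a quick pre-flight sketch like the following (assuming a Linux host with bash and GNU `timeout`) can confirm that outbound access to the registry is not blocked:

```shell
#!/usr/bin/env bash
# Pre-install connectivity sketch: confirm outbound access to the
# docker.elastic.co registry over HTTP (80) and HTTPS (443).
check_port() {
  # $1 = host, $2 = port; succeeds if a TCP connection can be opened
  timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

for port in 80 443; do
  if check_port docker.elastic.co "$port"; then
    echo "docker.elastic.co:$port reachable"
  else
    echo "docker.elastic.co:$port blocked"
  fi
done
```

If either port reports as blocked, fix the outbound firewall or proxy rules before starting the installation.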
When there are multiple hosts for each role, open these ports for inbound traffic:

- Installation and troubleshooting SSH access only (TCP)
- Admin API access (HTTP/HTTPS)
- Elasticsearch (transport client/transport client with TLS/SSL), also required by load balancers
- Cloud UI console to API (HTTP/HTTPS)
- ZooKeeper ensemble discovery/joining (TCP)
- Client forwarder to ZooKeeper, one port per director (TLS tunnels)
- Elasticsearch cluster to cluster (HTTPS/Node Transport 6.x+/TLS 6.x+)
- Connections to the initial coordinator from allocators and proxies, one port per coordinator, up to five (TCP)
- Kibana to the services forwarder (HTTP)
- Kibana and Elasticsearch (HTTP via TLS tunnel)
- Constructor to Elasticsearch cluster (HTTPS)
- Elasticsearch (HTTPS/Transport Client TLS)

In addition to these ports, open 12898-12908 and 13898-13908 on the director hosts for ZooKeeper leader and election activity.
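On director hosts managed by firewalld, the ZooKeeper leader and election ranges noted above could be opened with a fragment like this (a sketch, not a complete rule set; the `public` zone is an assumption, so adjust it to your environment):

```shell
# Sketch for firewalld-managed director hosts: open the ZooKeeper
# leader and election port ranges (12898-12908 and 13898-13908).
# Requires root and a running firewalld; the "public" zone is an assumption.
if command -v firewall-cmd >/dev/null 2>&1; then
  firewall-cmd --permanent --zone=public --add-port=12898-12908/tcp
  firewall-cmd --permanent --zone=public --add-port=13898-13908/tcp
  firewall-cmd --reload
else
  echo "firewall-cmd not found; apply equivalent rules with your firewall tool"
fi
```

Hosts that use a different firewall (iptables, nftables, security groups) need equivalent rules for the same ranges.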
If you have IP filtering set up for your deployment, make sure to create rule sets with the IP addresses that you need. All other inbound traffic is blocked. Internal deployment traffic between Kibana instances, APM Servers, and the Elasticsearch clusters is allowed automatically.
A typical ECE installation should be contained within a single data center. We recommend that ECE installations not span different data centers, due to variations in networking latency and bandwidth that cannot be controlled.
Installation of ECE across multiple data centers might be feasible with sufficiently low latency and high bandwidth, with some restrictions around what we can support. Based on our experience with our hosted Elastic Cloud service, the following is required:
- A typical network latency between the data centers of less than 10 ms round-trip time, as measured by ping
- A network bandwidth of at least 10 Gbps
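The latency requirement can be checked with a sketch like the following, where `other-dc-host` is a placeholder for a real host in the remote data center:

```shell
# Sketch: check a measured average round-trip time against the 10 ms
# cross-data-center guideline. Succeeds when the RTT is under 10 ms.
rtt_within_limit() {
  awk -v rtt="$1" 'BEGIN { exit !(rtt < 10) }'
}

# Measure the average RTT to a host in the other data center, e.g.:
#   avg_rtt=$(ping -c 5 -q other-dc-host | awk -F/ 'END { print $5 }')
# ("other-dc-host" is a placeholder for a real host name.)
avg_rtt=4.2   # example value in milliseconds
if rtt_within_limit "$avg_rtt"; then
  echo "RTT ${avg_rtt} ms: OK"
else
  echo "RTT ${avg_rtt} ms: too high for a multi-DC ECE installation"
fi
```

Run the measurement at different times of day, since latency that only occasionally exceeds the guideline can still cause the disruptions described below.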
If you choose to deploy a single ECE installation across multiple data centers, you might need to contend with additional disruptions due to bandwidth or latency issues. Both ECE and Elasticsearch are designed to be resilient to networking issues, but this resiliency is intended to handle exceptions and should not be depended on as part of normal operations. If Elastic determines during a support case that an issue is related to an installation across multiple data centers, the recommended resolution will be to consolidate your installation into a single data center, with further support limited until consolidation is complete.