Elasticsearch requires very little configuration to get started, but there are a number of items which must be considered before using your cluster in production:

- Path settings
- Cluster name setting
- Node name setting
- Network host setting
- Discovery and cluster formation settings
- Heap size settings
- JVM heap dump path setting
- GC logging settings
- Temporary directory settings
- JVM fatal error log setting
- Cluster backups
Our Elastic Cloud service configures these items automatically, making your cluster production-ready by default.
Path settings

Elasticsearch writes the data you index to indices and data streams to a data
directory. Elasticsearch writes its own application logs, which contain information about
cluster health and operations, to a logs directory.
In production, we strongly recommend you set the path.data and path.logs in
elasticsearch.yml to locations outside of $ES_HOME. Supported path.data and
path.logs values vary by platform:
Linux and macOS installations support Unix-style paths:
path:
  data: /var/data/elasticsearch
  logs: /var/log/elasticsearch
Windows installations support DOS paths with escaped backslashes:
path: data: "C:\\Elastic\\Elasticsearch\\data" logs: "C:\\Elastic\\Elasticsearch\\logs"
If needed, you can specify multiple paths in
path.data. Elasticsearch stores the node’s
data across all provided paths but keeps each shard’s data on the same path.
Elasticsearch does not balance shards across a node’s data paths. High disk usage in a single path can trigger a high disk usage watermark for the entire node. If triggered, Elasticsearch will not add shards to the node, even if the node’s other paths have available disk space. If you need additional disk space, we recommend you add a new node rather than additional data paths.
Linux and macOS installations support multiple Unix-style paths in path.data:
path:
  data:
    - /mnt/elasticsearch_1
    - /mnt/elasticsearch_2
    - /mnt/elasticsearch_3
Windows installations support multiple DOS paths in path.data:
path: data: - "C:\\Elastic\\Elasticsearch_1" - "E:\\Elastic\\Elasticsearch_1" - "F:\\Elastic\\Elasticsearch_3"
Cluster name setting
A node can only join a cluster when it shares its
cluster.name with all the
other nodes in the cluster. The default name is
elasticsearch, but you should
change it to an appropriate name that describes the purpose of the cluster.
Do not reuse the same cluster names in different environments. Otherwise, nodes might join the wrong cluster.
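For example, in elasticsearch.yml (logging-prod is an illustrative name):

cluster.name: logging-prod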
Node name setting
Elasticsearch uses node.name as a human-readable identifier for a
particular instance of Elasticsearch. This name is included in the response
of many APIs. The node name defaults to the hostname of the machine when
Elasticsearch starts, but can be configured explicitly in elasticsearch.yml:
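For example (node-1 is an illustrative name):

node.name: node-1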
Network host setting
By default, Elasticsearch binds only to loopback addresses such as 127.0.0.1 and
[::1]. This binding is sufficient to run a single development node on a
server. In fact, more than one node can be started from the same $ES_HOME
location on a single server. This setup can be useful for testing Elasticsearch's
ability to form clusters, but it is not a configuration recommended for
production.
To form a cluster with nodes on other servers, your
node will need to bind to a non-loopback address. While there are many
network settings, usually all you need to configure is network.host:
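For example (the address is illustrative):

network.host: 192.168.1.10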
The network.host setting also understands some special values such as
_local_, _site_, _global_ and modifiers like :ip4 and :ip6. See
Special values for network.host.
When you provide a custom setting for network.host,
Elasticsearch assumes that you are moving from development mode to production
mode, and upgrades a number of system startup checks from warnings to
exceptions. See the differences between development and production modes.
Discovery and cluster formation settings
Configure two important discovery and cluster formation settings before going to production so that nodes in the cluster can discover each other and elect a master node.
Out of the box, without any network configuration, Elasticsearch will bind to
the available loopback addresses and scan local ports 9300 to 9305 to
connect with other nodes running on the same server. This behavior provides an
auto-clustering experience without having to do any configuration.
When you want to form a cluster with nodes on other hosts, use the
discovery.seed_hosts setting. This setting
provides a list of other nodes in the cluster
that are master-eligible and likely to be live and contactable to seed
the discovery process. The setting
accepts a YAML sequence or array of the addresses of all the master-eligible
nodes in the cluster. Each address can be either an IP address or a hostname
that resolves to one or more IP addresses via DNS.
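For example, a minimal sketch in elasticsearch.yml (the addresses and hostname are illustrative):

discovery.seed_hosts:
  - 192.168.1.10:9300
  - 192.168.1.11
  - seeds.mydomain.com
  - "[0:0:0:0:0:ffff:c0a8:10c]:9301"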
The port is optional and defaults to 9300, but this default can be overridden.
If a hostname resolves to multiple IP addresses, the node will attempt to discover other nodes at all resolved addresses.
IPv6 addresses must be enclosed in square brackets.
If your master-eligible nodes do not have fixed names or addresses, use an alternative hosts provider to find their addresses dynamically.
When you start an Elasticsearch cluster for the first time, a cluster bootstrapping step determines the set of master-eligible nodes whose votes are counted in the first election. In development mode, with no discovery settings configured, this step is performed automatically by the nodes themselves.
Because auto-bootstrapping is inherently
unsafe, when starting a new cluster in production
mode, you must explicitly list the master-eligible nodes whose votes should be
counted in the very first election. You set this list using the cluster.initial_master_nodes setting.
After the cluster forms successfully for the first time, remove the
cluster.initial_master_nodes setting from each node's
configuration. Do not use this setting when
restarting a cluster or adding a new node to an existing cluster.
Identify the initial master nodes by their node.name, which defaults to the hostname. Make sure that the values in cluster.initial_master_nodes match the node.name values exactly.
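For example, a sketch with illustrative node names:

cluster.initial_master_nodes:
  - master-node-a
  - master-node-b
  - master-node-c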
Heap size settings
By default, Elasticsearch tells the JVM to use a heap with a minimum and maximum size of 1 GB. When moving to production, it is important to configure heap size to ensure that Elasticsearch has enough heap available.
Elasticsearch will assign the entire heap specified in
jvm.options via the
Xms (minimum heap size) and
Xmx (maximum heap size) settings. These two settings must be equal to each other.
The value for these settings depends on the amount of RAM available on your server:
Set Xms and Xmx to no more than 50% of your physical RAM. Elasticsearch requires memory for purposes other than the JVM heap and it is important to leave space for this. For instance, Elasticsearch uses off-heap buffers for efficient network communication, relies on the operating system's filesystem cache for efficient access to files, and the JVM itself requires some memory too. It is normal to observe the Elasticsearch process using more memory than the limit configured with the Xmx setting.
Set Xms and Xmx to no more than the threshold that the JVM uses for compressed object pointers (compressed oops). The exact threshold varies but is near 32 GB. You can verify that you are under the threshold by looking for a line in the logs like the following:
heap size [1.9gb], compressed ordinary object pointers [true]
Ideally, set Xms and Xmx to no more than the threshold for zero-based compressed oops. The exact threshold varies but 26 GB is safe on most systems and can be as large as 30 GB on some systems. You can verify that you are under this threshold by starting Elasticsearch with the JVM options
-XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode and looking for a line like the following:
heap address: 0x000000011be00000, size: 27648 MB, zero based Compressed Oops
This line shows that zero-based compressed oops are enabled. If zero-based compressed oops are not enabled, you’ll see a line like the following instead:
heap address: 0x0000000118400000, size: 28672 MB, Compressed Oops with base: 0x00000001183ff000
The more heap available to Elasticsearch, the more memory it can use for its internal caches, but the less memory it leaves available for the operating system to use for the filesystem cache. Also, larger heaps can cause longer garbage collection pauses.
Here is an example of how to set the heap size via a file in
jvm.options.d, which is the preferred method for configuring the heap size for
production deployments.
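A minimal sketch, assuming a file such as $ES_HOME/config/jvm.options.d/heap.options (the filename and the 2 GB value are illustrative):

# Set the minimum and maximum heap to the same value
-Xms2g
-Xmx2g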
It is also possible to set the heap size via the ES_JAVA_OPTS environment
variable. This is generally discouraged for production deployments but is useful
for testing because it overrides all other means of setting JVM options.
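For example (an illustrative 2 GB heap, started from the installation directory):

ES_JAVA_OPTS="-Xms2g -Xmx2g" ./bin/elasticsearch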
JVM heap dump path setting
By default, Elasticsearch configures the JVM to dump the heap on out of
memory exceptions to the default data directory. On RPM and
Debian packages, the data directory is /var/lib/elasticsearch. On
Linux, macOS, and Windows distributions, the
data directory is located under the root of the Elasticsearch installation.
If this path is not suitable for receiving heap dumps, modify the
-XX:HeapDumpPath=... entry in jvm.options:
- If you specify a directory, the JVM will generate a filename for the heap dump based on the PID of the running instance.
- If you specify a fixed filename instead of a directory, the file must not exist when the JVM needs to perform a heap dump on an out of memory exception. Otherwise, the heap dump will fail.
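For example, a sketch placed in a jvm.options.d file (the directory is illustrative and must exist and be writable by the user Elasticsearch runs as):

# Write heap dumps to a dedicated directory; the JVM names the file after the PID
-XX:HeapDumpPath=/var/dumps/elasticsearch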
GC logging settings
By default, Elasticsearch enables garbage collection (GC) logs. These are configured in
jvm.options and output to the same default location as
the Elasticsearch logs. The default configuration rotates the logs every 64 MB and
can consume up to 2 GB of disk space.
You can reconfigure JVM logging using the command line options described in
JEP 158: Unified JVM Logging. Unless you
change the default
jvm.options file directly, the Elasticsearch default
configuration is applied in addition to your own settings. To disable the
default configuration, first disable logging by supplying the
-Xlog:disable option, then supply your own command line options. This
disables all JVM logging, so be sure to review the available options
and enable everything that you require.
To see further options not contained in the original JEP, see Enable Logging with the JVM Unified Logging Framework.
Change the default GC log output location to /opt/my-app/gc.log by creating
$ES_HOME/config/jvm.options.d/gc.options with some sample options:
# Turn off all previous logging configurations
-Xlog:disable

# Default settings from JEP 158, but with `utctime` instead of `uptime` to match the next line
-Xlog:all=warning:stderr:utctime,level,tags

# Enable GC logging to a custom location with a variety of options
-Xlog:gc*,gc+age=trace,safepoint:file=/opt/my-app/gc.log:utctime,pid,tags:filecount=32,filesize=64m
Configure an Elasticsearch Docker container to send GC debug logs to
standard error (
stderr). This lets the container orchestrator
handle the output. If using the ES_JAVA_OPTS environment variable, specify:
MY_OPTS="-Xlog:disable -Xlog:all=warning:stderr:utctime,level,tags -Xlog:gc=debug:stderr:utctime"
docker run -e ES_JAVA_OPTS="$MY_OPTS" # etc
Temporary directory settings
By default, Elasticsearch uses a private temporary directory that the startup script creates immediately below the system temporary directory.
On some Linux distributions, a system utility will clean files and directories
from /tmp if they have not been recently accessed. This behavior can lead to
the private temporary directory being removed while Elasticsearch is running if
features that require the temporary directory are not used for a long time.
Removing the private temporary directory causes problems if a feature that
requires this directory is subsequently used.
If you install Elasticsearch using the .deb or .rpm packages and run it
under systemd, the private temporary directory that Elasticsearch uses
is excluded from periodic cleanup.
If you intend to run the
.tar.gz distribution on Linux or macOS for
an extended period, consider creating a dedicated temporary
directory for Elasticsearch that is not under a path that will have old files
and directories cleaned from it. This directory should have permissions set
so that only the user that Elasticsearch runs as can access it. Then, set the
$ES_TMPDIR environment variable to point to this directory before starting Elasticsearch.
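A minimal sketch (the path is illustrative):

# Create a dedicated temporary directory accessible only to the Elasticsearch user
mkdir -p /opt/es-tmp
chmod 700 /opt/es-tmp
export ES_TMPDIR=/opt/es-tmp
./bin/elasticsearch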
JVM fatal error log setting
By default, Elasticsearch configures the JVM to write fatal error logs
to the default logging directory. On RPM and Debian packages,
this directory is
/var/log/elasticsearch. On Linux, macOS, and Windows distributions, the
directory is located under the root of the Elasticsearch installation.
These are logs produced by the JVM when it encounters a fatal error, such as a
segmentation fault. If this path is not suitable for receiving logs, modify the
-XX:ErrorFile=... entry in jvm.options.
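For example, a sketch in a jvm.options.d file (the path is illustrative; %p expands to the process ID):

# Write JVM fatal error logs to the Elasticsearch logging directory
-XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log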
Cluster backups

You cannot back up an Elasticsearch cluster by simply copying the data directories of all of its nodes. Elasticsearch may be making changes to the contents of its data directories while it is running; copying its data directories cannot be expected to capture a consistent picture of their contents. If you try to restore a cluster from such a backup, it may fail and report corruption and/or missing files. Alternatively, it may appear to have succeeded though it silently lost some of its data. The only reliable way to back up a cluster is by using the snapshot and restore functionality.
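As a sketch, you might register a shared filesystem snapshot repository and then take a snapshot (the repository name, snapshot name, and location are illustrative; an fs repository also requires the location to be registered in the path.repo setting on every node):

PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/es_backups"
  }
}

PUT _snapshot/my_backup/snapshot_1?wait_for_completion=true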