Elasticsearch is also available as Docker images. Starting with version 8.0.0, these are based upon a tiny core of essential files. Prior versions used centos:8 as the base image.
This package contains both free and subscription features. Start a 30-day trial to try out all of the features.
Obtaining Elasticsearch for Docker is as simple as issuing a `docker pull` command against the Elastic Docker registry.
Version 8.0.0 of Elasticsearch has not yet been released, so no Docker image is currently available for this version.
To get a three-node Elasticsearch cluster up and running in Docker, you can use Docker Compose:
Version 8.0.0 of Elasticsearch has not yet been released, so a `docker-compose.yml` is not available for this version.
This `docker-compose.yml` file uses the `ES_JAVA_OPTS` environment variable to manually set the heap size to 512MB. We do not recommend using `ES_JAVA_OPTS` in production. See Manually set the heap size.
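For reference, a three-node `docker-compose.yml` along these lines generally looks like the following sketch (the image tag, cluster name, and network name are illustrative, and `es02`/`es03` repeat the `es01` pattern):

```yaml
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.0.0
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  # es02 and es03 are defined the same way, using volumes data02 and data03
  # and listing the other two nodes in discovery.seed_hosts.
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
```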
This sample Docker Compose file brings up a three-node Elasticsearch cluster. Node `es01` listens on `localhost:9200`, and `es02` and `es03` talk to `es01` over a Docker network.
Please note that this configuration exposes port 9200 on all network interfaces. Given how Docker manipulates `iptables` on Linux, this means that your Elasticsearch cluster is publicly accessible, potentially ignoring any firewall settings. If you don’t want to expose port 9200 and instead use a reverse proxy, replace `9200:9200` with `127.0.0.1:9200:9200` in the docker-compose.yml file. Elasticsearch will then only be accessible from the host machine itself.
The Docker named volumes `data01`, `data02`, and `data03` store the node data directories so the data persists across restarts. If they don’t already exist, `docker-compose` creates them when you bring up the cluster.
Make sure Docker Engine is allotted at least 4GiB of memory. In Docker Desktop, you configure resource usage on the Advanced tab in Preferences (macOS) or Settings (Windows).
Docker Compose is not pre-installed with Docker on Linux. See docs.docker.com for installation instructions: Install Compose on Linux
Run `docker-compose` to bring up the cluster:

docker-compose up
Submit a `_cat/nodes` request to see that the nodes are up and running:
curl -X GET "localhost:9200/_cat/nodes?v=true&pretty"
Log messages go to the console and are handled by the configured Docker logging driver.
By default you can access logs with `docker logs`. If you would prefer the Elasticsearch container to write logs to disk, set the `ES_LOG_STYLE` environment variable to `file`. This causes Elasticsearch to use the same logging configuration as other Elasticsearch distribution formats.
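For example, a sketch of such an invocation (the image tag mirrors the one used elsewhere on this page):

```shell
# Write logs to files under the container's logs directory instead of
# the console, by setting ES_LOG_STYLE=file.
docker run -e ES_LOG_STYLE=file docker.elastic.co/elasticsearch/elasticsearch:8.0.0
```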
To stop the cluster, run `docker-compose down`. The data in the Docker volumes is preserved and loaded when you restart the cluster with `docker-compose up`. To delete the data volumes when you bring down the cluster, specify the `-v` option:

docker-compose down -v
The following requirements and recommendations apply when running Elasticsearch in Docker in production.
The `vm.max_map_count` kernel setting must be set to at least `262144` for production use.
How you set
vm.max_map_count depends on your platform:
Linux

The `vm.max_map_count` setting should be set permanently in `/etc/sysctl.conf`:

grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
To apply the setting on a live system, run:
sysctl -w vm.max_map_count=262144
macOS with Docker for Mac
The `vm.max_map_count` setting must be set within the xhyve virtual machine:
From the command line, run:

screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty

Press enter and use `sysctl` to configure `vm.max_map_count`:

sysctl -w vm.max_map_count=262144

To exit the `screen` session, type `Ctrl a d`.
Windows and macOS with Docker Desktop
The `vm.max_map_count` setting must be set via docker-machine:
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
Windows with Docker Desktop WSL 2 backend
The `vm.max_map_count` setting must be set in the docker-desktop container:
wsl -d docker-desktop sysctl -w vm.max_map_count=262144
By default, Elasticsearch runs inside the container as user `elasticsearch` using uid:gid `1000:0`.
One exception is Openshift,
which runs containers using an arbitrarily assigned user ID.
Openshift presents persistent volumes with the gid set to
0, which works without any adjustments.
If you are bind-mounting a local directory or file, it must be readable by the `elasticsearch` user.
In addition, this user must have write access to the config, data and log dirs
(Elasticsearch needs write access to the
config directory so that it can generate a keystore).
A good strategy is to grant group access to gid
0 for the local directory.
For example, to prepare a local directory for storing data through a bind-mount:
mkdir esdatadir
chmod g+rwx esdatadir
chgrp 0 esdatadir
You can also run an Elasticsearch container using both a custom UID and GID. Unless you bind-mount each of the config, data and logs directories, you must pass the command line option `--group-add 0` to `docker run`. This ensures that the user under which Elasticsearch is running is also a member of the root (GID 0) group inside the container.
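Putting this together, a hypothetical invocation with a custom UID/GID might look like the following (the uid:gid values and the bind-mount path are illustrative):

```shell
# Run Elasticsearch as uid 1002, gid 1002, while keeping membership in the
# root (GID 0) group so the packaged config/data/log dirs stay accessible.
docker run --user 1002:1002 --group-add 0 \
  -v /path/to/esdatadir:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch:8.0.0
```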
To check the Docker daemon defaults for ulimits, run:
docker run --rm centos:8 /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'
If needed, adjust them in the Daemon or override them per container.
For example, when using `docker run`, set:

--ulimit nofile=65535:65535
Swapping needs to be disabled for performance and node stability. For information about ways to do this, see Disable swapping.
If you opt for the `bootstrap.memory_lock: true` approach, you also need to define the `memlock: true` ulimit in the Docker Daemon, or explicitly set it for the container as shown in the sample compose file. When using `docker run`, you can specify:
-e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1
The image exposes TCP ports 9200 and 9300. For production clusters, randomizing the published ports with `--publish-all` is recommended, unless you are pinning one container per host.
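As a sketch of the two options (the image tag mirrors the one used elsewhere on this page):

```shell
# Publish the exposed ports (9200, 9300) on random high host ports:
docker run -d --publish-all docker.elastic.co/elasticsearch/elasticsearch:8.0.0

# Or, when pinning one container per host, publish fixed ports explicitly:
docker run -d -p 9200:9200 -p 9300:9300 docker.elastic.co/elasticsearch/elasticsearch:8.0.0
```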
By default, Elasticsearch automatically sizes JVM heap based on a node’s roles and the total memory available to the node’s container. We recommend this default sizing for most production environments. If needed, you can override default sizing by manually setting JVM heap size.
For testing, you can also manually set the heap size using the `ES_JAVA_OPTS` environment variable. For example, to use 16GB, specify `-e ES_JAVA_OPTS="-Xms16g -Xmx16g"` with `docker run`. The `ES_JAVA_OPTS` variable overrides all other JVM options. We do not recommend using `ES_JAVA_OPTS` in production. The docker-compose.yml file above sets the heap size to 512MB.
Pin your deployments to a specific version of the Elasticsearch Docker image. For example, `docker.elastic.co/elasticsearch/elasticsearch:8.0.0`.
You should use a volume bound on
/usr/share/elasticsearch/data for the following reasons:
- The data of your Elasticsearch node won’t be lost if the container is killed
- Elasticsearch is I/O sensitive and the Docker storage driver is not ideal for fast I/O
- It allows the use of advanced Docker volume plugins
If you are using the devicemapper storage driver, do not use the default `loop-lvm` mode. Configure docker-engine to use `direct-lvm`.
When you run in Docker, the Elasticsearch configuration files are loaded from `/usr/share/elasticsearch/config/`. To use custom configuration files, you bind-mount the files over the configuration files in the image.
To use the contents of a file to set an environment variable, suffix the environment
variable name with
_FILE. This is useful for passing secrets such as passwords to Elasticsearch
without specifying them directly.
For example, to set the Elasticsearch bootstrap password from a file, you can bind mount the
file and set the
ELASTIC_PASSWORD_FILE environment variable to the mount location.
For example, if you mount the password file at `/run/secrets/bootstrapPassword.txt`, set `ELASTIC_PASSWORD_FILE=/run/secrets/bootstrapPassword.txt`.
You can also override the default command for the image to pass Elasticsearch configuration parameters as command line options. For example:
docker run <various parameters> bin/elasticsearch -Ecluster.name=mynewclustername
While bind-mounting your configuration files is usually the preferred method in production, you can also create a custom Docker image that contains your configuration.
Create custom config files and bind-mount them over the corresponding files in the Docker image.
For example, to bind-mount `custom_elasticsearch.yml` with `docker run`, specify:

-v full_path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
The container runs Elasticsearch as user `elasticsearch` using uid:gid `1000:0`. Bind mounted host directories and files must be accessible by this user,
and the data and log directories must be writable by this user.
By default, Elasticsearch will auto-generate a keystore file for secure settings. This
file is obfuscated but not encrypted. If you want to encrypt your
secure settings with a password, you must use the
elasticsearch-keystore utility to create a password-protected keystore and
bind-mount it to the container as
/usr/share/elasticsearch/config/elasticsearch.keystore. In order to provide
the Docker container with the password at startup, set the Docker environment variable
`KEYSTORE_PASSWORD` to the value of your password. For example, a `docker run` command might have the following options:

-v full_path_to/elasticsearch.keystore:/usr/share/elasticsearch/config/elasticsearch.keystore
-e KEYSTORE_PASSWORD=mypassword
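A sketch of creating the password-protected keystore with the `elasticsearch-keystore` utility from a local Elasticsearch distribution (run from the distribution's home directory; the `-p` flag asks for a keystore password):

```shell
# Create a keystore and prompt for a password to protect it.
bin/elasticsearch-keystore create -p
```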
In some environments, it might make more sense to prepare a custom image that contains
your configuration. A `Dockerfile` to achieve this might be as simple as:
FROM docker.elastic.co/elasticsearch/elasticsearch:8.0.0
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
You could then build and run the image with:
docker build --tag=elasticsearch-custom .
docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom
Some plugins require additional security permissions. You must explicitly accept them either by:
- Attaching a `tty` when you run the Docker image and allowing the permissions when prompted.
- Inspecting the security permissions and accepting them (if appropriate) by adding the `--batch` flag to the plugin install command.
See Plugin management for more information.
The Elasticsearch Docker image only includes what is required to run Elasticsearch, and does not provide a package manager. It is possible to add additional utilities with a multi-phase Docker build. You must also copy any dependencies, for example shared libraries.
FROM centos:8 AS builder
RUN yum install -y some-package

FROM docker.elastic.co/elasticsearch/elasticsearch:8.0.0
COPY --from=builder /usr/bin/some-utility /usr/bin/
COPY --from=builder /usr/lib/some-lib.so /usr/lib/
You should use `centos:8` as a base in order to avoid incompatibilities. Use `ldd` to list the shared libraries required by a utility.
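For instance, running `ldd` against a dynamically linked binary prints each library dependency and where the loader resolves it (`/bin/sh` is used here purely as an illustrative binary):

```shell
# List the shared libraries a dynamically linked utility depends on,
# so you know which files to COPY into the final image.
ldd /bin/sh
```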
You now have a test Elasticsearch environment set up. Before you start serious development or go into production with Elasticsearch, you must do some additional setup: