The images are available in three different configurations or "flavors". The
basic flavor, which is the default, ships with X-Pack Basic features
pre-installed and automatically activated with a free license. The platinum
flavor features all X-Pack functionality under a 30-day trial license. The oss
flavor does not include X-Pack, and contains only open-source Elasticsearch.
X-Pack Security is enabled in the platinum
image. To access your cluster, it’s necessary to set an initial password for the
elastic user. The initial password can be set at start-up time via the
ELASTIC_PASSWORD environment variable:
docker run -e ELASTIC_PASSWORD=MagicWord docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.2
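To check that the password took effect, you can, for example, authenticate against the REST API as the elastic user. This sketch assumes port 9200 has been published to the host:

docker run -p 9200:9200 -e ELASTIC_PASSWORD=MagicWord docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.2
# From another shell, once the node is up:
curl -u elastic:MagicWord http://localhost:9200/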
The platinum image includes a trial license for 30 days. After that, you
can obtain one of the available
subscriptions or revert to a Basic license. The Basic license is free and
includes a selection of X-Pack features.
Obtaining Elasticsearch for Docker is as simple as issuing a
docker pull command
against the Elastic Docker registry.
Docker images can be retrieved with the following commands:
docker pull docker.elastic.co/elasticsearch/elasticsearch:6.1.2
docker pull docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.2
docker pull docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.2
Elasticsearch can be quickly started for development or testing use with the following command:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.1.2
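Once the node is up, you can check that it responds, for example with:

curl http://localhost:9200/_cat/health
# The default basic image does not enable X-Pack Security, so no credentials are needed.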
The vm.max_map_count kernel setting needs to be set to at least 262144.
Depending on your platform:

Linux

The vm.max_map_count setting should be set permanently in /etc/sysctl.conf:
$ grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
To apply the setting on a live system, type:
sysctl -w vm.max_map_count=262144
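You can confirm the value afterwards, for example with:

sysctl vm.max_map_count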
macOS with Docker for Mac
The vm.max_map_count setting must be set within the xhyve virtual machine:
$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
Log in with root and no password. Then configure the
sysctl setting as you would for Linux:
sysctl -w vm.max_map_count=262144
Windows and macOS with Docker Toolbox
The vm.max_map_count setting must be set via docker-machine:
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
The following example brings up a cluster comprising two Elasticsearch nodes.
To bring up the cluster, use the
docker-compose.yml below and just type:

docker-compose up
docker-compose is not pre-installed with Docker on Linux.
Instructions for installing it can be found on the
Docker Compose webpage.
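As an illustrative sketch, one common way to install it on Linux is to download a release binary; the version number below is only an example, so check the releases page for the current one:

sudo curl -L "https://github.com/docker/compose/releases/download/1.18.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# Confirm the binary is on the PATH and working:
docker-compose --version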
The node elasticsearch listens on localhost:9200 while elasticsearch2 talks to
elasticsearch over a Docker network.

This example also uses
Docker named volumes,
called esdata1 and esdata2, which will be created if not already present.
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.1.2
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.1.2
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet

volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local

networks:
  esnet:
To stop the cluster, type
docker-compose down. Data volumes will persist, so
it’s possible to start the cluster again with the same data using
docker-compose up. To destroy the cluster and the data volumes, just type
docker-compose down -v.
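Once the cluster is up, you can verify that both nodes have joined it, for example by querying the _cat/nodes API on the published port:

curl 'http://localhost:9200/_cat/nodes?v'
# Expect two entries, one per node (elasticsearch and elasticsearch2).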
The image offers several methods for configuring Elasticsearch settings, with the
conventional approach being to provide customized files, that is to say,
elasticsearch.yml. It’s also possible to use environment variables to set options.
For example, to define the cluster name with
docker run you can pass
-e "cluster.name=mynewclustername". Double quotes are required.
Create your custom config file and mount this over the image’s corresponding file.
For example, bind-mounting a custom_elasticsearch.yml with
docker run can be
accomplished with the parameter:

-v full_path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
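Putting this together, a minimal sketch of such a run (the local file name custom_elasticsearch.yml is only an example) could look like:

docker run -p 9200:9200 \
  -v "$PWD"/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  docker.elastic.co/elasticsearch/elasticsearch:6.1.2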
The container runs Elasticsearch as user
elasticsearch using uid:gid 1000:1000.
Bind-mounted host directories and files, such as custom_elasticsearch.yml above,
need to be accessible by this user. For the data and log dirs, such as
/usr/share/elasticsearch/data, write access is required as well.
Also see note 1 below.
In some environments, it may make more sense to prepare a custom image containing
your configuration. A
Dockerfile to achieve this may be as simple as:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.1.2
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
You could then build and try the image with something like:
docker build --tag=elasticsearch-custom .
docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom
Options can be passed as command-line options to the Elasticsearch process by overriding the default command for the image. For example:
docker run <various parameters> bin/elasticsearch -Ecluster.name=mynewclustername
We have collected a number of best practices for production use.
Any Docker parameters mentioned below assume the use of docker run.
By default, Elasticsearch runs inside the container as user elasticsearch using uid:gid 1000:1000.

One exception is Openshift, which runs containers using an arbitrarily assigned user ID. Openshift will present persistent volumes with the gid set to
0, which will work without any adjustments.
If you are bind-mounting a local directory or file, ensure it is readable by this user, while the data and log dirs additionally require write access. A good strategy is to grant group access to gid
1000 or 0 for the local directory. As an example, to prepare a local directory for storing data through a bind-mount:
mkdir esdatadir
chmod g+rwx esdatadir
chgrp 1000 esdatadir
As a last resort, you can also force the container to mutate the ownership of any bind-mounts used for the data and log dirs through the environment variable
TAKE_FILE_OWNERSHIP; in this case, they will be owned by uid:gid
1000:0, providing read/write access to the Elasticsearch process as required.
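As a sketch, assuming any non-empty value of the variable triggers the ownership change, and with the esdatadir path from the example above:

docker run -e TAKE_FILE_OWNERSHIP=true \
  -v /full/path/to/esdatadir:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch:6.1.2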
It is important to ensure increased ulimits for nofile and nproc are available for the Elasticsearch containers. Verify the init system for the Docker daemon is already setting those to acceptable values and, if needed, adjust them in the Daemon, or override them per container, for example using docker run.
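As an illustrative sketch of a per-container override (65536 open files is a common baseline for Elasticsearch; the nproc value is only an example to adjust to your workload):

docker run --ulimit nofile=65536:65536 --ulimit nproc=4096 docker.elastic.co/elasticsearch/elasticsearch:6.1.2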
One way of checking the Docker daemon defaults for the aforementioned ulimits is by running:
docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'
Swapping needs to be disabled for performance and node stability. This can be achieved through any of the methods mentioned in the Elasticsearch docs. If you opt for the
bootstrap.memory_lock: true approach, apart from defining it through any of the configuration methods, you will additionally need the
memlock: true ulimit, either defined in the Docker Daemon or specifically set for the container. This is demonstrated above in the docker-compose.yml. If using docker run:

-e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1
The image exposes
TCP ports 9200 and 9300. For clusters it is recommended to randomize the
published ports with
--publish-all, unless you are pinning one container per host.
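For example, you could let Docker assign the host ports and then inspect the mapping; the container name es1 here is only an illustrative choice:

docker run -d --name es1 --publish-all docker.elastic.co/elasticsearch/elasticsearch:6.1.2
# Show which host ports were mapped to the container's 9200 and 9300:
docker port es1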
Use the ES_JAVA_OPTS environment variable to set heap size. For example, to use 16GB use
-e ES_JAVA_OPTS="-Xms16g -Xmx16g" with docker run.
Pin your deployments to a specific version of the Elasticsearch Docker image. For example, docker.elastic.co/elasticsearch/elasticsearch:6.1.2.
Always use a volume bound on
/usr/share/elasticsearch/data, as shown in the production example, for the following reasons:
- The data of your elasticsearch node won’t be lost if the container is killed
- Elasticsearch is I/O sensitive and the Docker storage driver is not ideal for fast I/O
- It allows the use of advanced Docker volume plugins
If you are using the devicemapper storage driver, make sure you are not using
loop-lvm mode. Configure docker-engine to use direct-lvm instead.
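As a sketch, on Docker 17.06 or later one way to do this is via /etc/docker/daemon.json; the block device path is only an example and must point at a dedicated, empty device:

{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/xvdf"
  ]
}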
- Consider centralizing your logs by using a different logging driver. Also note that the default json-file logging driver is not ideally suited for production use.
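For example, a minimal sketch of switching a single container to the syslog driver (the address is only an example endpoint):

docker run --log-driver syslog --log-opt syslog-address=udp://localhost:514 docker.elastic.co/elasticsearch/elasticsearch:6.1.2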