Elasticsearch is also available as Docker images. The images use centos:7 as the base image.
These images are free to use under the Elastic license. They contain open source and free commercial features and access to paid commercial features. Start a 30-day trial to try out all of the paid commercial features. See the Subscriptions page for information about Elastic license levels.
Obtaining Elasticsearch for Docker is as simple as issuing a
docker pull command
against the Elastic Docker registry.
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.1.0
Alternatively, you can download other Docker images that contain only features available under the Apache 2.0 license. To download the images, go to www.docker.elastic.co.
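For example, pulling the Apache 2.0-licensed variant of the same version (note the -oss suffix in the image name) looks like this:
docker pull docker.elastic.co/elasticsearch/elasticsearch-oss:7.1.0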
Elasticsearch can be quickly started for development or testing use with the following command:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.1.0
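Once the node is up, a quick way to check that it is responding (the exact fields in the JSON response will vary) is:
curl http://127.0.0.1:9200/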
The vm.max_map_count kernel setting needs to be set to at least 262144 for
production use. Depending on your platform:
Linux
The vm.max_map_count setting should be set permanently in /etc/sysctl.conf:
$ grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
To apply the setting on a live system type:
sysctl -w vm.max_map_count=262144
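You can then verify that the new value is in effect with:
sysctl vm.max_map_count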
macOS with Docker for Mac
The vm.max_map_count setting must be set within the xhyve virtual machine:
$ screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
Just press enter and configure the sysctl setting as you would for Linux:
sysctl -w vm.max_map_count=262144
Windows and macOS with Docker Toolbox
The vm.max_map_count setting must be set via docker-machine:
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
The following example brings up a cluster comprising two Elasticsearch nodes.
To bring up the cluster, use the
docker-compose.yml below and just type:
docker-compose up
docker-compose is not pre-installed with Docker on Linux.
Instructions for installing it can be found on the
Docker Compose webpage.
es01 listens on localhost:9200 and es02 talks to
es01 over a Docker network.
This example also uses
Docker named volumes,
called esdata01 and esdata02, which will be created if not already present.
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.0
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.0
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet

volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local

networks:
  esnet:
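Once the cluster is up, a quick sanity check (assuming the compose file above) is to list the running services:
docker-compose ps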
To stop the cluster, type
docker-compose down. Data volumes will persist,
so it’s possible to start the cluster again with the same data using
docker-compose up.
To destroy the cluster and the data volumes, just type
docker-compose down -v.
To inspect the status of the cluster:
curl http://127.0.0.1:9200/_cat/health
1472225929 15:38:49 docker-cluster green 2 2 4 2 0 0 0 0 - 100.0%
Log messages go to the console and are handled by the configured Docker logging
driver. By default you can access logs with docker logs.
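For example, to follow the logs of the es01 container from the compose example above:
docker logs -f es01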
The image offers several methods for configuring Elasticsearch settings with the
conventional approach being to provide customized files, that is to say
elasticsearch.yml, but it’s also possible to use environment variables to set
options.
For example, to define the cluster name with
docker run you can pass
-e "cluster.name=mynewclustername". Double quotes are required.
Create your custom config file and mount this over the image’s corresponding file.
For example, bind-mounting a
custom_elasticsearch.yml with
docker run can be
accomplished with the parameter:
-v full_path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
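In context, a complete invocation might look like the following, where full_path_to is a placeholder for the absolute path on your host and the single-node setting is borrowed from the development example above:
docker run -p 9200:9200 -e "discovery.type=single-node" -v full_path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml docker.elastic.co/elasticsearch/elasticsearch:7.1.0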
The container runs Elasticsearch as user
elasticsearch using uid:gid
1000:1000. Bind mounted host directories and files, such as
custom_elasticsearch.yml above, need to be accessible by this user. For the data and log dirs, such as
/usr/share/elasticsearch/data, write access is required as well.
Also see the note on uid:gid 1000:1000 in the best practices section below.
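For example, one simple way to make a bind-mounted config file readable by that user is to make it world-readable (adjust to your own security requirements):
chmod 0644 custom_elasticsearch.yml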
In some environments, it may make more sense to prepare a custom image containing
your configuration. A
Dockerfile to achieve this may be as simple as:
FROM docker.elastic.co/elasticsearch/elasticsearch:7.1.0
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
You could then build and try the image with something like:
docker build --tag=elasticsearch-custom .
docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom
Some plugins require additional security permissions. You have to explicitly accept
them either by attaching a
tty when you run the Docker image and accepting yes at
the prompts, or by inspecting the security permissions separately and, if you are
comfortable with them, adding the
--batch flag to the plugin install command.
See Plugin Management documentation
for more details.
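For example, a custom image that installs a plugin non-interactively could use the --batch flag in its Dockerfile; repository-s3 here is just an illustration:
FROM docker.elastic.co/elasticsearch/elasticsearch:7.1.0
RUN bin/elasticsearch-plugin install --batch repository-s3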
We have collected a number of best practices for production use.
Any Docker parameters mentioned below assume the use of
docker run.
By default, Elasticsearch runs inside the container as user
elasticsearch using uid:gid
1000:1000.
One exception is Openshift, which runs containers using an arbitrarily assigned user ID. Openshift will present persistent volumes with the gid set to
0, which will work without any adjustments.
If you are bind-mounting a local directory or file, ensure it is readable by this user, while the data and log dirs additionally require write access. A good strategy is to grant group access to gid
0 for the local directory. As an example, to prepare a local directory for storing data through a bind-mount:
mkdir esdatadir
chmod g+rwx esdatadir
chgrp 0 esdatadir
As a last resort, you can also force the container to mutate the ownership of any bind-mounts used for the data and log dirs through the environment variable
TAKE_FILE_OWNERSHIP. In this case, they will be owned by uid:gid
1000:0, providing read/write access to the Elasticsearch process as required.
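A sketch of such an invocation, with full_path_to/esdatadir as a placeholder for a host directory (setting the variable is what matters; the value true is illustrative):
docker run -e TAKE_FILE_OWNERSHIP=true -v full_path_to/esdatadir:/usr/share/elasticsearch/data docker.elastic.co/elasticsearch/elasticsearch:7.1.0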
It is important to ensure increased ulimits for nofile and nproc are available for the Elasticsearch containers. Verify the init system for the Docker daemon is already setting those to acceptable values and, if needed, adjust them in the Daemon, or override them per container, for example using docker run:
--ulimit nofile=65535:65535
One way of checking the Docker daemon defaults for the aforementioned ulimits is by running:
docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'
Swapping needs to be disabled for performance and node stability. This can be achieved through any of the methods mentioned in the Elasticsearch docs. If you opt for the
bootstrap.memory_lock: true approach, apart from defining it through any of the configuration methods, you will additionally need the
memlock: true ulimit, either defined in the Docker Daemon or specifically set for the container. This is demonstrated above in the docker-compose.yml. If using
docker run, it would be:
-e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1
The image exposes
TCP ports 9200 and 9300. For clusters it is recommended to randomize the
published ports with
--publish-all, unless you are pinning one container per host.
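For example, you could let Docker pick random high ports and then inspect the mapping; the container name es-test is illustrative:
docker run -d --name es-test --publish-all -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.1.0
docker port es-test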
Use the ES_JAVA_OPTS environment variable to set heap size. For example, to use 16GB, use
-e ES_JAVA_OPTS="-Xms16g -Xmx16g" with
docker run.
Pin your deployments to a specific version of the Elasticsearch Docker image, for
example docker.elastic.co/elasticsearch/elasticsearch:7.1.0.
Always use a volume bound on
/usr/share/elasticsearch/data, as shown in the production example, for the following reasons:
- The data of your Elasticsearch node won’t be lost if the container is killed
- Elasticsearch is I/O sensitive and the Docker storage driver is not ideal for fast I/O
- It allows the use of advanced Docker volume plugins
- If you are using the devicemapper storage driver, make sure you are not using
loop-lvm mode. Configure docker-engine to use direct-lvm instead.
- Consider centralizing your logs by using a different logging driver. Also note that the default json-file logging driver is not ideally suited for production use.
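For example, switching a single container to the syslog logging driver (assuming a syslog daemon is available on the host) is a matter of one flag:
docker run --log-driver syslog -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.1.0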
You now have a test Elasticsearch environment set up. Before you start serious development or go into production with Elasticsearch, you must do some additional setup: