The store module allows you to control how index data is stored.
The index can either be stored in-memory (no persistence) or on-disk (the default). In-memory indices provide better performance at the cost of limiting the index size to the amount of available physical memory.
When using a local gateway (the default), file system storage with no in-memory storage is required to maintain index consistency, since the local gateway constructs its state from the local index state of each node.
Another important aspect of memory based storage is that Elasticsearch supports storing the index in memory outside of the JVM heap space, using the memory storage type (see below). This means there is no need for extra large JVM heaps (with their own consequences) just to hold the index in memory.
From version 0.90 onwards, store compression is always enabled.
For versions 0.19.5 to 0.20:
In the mapping, one can configure the _source field to be compressed. The problem with this is that small documents do not compress well on their own, whereas several documents compressed together in a single compression "block" yield a considerably better compression ratio. These versions introduce the ability to compress stored fields using the index.store.compress.stored setting, as well as term vectors using the index.store.compress.tv setting.
The settings can be set on the index level and are dynamic, so they can be changed using the index update settings API. Elasticsearch can handle indices that mix compressed and non-compressed segments. This allows, for example, enabling compression at a later stage in the index lifecycle, and optimizing the index to make use of it (generating new segments that use compression).
The biggest gains over _source level compression will mainly be seen when indexing smaller documents (less than 64k). The price, on the other hand, is that for each document returned, a block needs to be decompressed (though this is fast) in order to extract the document data.
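For example, compression could be enabled on an existing index through the index update settings API and the index then optimized so that new, compressed segments are written. This is only a sketch for the 0.19.5 to 0.20 versions; the index name my_index and the localhost endpoint are placeholders:

    curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
        "index.store.compress.stored": true,
        "index.store.compress.tv": true
    }'

    # rewrite existing segments so they are stored compressed as well
    curl -XPOST 'http://localhost:9200/my_index/_optimize?max_num_segments=1'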
Store level throttling (0.19.5 and above):
The way Lucene, the IR library Elasticsearch uses under the covers, works is by creating immutable segments (up to deletes) and constantly merging them (the merge policy settings control how those merges happen). The merge process happens asynchronously without affecting indexing / search speed. The problem, though, especially on systems with low IO, is that the merge process can be expensive and affect search / index operations simply because the box is now taxed with the extra IO.
The store module allows throttling to be configured for merges (or for all store activity) either on the node level or on the index level. The node level throttling will make sure that, across all the shards allocated on that node, the merge process won't exceed the configured bytes per second. It can be enabled by setting indices.store.throttle.type to merge and setting indices.store.throttle.max_bytes_per_sec to something like 5mb. The node level settings can be changed dynamically using the cluster update settings API. Since 0.90.1 the default is 20mb with type merge.
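As a sketch, the node level throttle could be adjusted at runtime through the cluster update settings API; the host and the values shown are only examples:

    curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
        "transient": {
            "indices.store.throttle.type": "merge",
            "indices.store.throttle.max_bytes_per_sec": "20mb"
        }
    }'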
If specific index level configuration is needed, regardless of the node level settings, it can be set as well using the index.store.throttle.type and index.store.throttle.max_bytes_per_sec settings. The default value for the type is node, meaning the index will throttle based on the node level settings and participate in the global throttling. Both settings can be changed dynamically using the index update settings API.
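For instance, a single index could be given its own merge throttle, independent of the node level one. A minimal sketch, with a hypothetical index named my_index:

    curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
        "index.store.throttle.type": "merge",
        "index.store.throttle.max_bytes_per_sec": "5mb"
    }'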
The following sections list all the different storage types supported.
File system based storage is the default storage used. There are different implementations, or storage types, and the best one for the operating environment is chosen automatically: mmapfs on Solaris / Windows 64bit, simplefs on Windows 32bit, and niofs for the rest.
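The automatically chosen type can also be overridden per index with the index.store.type setting, for example at index creation time. This is only a sketch; the index name and host are placeholders:

    curl -XPUT 'http://localhost:9200/my_index' -d '{
        "settings": {
            "index.store.type": "niofs"
        }
    }'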
The following are the different file system based storage types:
The simplefs type is a straightforward implementation of file system storage (maps to Lucene SimpleFsDirectory) using a random access file. This implementation has poor concurrent performance (multiple threads will bottleneck). It is usually better to use niofs when you need index persistence.
The niofs type stores the shard index on the file system (maps to Lucene NIOFSDirectory) using NIO. It allows multiple threads to read from the same file concurrently. It is not recommended on Windows because of a bug in the SUN Java implementation.
The mmapfs type stores the shard index on the file system (maps to Lucene MMapDirectory) by mapping a file into memory (mmap). Memory mapping uses up a portion of the virtual memory address space in your process equal to the size of the file being mapped. Before using this type, be sure you have plenty of virtual address space.
The memory type stores the index in main memory.
There are also node level settings that control the caching of buffers (important when using direct buffers):
cache.memory.direct: Should the memory be allocated outside of the JVM heap. Defaults to true.
cache.memory.small_buffer_size: The small buffer size, defaults to 1kb.
cache.memory.large_buffer_size: The large buffer size, defaults to 1mb.
cache.memory.small_cache_size: The small buffer cache size, defaults to 10mb.
cache.memory.large_cache_size: The large buffer cache size, defaults to 500mb.
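For example, the buffer cache could be tuned in elasticsearch.yml and an index created with its shards held in main memory. This is only a sketch; the index name, host, and values are assumptions rather than recommendations:

    # elasticsearch.yml (node level buffer cache settings)
    cache.memory.direct: true
    cache.memory.large_cache_size: 1gb

    # create an index whose shards are stored in main memory
    curl -XPUT 'http://localhost:9200/my_index' -d '{
        "settings": {
            "index.store.type": "memory"
        }
    }'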