Field data

The field data cache is used mainly when sorting on or faceting on a field. It loads all the field values into memory in order to provide fast document-based access to those values. The field data cache can be expensive to build for a field, so it is recommended to have enough memory to allocate it, and to keep it loaded.
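For illustration, a search request that sorts on a field is the kind of operation that loads that field's values into the field data cache. The index name my_index and the field name timestamp below are hypothetical:

    curl -XGET 'http://localhost:9200/my_index/_search?pretty' -d '{
      "query": { "match_all": {} },
      "sort": [ { "timestamp": { "order": "desc" } } ]
    }'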

The amount of memory used for the field data cache can be controlled using indices.fielddata.cache.size. Note: reloading field data that does not fit into your cache will be expensive and perform poorly.

indices.fielddata.cache.size
    The maximum size of the field data cache, e.g. 30% of node heap space, or an absolute value, e.g. 12GB. Defaults to unbounded.

indices.fielddata.cache.expire
    A time-based setting that expires field data after a certain period of inactivity. Defaults to -1. For example, it can be set to 5m for a 5 minute expiry.
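As a minimal sketch, both cache settings above can be set in elasticsearch.yml on each node; the values shown here are examples only, not recommendations:

    # elasticsearch.yml -- example values only
    indices.fielddata.cache.size: 30%
    indices.fielddata.cache.expire: 5m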

Field data circuit breaker

The field data circuit breaker allows Elasticsearch to estimate the amount of memory a field will require to be loaded into memory. It can then prevent the field data from being loaded by raising an exception. By default the limit is configured to 60% of the maximum JVM heap. It can be configured with the following parameters:

indices.fielddata.breaker.limit
    The maximum size of estimated field data to allow loading. Defaults to 60% of the maximum JVM heap.

indices.fielddata.breaker.overhead
    A constant that all field data estimations are multiplied by to determine a final estimate. Defaults to 1.03.

Both indices.fielddata.breaker.limit and indices.fielddata.breaker.overhead can be changed dynamically using the cluster update settings API.
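For example, a cluster update settings request along the following lines raises the breaker limit; the host and the 70% value are assumptions for illustration:

    curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
      "persistent": {
        "indices.fielddata.breaker.limit": "70%"
      }
    }'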

Monitoring field data

You can monitor memory usage for field data, as well as the field data circuit breaker, using the Nodes Stats API.
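For instance, a request like the following returns per-node field data memory usage; the host is an assumption:

    curl -XGET 'http://localhost:9200/_nodes/stats/indices/fielddata?pretty'

Adding a fields parameter (for example fields=*) breaks the usage down by field.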