The field data cache is used mainly when sorting or faceting on a field. It loads all of the field's values into memory in order to provide fast document-based access to those values. The field data cache can be expensive to build for a field, so it's recommended to have enough memory to allocate it, and to keep it loaded.
The amount of memory used for the field data cache can be controlled using indices.fielddata.cache.size. Note: reloading field data that does not fit into your cache will be expensive and perform poorly.
indices.fielddata.cache.size
    The max size of the field data cache, e.g. 30% of the node heap space, or an absolute value, e.g. 12GB. Defaults to unbounded.
indices.fielddata.cache.expire
    A time based setting that expires field data after a certain time of inactivity. Defaults to -1 (disabled).
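Both settings above are node-level settings. As a sketch, they could be set in elasticsearch.yml (the values here are illustrative, not recommendations):

```
indices.fielddata.cache.size: 30%
indices.fielddata.cache.expire: 5m
```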
Field data circuit breaker
The field data circuit breaker allows Elasticsearch to estimate the amount of memory a field will require to be loaded into memory. It can then prevent the field data from being loaded by raising an exception. By default, the limit is configured to 60% of the maximum JVM heap. It can be configured with the following parameters:
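The breaker's decision can be sketched in a few lines of Python. This is an illustrative model, not Elasticsearch's actual implementation: the heap size, the `check_field_data_load` function, and the exception name are assumptions made for the example; only the 60% limit and the 1.03 overhead factor come from the documentation above.

```python
class FieldDataCircuitBreakerError(Exception):
    """Raised when loading field data would exceed the configured limit."""

HEAP_BYTES = 1024 * 1024 * 1024  # assume a 1 GB JVM heap for the example
LIMIT = 0.60      # indices.fielddata.breaker.limit: 60% of the heap
OVERHEAD = 1.03   # indices.fielddata.breaker.overhead: estimation multiplier

def check_field_data_load(estimated_bytes, already_used_bytes=0):
    """Multiply the raw estimate by the overhead constant and raise
    before loading if the result would push usage past the limit."""
    adjusted = estimated_bytes * OVERHEAD
    if already_used_bytes + adjusted > HEAP_BYTES * LIMIT:
        raise FieldDataCircuitBreakerError(
            "field data would exceed the limit of %d bytes" % int(HEAP_BYTES * LIMIT))
    return adjusted

# A 100 MB field fits under the ~614 MB limit; a 700 MB field
# (721 MB after the 1.03 overhead) trips the breaker.
check_field_data_load(100 * 1024 * 1024)
```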
indices.fielddata.breaker.limit
    Maximum size of estimated field data to allow loading. Defaults to 60% of the maximum JVM heap.
indices.fielddata.breaker.overhead
    A constant that all field data estimations are multiplied by to determine a final estimation. Defaults to 1.03.
indices.fielddata.breaker.overhead can be changed dynamically using the
cluster update settings API.
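As an illustration, a cluster update settings request for this could look like the following (the transient scope and the 1.05 value are only examples):

```
PUT /_cluster/settings
{
  "transient": {
    "indices.fielddata.breaker.overhead": 1.05
  }
}
```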
Monitoring field data
You can monitor memory usage for field data, as well as the field data circuit breaker, using the Nodes Stats API.
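For example, a request along these lines retrieves field data statistics from every node; the `fields` parameter, assumed here to accept a wildcard, breaks the memory usage down per field:

```
GET /_nodes/stats/indices/fielddata?fields=*
```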