Fix common cluster issues

This guide describes how to fix common problems with Elasticsearch clusters.

Circuit breaker errors

Elasticsearch uses circuit breakers to prevent nodes from running out of JVM heap memory. If Elasticsearch estimates an operation would exceed a circuit breaker, it stops the operation and returns an error.

By default, the parent circuit breaker triggers at 95% JVM memory usage. To prevent errors, we recommend taking steps to reduce memory pressure if usage consistently exceeds 85%.
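
To see how close each node's parent breaker is to its limit, you can filter the node stats response down to the parent breaker (a sketch; estimated_size shows current usage and limit_size the configured ceiling, and filter_path only trims the response):

GET _nodes/stats/breaker?filter_path=nodes.*.breakers.parent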

Diagnose circuit breaker errors

Error messages

If a request triggers a circuit breaker, Elasticsearch returns an error.

{
  "error": {
    "type": "circuit_breaking_exception",
    "reason": "[parent] Data too large, data for [<http_request>] would be [123848638/118.1mb], which is larger than the limit of [123273216/117.5mb], real usage: [120182112/114.6mb], new bytes reserved: [3666526/3.4mb]",
    "bytes_wanted": 123848638,
    "bytes_limit": 123273216,
    "durability": "TRANSIENT"
  },
  "status": 429
}

Elasticsearch also writes circuit breaker errors to elasticsearch.log. This is helpful when automated processes, such as allocation, trigger a circuit breaker.

Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [num/numGB], which is larger than the limit of [num/numGB], usages [request=0/0b, fielddata=num/numKB, in_flight_requests=num/numGB, accounting=num/numGB]

Check JVM memory usage

If you’ve enabled Stack Monitoring, you can view JVM memory usage in Kibana. In the main menu, click Stack Monitoring. On the Stack Monitoring Overview page, click Nodes. The JVM Heap column lists the current memory usage for each node.

You can also use the cat nodes API to get the current heap.percent for each node.

GET _cat/nodes?v=true&h=name,node*,heap*

To get the JVM memory usage for each circuit breaker, use the node stats API.

GET _nodes/stats/breaker

Prevent circuit breaker errors

Reduce JVM memory pressure

High JVM memory pressure often causes circuit breaker errors. See High JVM memory pressure.

Avoid using fielddata on text fields

For high-cardinality text fields, fielddata can use a large amount of JVM memory. To avoid this, Elasticsearch disables fielddata on text fields by default. If you’ve enabled fielddata and triggered the fielddata circuit breaker, consider disabling it and using a keyword field instead. See fielddata mapping parameter.
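
For example, a new index can map the field as keyword instead of text, so that aggregations and sorting use doc values rather than fielddata (a sketch with hypothetical index and field names):

PUT my-new-index
{
  "mappings": {
    "properties": {
      "my-field": {
        "type": "keyword"
      }
    }
  }
}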

Clear the fielddata cache

If you’ve triggered the fielddata circuit breaker and can’t disable fielddata, use the clear cache API to clear the fielddata cache. This may disrupt any in-flight searches that use fielddata.

POST _cache/clear?fielddata=true

High JVM memory pressure

High JVM memory usage can degrade cluster performance and trigger circuit breaker errors. To prevent this, we recommend taking steps to reduce memory pressure if a node’s JVM memory usage consistently exceeds 85%.

Diagnose high JVM memory pressure

Check JVM memory pressure

From your deployment menu, click Elasticsearch. Under Instances, each instance displays a JVM memory pressure indicator. When the JVM memory pressure reaches 75%, the indicator turns red.

You can also use the nodes stats API to calculate the current JVM memory pressure for each node.

GET _nodes/stats?filter_path=nodes.*.jvm.mem.pools.old

Use the response to calculate memory pressure as follows:

JVM Memory Pressure = used_in_bytes / max_in_bytes
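
For example, a hypothetical, abridged response like the following corresponds to a memory pressure of 210763776 / 268435456, or roughly 79%:

{
  "nodes": {
    "node_id": {
      "jvm": {
        "mem": {
          "pools": {
            "old": {
              "used_in_bytes": 210763776,
              "max_in_bytes": 268435456
            }
          }
        }
      }
    }
  }
}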

Check garbage collection logs

As memory usage increases, garbage collection becomes more frequent and takes longer. You can track the frequency and length of garbage collection events in elasticsearch.log. For example, the following event states Elasticsearch spent more than 50% (21 seconds) of the last 40 seconds performing garbage collection.

[timestamp_short_interval_from_last][INFO ][o.e.m.j.JvmGcMonitorService] [node_id] [gc][number] overhead, spent [21s] collecting in the last [40s]
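
You can also read cumulative garbage collection counts and times from the nodes stats API, which can help confirm whether old generation collections are becoming frequent (a sketch; filter_path only trims the response):

GET _nodes/stats/jvm?filter_path=nodes.*.jvm.gc.collectors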

Reduce JVM memory pressure

Reduce your shard count

Every shard uses memory. In most cases, a small set of large shards uses fewer resources than many small shards. For tips on reducing your shard count, see Size your shards.
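
As a quick check, the cat allocation API shows how many shards each data node currently holds (a sketch):

GET _cat/allocation?v=true&h=node,shards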

Avoid expensive searches

Expensive searches can use large amounts of memory. To better track expensive searches on your cluster, enable slow logs.
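
For example, you can set per-index search slow log thresholds so that slow queries are written to the slow log (a sketch with a hypothetical index name; tune the thresholds to your own latency expectations):

PUT my-index/_settings
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.fetch.warn": "1s"
}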

Expensive searches may have a large size argument, use aggregations with a large number of buckets, or include expensive queries. To prevent expensive searches, consider the following setting changes:

PUT _settings
{
  "index.max_result_window": 5000
}

PUT _cluster/settings
{
  "persistent": {
    "search.max_buckets": 20000,
    "search.allow_expensive_queries": false
  }
}

Prevent mapping explosions

Defining too many fields or nesting fields too deeply can lead to mapping explosions that use large amounts of memory. To prevent mapping explosions, use the mapping limit settings to limit the number of field mappings.
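
For example, the following sketch caps the number of field mappings and the mapping depth for a hypothetical index; both settings are dynamic and the values shown are illustrative:

PUT my-index/_settings
{
  "index.mapping.total_fields.limit": 1000,
  "index.mapping.depth.limit": 20
}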

Spread out bulk requests

While more efficient than individual requests, large bulk indexing or multi-search requests can still create high JVM memory pressure. If possible, submit smaller requests and allow more time between them.

Upgrade node memory

Heavy indexing and search loads can cause high JVM memory pressure. To better handle heavy workloads, upgrade your nodes to increase their memory capacity.

Red or yellow cluster status

A red or yellow cluster status indicates one or more shards are missing or unallocated. These unassigned shards increase your risk of data loss and can degrade cluster performance.

Diagnose your cluster status

Check your cluster status

Use the cluster health API.

GET _cluster/health?filter_path=status,*_shards

A healthy cluster has a green status and zero unassigned_shards. A yellow status means only replicas are unassigned. A red status means one or more primary shards are unassigned.

View unassigned shards

To view unassigned shards, use the cat shards API.

GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state

Unassigned shards have a state of UNASSIGNED. The prirep value is p for primary shards and r for replicas. The unassigned.reason describes why the shard remains unassigned.

To get a more in-depth explanation of an unassigned shard’s allocation status, use the cluster allocation explanation API. You can often use details in the response to resolve the issue.

GET _cluster/allocation/explain?filter_path=index,node_allocation_decisions.node_name,node_allocation_decisions.deciders.*
{
  "index": "my-index",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}

Fix a red or yellow cluster status

A shard can become unassigned for several reasons. The following tips outline the most common causes and their solutions.

Re-enable shard allocation

You typically disable allocation during a restart or other cluster maintenance. If you forgot to re-enable allocation afterward, Elasticsearch will be unable to assign shards. To re-enable allocation, reset the cluster.routing.allocation.enable cluster setting.

PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.enable" : null
  }
}

Recover lost nodes

Shards often become unassigned when a data node leaves the cluster. This can occur for several reasons, ranging from connectivity issues to hardware failure. After you resolve the issue and recover the node, it will rejoin the cluster. Elasticsearch will then automatically allocate any unassigned shards.

To avoid wasting resources on temporary issues, Elasticsearch delays allocation by one minute by default. If you’ve recovered a node and don’t want to wait for the delay period, you can call the cluster reroute API with no arguments to start the allocation process. The process runs asynchronously in the background.

POST _cluster/reroute
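
If your nodes routinely take longer than a minute to restart, you can instead lengthen the allocation delay so Elasticsearch keeps waiting for the node to return (a sketch; index.unassigned.node_left.delayed_timeout is a dynamic index setting, and _all applies it to every index):

PUT _all/_settings
{
  "index.unassigned.node_left.delayed_timeout": "5m"
}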

Fix allocation settings

Misconfigured allocation settings, such as shard allocation awareness and allocation filtering, can result in an unassigned primary shard.

To review your allocation settings, use the get index settings and get cluster settings APIs.

GET my-index/_settings?flat_settings=true&include_defaults=true

GET _cluster/settings?flat_settings=true&include_defaults=true

You can change the settings using the update index settings and update cluster settings APIs.
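
For example, if a leftover allocation filter is pinning an index to a node that no longer exists, clearing the filter lets Elasticsearch assign the shard again (a sketch with a hypothetical index name; setting the value to null removes the filter):

PUT my-index/_settings
{
  "index.routing.allocation.require._name": null
}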

Allocate or reduce replicas

To protect against hardware failure, Elasticsearch will not assign a replica to the same node as its primary shard. If no other data nodes are available to host the replica, it remains unassigned. To fix this, you can:

  • Add a data node to the same tier to host the replica.
  • Change the index.number_of_replicas index setting to reduce the number of replicas for each primary shard. We recommend keeping at least one replica per primary.

    PUT _settings
    {
      "index.number_of_replicas": 1
    }

Free up or increase disk space

Elasticsearch uses a low disk watermark to ensure data nodes have enough disk space for incoming shards. By default, Elasticsearch does not allocate shards to nodes using more than 85% of disk space.

To check the current disk space of your nodes, use the cat allocation API.

GET _cat/allocation?v=true&h=node,shards,disk.*

If your nodes are running low on disk space, you have a few options:

  • Upgrade your nodes to increase disk space.
  • Delete unneeded indices to free up space. If you use ILM, you can update your lifecycle policy to use searchable snapshots or add a delete phase. If you no longer need to search the data, you can use a snapshot to store it off-cluster.
  • If you no longer write to an index, use the force merge API or ILM’s force merge action to merge its segments into larger ones.

    POST my-index/_forcemerge
  • If an index is read-only, use the shrink index API or ILM’s shrink action to reduce its primary shard count.

    POST my-index/_shrink/my-shrunken-index
  • If your node has a large disk capacity, you can increase the low disk watermark or set it to an explicit byte value.

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.disk.watermark.low": "30gb"
      }
    }

Reduce JVM memory pressure

Shard allocation requires JVM heap memory. High JVM memory pressure can trigger circuit breakers that stop allocation and leave shards unassigned. See High JVM memory pressure.

Recover data for a lost primary shard

If a node containing a primary shard is lost, Elasticsearch can typically replace it using a replica on another node. If you can’t recover the node and replicas don’t exist or are irrecoverable, you’ll need to re-add the missing data from a snapshot or the original data source.

Only use this option if node recovery is no longer possible. This process allocates an empty primary shard. If the node later rejoins the cluster, Elasticsearch will overwrite its primary shard with data from this newer empty shard, resulting in data loss.

Use the cluster reroute API to manually allocate the unassigned primary shard to another data node in the same tier. Set accept_data_loss to true.

POST _cluster/reroute
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "my-index",
        "shard": 0,
        "node": "my-node",
        "accept_data_loss": "true"
      }
    }
  ]
}

If you backed up the missing index data to a snapshot, use the restore snapshot API to restore the individual index. Alternatively, you can index the missing data from the original data source.
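
A minimal restore sketch, assuming a registered snapshot repository named my-repository and a snapshot named my-snapshot (both hypothetical); note that Elasticsearch won't restore over an existing open index of the same name, so you may need to delete the red index first:

POST _snapshot/my-repository/my-snapshot/_restore
{
  "indices": "my-index"
}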