Fix common cluster issues

This guide describes how to fix common errors and problems with Elasticsearch clusters.

Error: disk usage exceeded flood-stage watermark, index has read-only-allow-delete block
This error indicates a data node is critically low on disk space and has reached the flood-stage disk usage watermark.
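To see which nodes are short on disk, one starting point is the cat allocation API; after freeing space (or raising the watermark), the read-only block can be cleared. A minimal sketch, assuming you want to clear the block on all indices (on recent versions the block is also released automatically once usage falls below the high watermark):

  GET _cat/allocation?v=true&h=node,disk.percent,disk.used,disk.avail

  PUT */_settings
  {
    "index.blocks.read_only_allow_delete": null
  }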
Circuit breaker errors
Elasticsearch uses circuit breakers to prevent nodes from running out of JVM heap memory. If Elasticsearch estimates an operation would exceed a circuit breaker, it stops the operation and returns an error.
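To see how close each breaker is to its limit, one option is the node stats API, which reports the configured limit, the current estimate, and the tripped count per breaker. For example:

  GET _nodes/stats/breaker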
High CPU usage
Identify the most common causes of high CPU usage and their solutions.
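As a first diagnostic step, you can sort nodes by CPU usage and capture hot threads on the busiest nodes; for example:

  GET _cat/nodes?v=true&s=cpu:desc&h=name,cpu,load_1m

  GET _nodes/hot_threads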
High JVM memory pressure
High JVM memory usage can degrade cluster performance and trigger circuit breaker errors.
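Per-node heap usage is visible through the cat nodes API, and old-generation pool statistics from node stats give a closer approximation of JVM memory pressure. For example:

  GET _cat/nodes?v=true&h=name,heap.percent,heap.max

  GET _nodes/stats?filter_path=nodes.*.jvm.mem.pools.old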
Red or yellow cluster status
A red or yellow cluster status indicates one or more shards are missing or unallocated. These unassigned shards increase your risk of data loss and can degrade cluster performance.
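One quick check is the cluster health API, filtered down to the status and the shard counts that explain it; for example:

  GET _cluster/health?filter_path=status,unassigned_shards,initializing_shards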
Rejected requests
When Elasticsearch rejects a request, it stops the operation and returns an error with a 429 response code.
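Rejections typically come from full thread pool queues or tripped circuit breakers; the cat thread pool API shows queue depth and rejection counts per pool. For example:

  GET _cat/thread_pool?v=true&h=node_name,name,active,queue,rejected,completed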
Task queue backlog
A backlogged task queue can prevent tasks from completing and put the cluster into an unhealthy state.
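To inspect a backlog, you can list pending cluster-state tasks and currently running tasks; for example:

  GET _cat/pending_tasks?v=true

  GET _tasks?detailed=true&group_by=parents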
Diagnose unassigned shards
There are multiple reasons why shards might get unassigned, ranging from misconfigured allocation settings to lack of disk space.
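The cat shards API lists shards together with a reason code for any that are unassigned, and the allocation explain API reports why a specific shard cannot be allocated. A minimal sketch (the index name and shard number are illustrative):

  GET _cat/shards?v=true&h=index,shard,prirep,state,unassigned.reason&s=state

  GET _cluster/allocation/explain
  {
    "index": "my-index",
    "shard": 0,
    "primary": false
  }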
Troubleshooting an unstable cluster
A cluster in which nodes leave unexpectedly is unstable and can create several issues.
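One way to spot instability is to compare the current node list against your expected topology and confirm which node is elected master; for example:

  GET _cat/nodes?v=true&h=name,master,node.role,uptime

  GET _cat/master?v=true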