Peer recovery syncs data from a primary shard to a new or existing shard copy.
Peer recovery automatically occurs when Elasticsearch:
- Recreates a shard lost during node failure
- Relocates a shard to another node due to a cluster rebalance or changes to the shard allocation settings
You can view a list of in-progress and completed recoveries using the cat recovery API.
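For example, a minimal cat recovery request might look like the following (assuming a cluster reachable at localhost:9200; `v=true` adds column headers and `active_only=true` restricts the output to in-progress recoveries):

```shell
# List recoveries for every index; drop active_only to include completed ones.
curl -s "localhost:9200/_cat/recovery?v=true&active_only=true"
```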
indices.recovery.max_bytes_per_sec
(Dynamic) Limits total inbound and outbound recovery traffic for each node. Applies to both peer recoveries and snapshot recoveries (i.e., restores from a snapshot). Defaults to 40mb unless the node is a dedicated cold or frozen node, in which case the default depends on the total memory available to the node:
| Total memory | Default recovery rate on cold and frozen nodes |
|---|---|
| ≤ 4 GB | 40mb |
| > 4 GB and ≤ 8 GB | 60mb |
| > 8 GB and ≤ 16 GB | 90mb |
| > 16 GB and ≤ 32 GB | 125mb |
| > 32 GB | 250mb |
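The tiering above can be sketched as a small helper. This is purely illustrative (the `default_recovery_rate` function is hypothetical, and the rate values assume the defaults documented for recent Elasticsearch versions); Elasticsearch applies this logic internally on dedicated cold and frozen nodes:

```shell
# Hypothetical helper mirroring the tiered defaults for dedicated
# cold/frozen nodes; input is total node memory in whole GB.
default_recovery_rate() {
  local mem_gb=$1
  if   [ "$mem_gb" -le 4 ];  then echo 40mb
  elif [ "$mem_gb" -le 8 ];  then echo 60mb
  elif [ "$mem_gb" -le 16 ]; then echo 90mb
  elif [ "$mem_gb" -le 32 ]; then echo 125mb
  else                            echo 250mb
  fi
}
```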
This limit applies to each node separately. If multiple nodes in a cluster perform recoveries at the same time, the cluster’s total recovery traffic may exceed this limit.
If this limit is too high, ongoing recoveries may consume an excess of bandwidth and other resources, which can destabilize the cluster.
This is a dynamic setting, which means you can set it in each node's elasticsearch.yml config file and you can update it dynamically using the cluster update settings API. If you set it dynamically then the same limit applies on every node in the cluster. If you do not set it dynamically then you can set a different limit on each node, which is useful if some of your nodes have better bandwidth than others. For example, if you are using Index Lifecycle Management then you may be able to give your hot nodes a higher recovery bandwidth limit than your warm nodes.
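A cluster-wide update might look like the following (assuming a cluster at localhost:9200; the 100mb value is only an example):

```shell
# Raise the recovery traffic limit on every node in the cluster.
curl -s -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"indices.recovery.max_bytes_per_sec": "100mb"}}'
```

Setting the value in elasticsearch.yml instead applies it only to that node, at the cost of requiring a restart to change it.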
Expert peer recovery settings
You can use the following expert settings to manage resources for peer recoveries.
indices.recovery.max_concurrent_file_chunks
(Dynamic, Expert) Number of file chunks sent in parallel for each recovery. Defaults to 2.
You can increase the value of this setting when the recovery of a single shard is not reaching the traffic limit set by
indices.recovery.max_bytes_per_sec, up to a maximum of 8.
indices.recovery.max_concurrent_operations
(Dynamic, Expert) Number of operations sent in parallel for each recovery. Defaults to 1.
Concurrently replaying operations during recovery can be very resource-intensive and may interfere with indexing, search, and other activities in your cluster. Do not increase this setting without carefully verifying that your cluster has the resources available to handle the extra load that will result.
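If you do decide to raise these expert settings, they are updated through the same cluster update settings API (a sketch, assuming a cluster at localhost:9200; the values shown are examples, not recommendations):

```shell
# Increase recovery parallelism cluster-wide; revert by setting the values to null.
curl -s -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{
    "persistent": {
      "indices.recovery.max_concurrent_file_chunks": 4,
      "indices.recovery.max_concurrent_operations": 2
    }
  }'
```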