Local gateway settings

The local gateway stores the cluster state and shard data across full cluster restarts.

The following static settings, which must be set on every master-eligible node, control how long a freshly elected master should wait before it tries to recover the cluster state and the cluster's data.

These settings only take effect on a full cluster restart.

gateway.expected_nodes
(Static) Deprecated in 7.7.0. This setting will be removed in 8.0. Use gateway.expected_data_nodes instead. Number of data or master nodes expected in the cluster. Recovery of local shards begins when the expected number of nodes join the cluster. Defaults to 0.
gateway.expected_master_nodes
(Static) Deprecated in 7.7.0. This setting will be removed in 8.0. Use gateway.expected_data_nodes instead. Number of master nodes expected in the cluster. Recovery of local shards begins when the expected number of master nodes join the cluster. Defaults to 0.
gateway.expected_data_nodes
(Static) Number of data nodes expected in the cluster. Recovery of local shards begins when the expected number of data nodes join the cluster. Defaults to 0.
gateway.recover_after_time
(Static) If the expected number of nodes is not achieved, the recovery process waits for the configured amount of time before trying to recover. Defaults to 5m if one of the expected_nodes settings is configured.

Once the recover_after_time duration has timed out, recovery will start as long as the following conditions are met:

gateway.recover_after_nodes
(Static) Deprecated in 7.7.0. This setting will be removed in 8.0. Use gateway.recover_after_data_nodes instead. Recover as long as this many data or master nodes have joined the cluster.
gateway.recover_after_master_nodes
(Static) Deprecated in 7.7.0. This setting will be removed in 8.0. Use gateway.recover_after_data_nodes instead. Recover as long as this many master nodes have joined the cluster.
gateway.recover_after_data_nodes
(Static) Recover as long as this many data nodes have joined the cluster.
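
For example, a cluster with three data nodes might combine these settings in elasticsearch.yml as in the following sketch. The setting names are those documented above; the values are illustrative, not recommendations:

    gateway.expected_data_nodes: 3        # start recovery as soon as 3 data nodes have joined
    gateway.recover_after_time: 5m        # otherwise, wait up to 5 minutes...
    gateway.recover_after_data_nodes: 2   # ...then recover once at least 2 data nodes have joined

With this configuration, recovery of local shards begins immediately once three data nodes have joined the cluster, or after five minutes provided at least two data nodes have joined.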

Dangling indices

When a node joins the cluster, if it finds any shards stored in its local data directory that do not already exist in the cluster, it will consider those shards to belong to a "dangling" index. Importing dangling indices into the cluster using gateway.auto_import_dangling_indices is not safe. Instead, use the Dangling indices API. Neither mechanism provides any guarantee that the imported data truly represents the latest state of the data when the index was still part of the cluster.
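
For example, the dangling indices API lets you first list any dangling indices and then import one by its UUID. This is a minimal sketch: the <index-uuid> placeholder stands for a UUID returned by the list request, and accept_data_loss=true must be supplied to acknowledge that the imported data may be stale:

    GET /_dangling

    POST /_dangling/<index-uuid>?accept_data_loss=true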

gateway.auto_import_dangling_indices
Deprecated in 7.9.0. This setting will be removed in 8.0. You should use the dedicated dangling indices API instead. Whether to automatically import dangling indices into the cluster state, provided no indices already exist with the same name. Defaults to false.

The auto-import functionality was intended as a best-effort way to help users who lose all master nodes. For example, if a new master node is started that is unaware of the other indices in the cluster, adding the old nodes will cause the old indices to be imported instead of deleted. However, there are several issues with automatic importing, and its use is strongly discouraged in favour of the dedicated dangling indices API.
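
If the data in a dangling index is not needed, the same API can be used to delete it rather than import it. Again a sketch, with a placeholder UUID:

    DELETE /_dangling/<index-uuid>?accept_data_loss=true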

Losing all master nodes is a situation that should be avoided at all costs, as it puts your cluster’s metadata and data at risk.