Kubernetes LeaderElection Provider

Provides the option to enable leader election between a set of Elastic Agents running on Kubernetes. Only one Elastic Agent at a time holds the leader lock, and configurations can be enabled on the condition that the Elastic Agent holds the leadership. This is useful when only one Elastic Agent in the set should collect cluster-wide metrics for the Kubernetes cluster, such as from the kube-state-metrics endpoint.

The provider needs a kubeconfig file to establish a connection to the Kubernetes API. It can automatically reach the API if it runs in an in-cluster environment (Elastic Agent running as a Pod).

providers.kubernetes_leaderelection:
  #kube_config: /Users/elastic-agent/.kube/config
  #leader_lease: agent-k8s-leader-lock

kube_config
    (Optional) Use the given config file as configuration for the Kubernetes client. If kube_config is not set, the KUBECONFIG environment variable is checked, falling back to in-cluster configuration if it is not present.
leader_lease
    (Optional) Specify the name of the leader lease. This is set to elastic-agent-cluster-leader by default.
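For example, when the Elastic Agent runs outside the cluster, the provider can be pointed at an explicit kubeconfig and given a custom lease name. The path and lease name below are illustrative placeholders, not defaults:

```yaml
providers.kubernetes_leaderelection:
  # Path to a kubeconfig file (placeholder; substitute your own path)
  kube_config: /home/elastic-agent/.kube/config
  # Custom lease name (placeholder; defaults to elastic-agent-cluster-leader)
  leader_lease: my-agent-leader-lock
```

All Elastic Agents that should compete for the same leadership must use the same lease name; agents configured with different lease names elect leaders independently.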

The available key is:

Key                                 Type    Description
kubernetes_leaderelection.leader    bool    The value of the leadership flag. This is set to true when the Elastic Agent is the current leader, and is set to false otherwise.

Enabling configurations only when on leadership

Use conditions based on the kubernetes_leaderelection.leader key to leverage the leaderelection provider and enable specific inputs only when the Elastic Agent holds the leadership lock. The example below enables the state_container metricset only when the leadership lock is acquired:

- data_stream:
    dataset: kubernetes.state_container
    type: metrics
  metricsets:
    - state_container
  add_metadata: true
  hosts:
    - 'kube-state-metrics:8080'
  period: 10s
  condition: ${kubernetes_leaderelection.leader} == true
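The same condition can gate any other cluster-wide input so that only the current leader collects it. As a sketch following the same pattern (assuming the kubernetes.state_pod dataset and the same kube-state-metrics endpoint):

```yaml
- data_stream:
    dataset: kubernetes.state_pod
    type: metrics
  metricsets:
    - state_pod
  add_metadata: true
  hosts:
    - 'kube-state-metrics:8080'
  period: 10s
  # Collected only while this agent holds the leader lock
  condition: ${kubernetes_leaderelection.leader} == true
```

If the current leader goes down, another Elastic Agent acquires the lock and any inputs guarded by this condition start running there instead.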