Kubernetes LeaderElection Provider

Provides the option to enable leader election among a set of Elastic Agents running on Kubernetes. Only one Elastic Agent at a time holds the leader lock, and configurations can be enabled on the condition that an Elastic Agent holds the leadership. This is useful when a single Elastic Agent in the set should collect cluster-wide metrics for the Kubernetes cluster, such as from the kube-state-metrics endpoint.

The provider needs a kubeconfig file to establish a connection to the Kubernetes API. It can reach the API automatically when running in-cluster (that is, when the Elastic Agent runs as a Pod).

```yaml
providers.kubernetes_leaderelection:
  #enabled: true
  #kube_config: /Users/elastic-agent/.kube/config
  #kube_client_options:
  #  qps: 5
  #  burst: 10
  #leader_lease: agent-k8s-leader-lock
  #leader_retryperiod: 2
  #leader_leaseduration: 15
  #leader_renewdeadline: 10
```
- enabled: (Optional) Defaults to true. To explicitly disable the LeaderElection provider, set enabled: false.
- kube_config: (Optional) Use the given config file as the configuration for the Kubernetes client. If kube_config is not set, the KUBECONFIG environment variable is checked, and the provider falls back to in-cluster configuration if the variable is not present.
- kube_client_options: (Optional) Additional options for the Kubernetes client. Supported options are qps and burst. If not set, the Kubernetes client's default QPS and burst settings are used.
- leader_lease: (Optional) The name of the leader lease. Defaults to elastic-agent-cluster-leader.
- leader_retryperiod: (Optional) Default value 2 (in seconds). How often Elastic Agents try to acquire the leader role.
- leader_leaseduration: (Optional) Default value 15 (in seconds). How long the leader Elastic Agent holds the leader state.
- leader_renewdeadline: (Optional) Default value 10 (in seconds). How long the leader retries renewing its leadership before giving up.
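For instance, a policy fragment that enables the provider explicitly and tunes the client rate limits could look like the following. The qps and burst values are illustrative only, and this assumes the client options live under a kube_client_options key, matching the indented qps/burst lines in the sample above:

```yaml
providers.kubernetes_leaderelection:
  enabled: true
  kube_config: /Users/elastic-agent/.kube/config
  kube_client_options:
    qps: 20      # illustrative: sustained requests per second to the API server
    burst: 40    # illustrative: short bursts allowed above the sustained rate
  leader_lease: agent-k8s-leader-lock
```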

The available key is:

Key: kubernetes_leaderelection.leader
Type: bool
Description: The value of the leadership flag. This is set to true when the Elastic Agent is the current leader, and is set to false otherwise.

Understanding leader timings

As described above, the LeaderElection configuration offers the following timing parameters: lease duration (leader_leaseduration), renew deadline (leader_renewdeadline), and retry period (leader_retryperiod). Based on this configuration, each agent issues Kubernetes API requests to check the status of the lease.

The number of leader-election calls to the Kubernetes control plane API is proportional to the number of Elastic Agents installed: every Elastic Agent issues a request once per leader_retryperiod. Setting leader_retryperiod to a value greater than the default (2 seconds) reduces the number of requests made to the Kubernetes control plane API, but also increases the window during which metrics collection from the leader Elastic Agent might be lost.
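As a rough illustration, 30 agents polling at the default 2-second retry period generate about 15 lease requests per second; at a 10-second retry period that drops to about 3 per second. A sketch of such a relaxed configuration follows. The values are illustrative, not recommendations, and the other two timings are raised alongside the retry period so that the library's timing checks (described below) still pass:

```yaml
providers.kubernetes_leaderelection:
  leader_retryperiod: 10      # fewer lease checks: each agent polls every 10 s
  leader_renewdeadline: 20    # kept greater than retryperiod * 1.2 (= 12)
  leader_leaseduration: 30    # kept greater than the renew deadline
```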

The leader-election library applies specific checks to the timing parameters, and if these checks fail the Elastic Agent exits with a panic.

In general:

- leader_leaseduration must be greater than leader_renewdeadline.
- leader_renewdeadline must be greater than leader_retryperiod * JitterFactor.

The constant JitterFactor=1.2 is defined in the leaderelection library.
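The default values satisfy both checks, which can be verified by writing them out explicitly:

```yaml
providers.kubernetes_leaderelection:
  leader_retryperiod: 2       # 2 * JitterFactor (1.2) = 2.4
  leader_renewdeadline: 10    # 10 > 2.4, satisfies the second check
  leader_leaseduration: 15    # 15 > 10, satisfies the first check
```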

Enabling configurations only when on leadership

Use conditions based on the kubernetes_leaderelection.leader key to leverage the LeaderElection provider and enable specific inputs only when the Elastic Agent holds the leadership lock. The example below enables the state_container metricset only while the leadership lock is held:

```yaml
- data_stream:
    dataset: kubernetes.state_container
    type: metrics
  metricsets:
    - state_container
  add_metadata: true
  hosts:
    - 'kube-state-metrics:8080'
  period: 10s
  condition: ${kubernetes_leaderelection.leader} == true
```