Collecting Elasticsearch monitoring data with Metricbeat

In 6.5 and later, you can use Metricbeat to collect data about Elasticsearch and ship it to the monitoring cluster, rather than routing it through exporters as described in Legacy collection methods.

Want to use Elastic Agent instead? Refer to Collecting monitoring data with Elastic Agent.

Figure: Example monitoring architecture
  1. Install Metricbeat. Ideally, install a single Metricbeat instance configured with scope: cluster and configure hosts to point to an endpoint (for example, a load-balancing proxy) that directs requests to the master-ineligible nodes in the cluster. If this is not possible, install one Metricbeat instance for each Elasticsearch node in the production cluster and use the default scope: node.

    When Metricbeat is monitoring Elasticsearch with scope: node, you must install a Metricbeat instance for each Elasticsearch node. If you don't, some metrics will not be collected. Metricbeat with scope: node collects most of the metrics from the elected master of the cluster, so you must scale up all your master-eligible nodes to account for this extra load; do not use this mode if you have dedicated master nodes.
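
    When you configure the Elasticsearch module later in this procedure, a cluster-scoped setup might look like the following sketch. The endpoint name es-proxy:9200 is a placeholder for your own load-balancing proxy or coordinating endpoint; it is not a value defined elsewhere in this guide:

      - module: elasticsearch
        xpack.enabled: true
        period: 10s
        scope: cluster
        # Placeholder endpoint that fronts the master-ineligible nodes
        hosts: ["http://es-proxy:9200"]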
  2. Enable the Elasticsearch module in Metricbeat on each Elasticsearch node.

    For example, to enable the default configuration for the Elastic Stack monitoring features in the modules.d directory, run the following command:

    metricbeat modules enable elasticsearch-xpack

    For more information, refer to Elasticsearch module.
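
    If you want to confirm the result, the modules list subcommand prints which modules are currently enabled and disabled:

    metricbeat modules list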

  3. Configure the Elasticsearch module in Metricbeat on each Elasticsearch node.

    The modules.d/elasticsearch-xpack.yml file contains the following settings:

      - module: elasticsearch
        xpack.enabled: true
        period: 10s
        hosts: ["http://localhost:9200"] 
        #scope: node 
        #username: "user"
        #password: "secret"
        #ssl.enabled: true
        #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
        #ssl.certificate: "/etc/pki/client/cert.pem"
        #ssl.key: "/etc/pki/client/cert.key"
        #ssl.verification_mode: "full"

    By default, the module collects Elasticsearch monitoring metrics from http://localhost:9200. If that host and port number are not correct, you must update the hosts setting. If you configured Elasticsearch to use encrypted communications, you must access it via HTTPS. For example, use a hosts setting like https://localhost:9200.

    By default, scope is set to node and each entry in the hosts list indicates a distinct node in an Elasticsearch cluster. If you set scope to cluster, then each entry in the hosts list indicates a single endpoint for a distinct Elasticsearch cluster (for example, a load-balancing proxy fronting the cluster). You should use scope: cluster if the cluster has dedicated master nodes, and configure the endpoint in the hosts list so that it does not direct requests to the dedicated master nodes.

    If Elasticsearch security features are enabled, you must also provide a user ID and password so that Metricbeat can collect metrics successfully:

    1. Create a user on the production cluster that has the remote_monitoring_collector built-in role, as in the sketch after this list. Alternatively, use the remote_monitoring_user built-in user.
    2. Add the username and password settings to the Elasticsearch module configuration file.
    3. If TLS is enabled on the HTTP layer of your Elasticsearch cluster, you must either use https as the URL scheme in the hosts setting or add the ssl.enabled: true setting. Depending on the TLS configuration of your Elasticsearch cluster, you might also need to specify additional ssl.* settings.
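
    As a sketch of the first two sub-steps, you could create a dedicated monitoring user with the Elasticsearch create or update users API (shown here in Kibana Dev Tools Console syntax) and then reference it in the module configuration. The username metricbeat_collector and the password are placeholders, not values defined by this guide:

      POST /_security/user/metricbeat_collector
      {
        "password": "changeme",
        "roles": [ "remote_monitoring_collector" ]
      }

    In modules.d/elasticsearch-xpack.yml you would then uncomment username and password and set them to these values, or reference a secret stored in the Metricbeat keystore instead of a plain-text password.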
  4. Optional: Disable the system module in Metricbeat.

    By default, the system module is enabled. The information it collects, however, is not shown on the Monitoring page in Kibana. Unless you want to use that information for other purposes, run the following command:

    metricbeat modules disable system
  5. Identify where to send the monitoring data.

    In production environments, we strongly recommend using a separate cluster (referred to as the monitoring cluster) to store the data. Using a separate monitoring cluster prevents production cluster outages from impacting your ability to access your monitoring data. It also prevents monitoring activities from impacting the performance of your production cluster.

    For example, specify the Elasticsearch output information in the Metricbeat configuration file (metricbeat.yml):

    output.elasticsearch:
      # Array of hosts to connect to.
      hosts: ["http://es-mon-1:9200", "http://es-mon-2:9200"] 
    
      # Optional protocol and basic auth credentials.
      #protocol: "https"
      #username: "elastic"
      #password: "changeme"

    In this example, the data is stored on a monitoring cluster with nodes es-mon-1 and es-mon-2.

    If you configured the monitoring cluster to use encrypted communications, you must access it via HTTPS. For example, use a hosts setting like https://es-mon-1:9200.

    The Elasticsearch monitoring features use ingest pipelines; the cluster that stores the monitoring data must therefore have at least one ingest node.

    If Elasticsearch security features are enabled on the monitoring cluster, you must provide a valid user ID and password so that Metricbeat can send metrics successfully:

    1. Create a user on the monitoring cluster that has the remote_monitoring_agent built-in role. Alternatively, use the remote_monitoring_user built-in user.
    2. Add the username and password settings to the Elasticsearch output information in the Metricbeat configuration file, as shown in the sketch after this step.

    For more information about these configuration options, see Configure the Elasticsearch output.
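
    Putting these options together, a secured output section might look like the following sketch. The hostnames, username, and password are placeholders for your own monitoring cluster and credentials:

      output.elasticsearch:
        # Monitoring cluster reached over HTTPS
        hosts: ["https://es-mon-1:9200", "https://es-mon-2:9200"]
        username: "remote_monitoring_user"
        password: "changeme"
        # Uncomment if the monitoring cluster uses a certificate from a private CA
        #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]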

  6. Start Metricbeat on each node.
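
    How you start Metricbeat depends on how it was installed. On DEB or RPM installations that use systemd, for example, you can enable and start the service like this; for archive installations you would run the metricbeat binary directly instead:

    sudo systemctl enable metricbeat
    sudo systemctl start metricbeat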
  7. View the monitoring data in Kibana.