Collecting monitoring data using legacy collectors

Metricbeat is the recommended method for collecting and shipping monitoring data to a monitoring cluster.

If you have previously configured legacy collection methods, you should migrate to using Metricbeat collection methods. Use either Metricbeat collection or legacy collection methods; do not use both.

Learn more about Collecting monitoring data with Metricbeat.

This method for collecting metrics about Elasticsearch involves sending the metrics to the monitoring cluster by using exporters. For the recommended method, see Collecting monitoring data with Metricbeat.

Advanced monitoring settings enable you to control how frequently data is collected, configure timeouts, and set the retention period for locally-stored monitoring indices. You can also adjust how monitoring data is displayed.

To learn about monitoring in general, see Monitor a cluster.
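
For example, one such advanced setting is xpack.monitoring.history.duration, which controls how long locally stored monitoring indices are retained. A minimal sketch, assuming you apply it as a dynamic cluster setting (it can also be set in elasticsearch.yml; the 3d value is illustrative):

PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.history.duration": "3d"
  }
}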

  1. Configure your cluster to collect monitoring data:

    1. Verify that the xpack.monitoring.elasticsearch.collection.enabled setting is true, which is its default value, on each node in the cluster.

      You can specify this setting either in the elasticsearch.yml file on each node or across the cluster as a dynamic cluster setting. If Elasticsearch security features are enabled, you must have monitor cluster privileges to view the cluster settings and manage cluster privileges to change them.

      For more information, see Monitoring settings and Cluster update settings.
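
      For example, the following requests are one way to check the current value and, if it has been overridden, restore the default; the filter_path parameter is optional and only narrows the response:

      GET _cluster/settings?include_defaults=true&filter_path=*.xpack.monitoring.elasticsearch.collection.enabled

      PUT _cluster/settings
      {
        "persistent": {
          "xpack.monitoring.elasticsearch.collection.enabled": true
        }
      }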

    2. Set the xpack.monitoring.collection.enabled setting to true on each node in the cluster. By default, it is disabled (false).

      You can specify this setting either in the elasticsearch.yml file on each node or across the cluster as a dynamic cluster setting. If Elasticsearch security features are enabled, you must have monitor cluster privileges to view the cluster settings and manage cluster privileges to change them.

      For example, use the following APIs to review and change this setting:

      GET _cluster/settings
      
      PUT _cluster/settings
      {
        "persistent": {
          "xpack.monitoring.collection.enabled": true
        }
      }

      Alternatively, you can enable this setting in Kibana. In the side navigation, click Monitoring. If data collection is disabled, you are prompted to turn it on.

      For more information, see Monitoring settings and Cluster update settings.

    3. Optional: Specify which indices you want to monitor.

      By default, the monitoring agent collects data from all Elasticsearch indices. To collect data from particular indices, configure the xpack.monitoring.collection.indices setting. You can specify multiple indices as a comma-separated list or use an index pattern to match multiple indices. For example:

      xpack.monitoring.collection.indices: logstash-*, index1, test2

      You can prepend - to explicitly exclude index names or patterns. For example, to include all indices that start with test except test3, you could specify test*,-test3. To include system indices such as .security and .kibana, add .* to the list of included names, for example: .*,test*,-test3.
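
      If you prefer to manage the list as a dynamic cluster setting rather than in elasticsearch.yml, a sketch along these lines (reusing the example patterns above) does the same thing:

      PUT _cluster/settings
      {
        "persistent": {
          "xpack.monitoring.collection.indices": "logstash-*,index1,test2"
        }
      }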

    4. Optional: Specify how often to collect monitoring data. The default value for the xpack.monitoring.collection.interval setting is 10 seconds. See Monitoring settings.
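
      For example, to collect every 30 seconds instead, you could set the following in elasticsearch.yml (a sketch; the value is illustrative):

      xpack.monitoring.collection.interval: 30s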
  2. Identify where to store monitoring data.

    By default, the data is stored on the same cluster by using a local exporter. Alternatively, you can use an http exporter to send data to a separate monitoring cluster.

    The Elasticsearch monitoring features use ingest pipelines; therefore, the cluster that stores the monitoring data must have at least one ingest node.

    For more information about typical monitoring architectures, see How it works.
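
    For example, keeping the data on the same cluster corresponds to a local exporter, which can be declared explicitly in elasticsearch.yml; a minimal sketch (the exporter name my_local is arbitrary):

    xpack.monitoring.exporters:
      my_local:
        type: local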

  3. If you choose to use an http exporter:

    1. On the cluster that you want to monitor (often called the production cluster), configure each node to send metrics to your monitoring cluster. Configure an HTTP exporter in the xpack.monitoring.exporters settings in the elasticsearch.yml file. For example:

      xpack.monitoring.exporters:
        id1:
          type: http
          host: ["http://es-mon-1:9200", "http://es-mon-2:9200"]
    2. If the Elastic security features are enabled on the monitoring cluster, you must provide appropriate credentials when data is shipped to the monitoring cluster:

      1. Create a user on the monitoring cluster that has the remote_monitoring_agent built-in role. Alternatively, use the remote_monitoring_user built-in user.
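
        For example, a sketch that creates such a user with the create user API on the monitoring cluster (the username remote_monitor and its password are placeholders):

        POST /_security/user/remote_monitor
        {
          "password": "YOUR_PASSWORD",
          "roles": [ "remote_monitoring_agent" ]
        }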
      2. Add the user ID and password settings to the HTTP exporter settings in the elasticsearch.yml file on each node.

        For example:

        xpack.monitoring.exporters:
          id1:
            type: http
            host: ["http://es-mon-1:9200", "http://es-mon-2:9200"]
            auth.username: remote_monitoring_user
            auth.password: YOUR_PASSWORD
    3. If you configured the monitoring cluster to use encrypted communications, you must use the HTTPS protocol in the host setting. You must also specify the trusted CA certificates that will be used to verify the identity of the nodes in the monitoring cluster.

      • To add a CA certificate to an Elasticsearch node’s trusted certificates, you can specify the location of the PEM-encoded certificate with the certificate_authorities setting. For example:

        xpack.monitoring.exporters:
          id1:
            type: http
            host: ["https://es-mon1:9200", "https://es-mon2:9200"]
            auth:
              username: remote_monitoring_user
              password: YOUR_PASSWORD
            ssl:
              certificate_authorities: [ "/path/to/ca.crt" ]
      • Alternatively, you can configure trusted certificates using a truststore (a Java Keystore file that contains the certificates). For example:

        xpack.monitoring.exporters:
          id1:
            type: http
            host: ["https://es-mon1:9200", "https://es-mon2:9200"]
            auth:
              username: remote_monitoring_user
              password: YOUR_PASSWORD
            ssl:
              truststore.path: /path/to/file
              truststore.password: password
  4. Configure your cluster to route monitoring data from sources such as Kibana, Beats, and Logstash to the monitoring cluster. For information about configuring each product to collect and send monitoring data, see Monitor a cluster.
  5. If you updated settings in the elasticsearch.yml files on your production cluster, restart Elasticsearch. See Stopping Elasticsearch and Starting Elasticsearch.

    You may want to temporarily disable shard allocation before you restart your nodes to avoid unnecessary shard reallocation during the restart.
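
    For example, a common sketch is to restrict allocation to primaries before the restart and clear the setting afterwards (adapt this to your own restart procedure):

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.enable": "primaries"
      }
    }

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.enable": null
      }
    }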

  6. Optional: Configure the indices that store the monitoring data.
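
    For example, one way to adjust these indices is a higher-order index template that matches .monitoring-*. The following sketch assumes a legacy index template and uses an illustrative template name and replica count:

    PUT /_template/custom_monitoring
    {
      "index_patterns": [".monitoring-*"],
      "order": 1,
      "settings": {
        "number_of_replicas": 1
      }
    }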
  7. View the monitoring data in Kibana.