Configuring Marvel

You control how Marvel collects data from an Elasticsearch cluster by configuring Marvel-specific settings in the elasticsearch.yml configuration file on each node. You can add a custom index template to change the settings for the indices Marvel creates to store the data collected from a cluster.

Controlling Marvel Data Collection

You can set the following marvel.agent options in a node’s elasticsearch.yml file to control how Marvel data is collected from the node.

marvel.agent.cluster.state.timeout
Sets the timeout for collecting the cluster state. Defaults to 10m.

marvel.agent.cluster.stats.timeout
Sets the timeout for collecting the cluster statistics. Defaults to 10m.

marvel.agent.indices
Controls which indices Marvel collects data from. Defaults to all indices. Specify the index names as a comma-separated list, for example test1,test2,test3. Names can include wildcards, for example test*. You can explicitly include or exclude indices by prepending + to include the index, or - to exclude the index. For example, to include all indices that start with test except test3, you could specify +test*,-test3.

You can update this setting through the Cluster Update Settings API.
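For example, assuming the index-filtering setting is named marvel.agent.indices (as in the Marvel 2.x documentation), you could restrict collection to the hypothetical test* indices without restarting any nodes by submitting a transient cluster setting:

```
PUT /_cluster/settings
{
  "transient": {
    "marvel.agent.indices": "+test*,-test3"
  }
}
```

A transient setting is lost when the cluster restarts; use "persistent" instead if the filter should survive a full cluster restart.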

marvel.agent.index.stats.timeout
Sets the timeout for collecting index statistics. Defaults to 10m.

marvel.agent.indices.stats.timeout
Sets the timeout for collecting total indices statistics. Defaults to 10m.

marvel.agent.exporters
Configures where the agent stores monitoring data. By default, the agent uses a local exporter that indexes monitoring data on the cluster where it is installed. Use an HTTP exporter to send data to a separate monitoring cluster. For more information, see Setting up a Separate Monitoring Cluster.

marvel.agent.exporters:

  id1:                                     # default local exporter
    type: local

  id2:                                     # example of an http exporter
    type: http                             # exporter type, local or http
    host: [ "http://domain:port", ... ]    # host(s) to send data to over http or https

    headers:                               # optional headers that should be passed with every HTTP request
      X-My-Proxy-Header: <string>          # arbitrary key/value pair (keys and values are used as-is)
      X-My-Other-Thing: [ <string>, ... ]  # arbitrary key/value pairs (keys and values are used as-is)

    auth:
      username: <string>            # basic auth username
      password: <string>            # basic auth password

    connection:
      timeout: <time_value>         # http connection timeout (default: 6s)
      read_timeout: <time_value>    # http response timeout (default: connection.timeout * 10)
      keep_alive: true | false      # use persistent connections (default: true)

    ssl:
      hostname_verification: true | false  # check host certificate (default: true)
      protocol: <string>                   # security protocol (default: TLSv1.2)
      truststore.path: /path/to/file       # absolute path to the truststore
      truststore.password: <string>        # password for the truststore
      truststore.algorithm: <string>       # format for the truststore (default: SunX509)

    index:
      name:
        time_format: <string>              # time format suffix for marvel indices (default: "YYYY.MM.dd")

Optional headers can be useful with an http exporter, for example to pass content through proxies. Certain headers are blacklisted because they can prevent proper execution; that list currently includes Content-Type and Content-Length. You can also use a header to manually supply a Base64-encoded Authorization header instead of hardcoding auth.username and auth.password.

Any header that Marvel creates will take precedence over headers supplied as a setting.
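As a sketch, an exporter that supplies its own Base64-encoded Authorization header might be declared as follows; the exporter id, host, and credentials ("user:password" encoded) are placeholders:

```yaml
marvel.agent.exporters:
  id2:
    type: http
    host: [ "http://monitoringhost:9200" ]
    headers:
      # Base64 of "user:password"; supplied instead of auth.username/auth.password
      Authorization: "Basic dXNlcjpwYXNzd29yZA=="
```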

marvel.agent.index.recovery.active_only
Controls whether or not all recoveries are collected. Set to true to collect only active recoveries. Defaults to false.

marvel.agent.index.recovery.timeout
Sets the timeout for collecting the recovery information. Defaults to 10m.

marvel.agent.interval
Controls how often data samples are collected. Defaults to 10s. If you modify the collection interval, set the marvel.min_interval_seconds option in kibana.yml to the same value. Set to -1 to temporarily disable data collection. You can update this setting through the Cluster Update Settings API.

marvel.history.duration
Sets the retention duration beyond which the indices created by Marvel are automatically deleted. Defaults to 7d. Set to -1 to disable automatic deletion of Marvel indices.
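As an illustration, a node's elasticsearch.yml that slows sampling, filters the monitored indices, and shortens retention could combine these options (marvel.agent.interval is referenced elsewhere in this document; the other setting names and all values here are examples following the Marvel 2.x documentation):

```yaml
# elasticsearch.yml (example values)
marvel.agent.interval: 30s           # sample every 30 seconds instead of every 10
marvel.agent.indices: +test*,-test3  # collect from test* indices except test3
marvel.history.duration: 3d          # delete marvel indices after three days
```

Remember to set marvel.min_interval_seconds: 30 in kibana.yml to match the modified collection interval.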

Configuring Marvel’s Indices

Marvel uses an index template to configure the indices used to store the data collected from a cluster.

You can retrieve the default template with:

GET /_template/.marvel-es

By default, the template configures one shard and one replica for the Marvel indices. To override the default settings, add your own template:

  1. Set the template pattern to .marvel-es-*.
  2. Set the template order to 1. This ensures your template is applied after the default template, which has an order of 0.
  3. Specify the number_of_shards and/or number_of_replicas in the settings section.

For example, the following template increases the number of shards to five and the number of replicas to two.

PUT /_template/custom_marvel
{
    "template": ".marvel-es-*",
    "order": 1,
    "settings": {
        "number_of_shards": 5,
        "number_of_replicas": 2
    }
}
Only set the number_of_shards and number_of_replicas in the settings section. Overriding other Marvel template settings could cause your Marvel dashboards to stop working correctly.

Configuring Marvel in Kibana

You can set the following marvel options in the Kibana configuration file (kibana.yml). In most cases, however, you can rely on the defaults. For more information about modifying kibana.yml, see Setting Kibana Server Properties in the Kibana User Guide.

marvel.max_bucket_size
The number of term buckets to return out of the overall terms list when performing terms aggregations to retrieve index and node metrics. For more information about the size parameter, see Terms Aggregation in the Elasticsearch Reference. Defaults to 10000.

marvel.min_interval_seconds
The minimum number of seconds that a time bucket in a chart can represent. Defaults to 10. If you modify the marvel.agent.interval in elasticsearch.yml, set this option to the same value.

marvel.node_resolver
The node resolver controls how nodes are considered unique. This can be set to either transport_address or name. transport_address determines uniqueness based on the node’s published hostname/IP and port. name determines uniqueness based on the node’s node.name setting. Defaults to transport_address.

If you explicitly set your Elasticsearch node names via node.name, you should set this option to name. This is particularly helpful for cloud deployments of Elasticsearch where IPs are not always static.

marvel.report_stats
Whether or not to send cluster statistics to Elastic. Reporting your cluster statistics helps us improve your user experience. Your data is never shared with anyone. Set to false to disable statistics reporting from any browser connected to the Kibana instance. You can also opt out on a per-browser basis through the Marvel user interface. Defaults to true.
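Putting these together, a minimal kibana.yml override might look like the following sketch (marvel.min_interval_seconds is referenced elsewhere in this document; marvel.report_stats follows the Marvel 2.x documentation; the values are examples):

```yaml
# kibana.yml (example values)
marvel.min_interval_seconds: 30  # match a marvel.agent.interval of 30s in elasticsearch.yml
marvel.report_stats: false       # opt out of sending cluster statistics to Elastic
```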

Configuring a Tribe Node to Work with Marvel

If you connect to a cluster through a tribe node, to monitor the cluster you need to install the Marvel agent on the tribe node as well as the nodes in the cluster. If the cluster is protected by Shield, you also need to install and configure Shield on the tribe node. For more information, see Installing Shield on Tribe Nodes.

To exclude the tribe node from the monitoring data, set marvel.enabled: false in the tribe node’s elasticsearch.yml file:

    marvel.enabled: false
    tribe:
      t1:
        cluster.name: cluster1
        discovery.zen.ping.unicast.hosts: [ "cluster1-node1:9300", "cluster1-node2:9300" ]

With this configuration, the tribe node is included in the node count displayed in the Marvel UI, but is not included in the node list because it does not export any data to the monitoring cluster.

To include the tribe node in the monitoring data, enable Marvel data collection at the tribe level:

    marvel.enabled: false
    tribe:
      t1:
        cluster.name: cluster1
        discovery.zen.ping.unicast.hosts: [ "cluster1-node1:9300", "cluster1-node2:9300" ]
        marvel:
          enabled: true
          agent.exporters:
            id1:
              type: http
              host: [ "monitoringhost:9200" ]

Setting enabled: true at the tribe level enables data collection from the tribe node, and the http exporter sends the collected data to the monitoring cluster.

When you enable data collection from the tribe node, it is included in both the node count and node list. Note that tribe nodes only support the http exporter—data from a tribe node must be sent to an external monitoring cluster.