Configure the Elasticsearch output

The Elasticsearch output sends events directly to Elasticsearch by using the Elasticsearch HTTP API.

Compatibility: This output works with all compatible versions of Elasticsearch. See the Elastic Support Matrix.

This example configures an Elasticsearch output called default in the elastic-agent.yml file:

    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    username: elastic
    password: changeme

This example is similar to the previous one, except that it uses the recommended token-based (API key) authentication:

    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    api_key: "my_api_key"

Token-based authentication is required in an Elastic Cloud serverless environment.

Elasticsearch output configuration settings

The elasticsearch output type supports the following settings, grouped by category. Many of these settings have sensible defaults that allow you to run Elastic Agent with minimal configuration.

Commonly used settings

Setting Description


enabled

(boolean) Enables or disables the output. If set to false, the output is disabled.

Default: true


hosts

(list) The list of Elasticsearch nodes to connect to. The events are distributed to these nodes in round robin order. If one node becomes unreachable, the event is automatically sent to another node. Each Elasticsearch node can be defined as a URL or IP:PORT. For example: http://192.15.3.2, https://es.found.io:9230, or 192.24.3.2:9300. If no port is specified, 9200 is used.

When a node is defined as an IP:PORT, the scheme and path are taken from the protocol and path settings.

    type: elasticsearch
    hosts: ["10.45.3.2:9220", "10.45.3.1:9230"]
    protocol: https
    path: /elasticsearch

In this example, the Elasticsearch nodes are available at https://10.45.3.2:9220/elasticsearch and https://10.45.3.1:9230/elasticsearch.


protocol

(string) The name of the protocol Elasticsearch is reachable on. The options are http or https. The default is http. However, if you specify a URL for hosts, the value of protocol is overridden by whatever scheme you specify in the URL.


proxy_disable

(boolean) If set to true, all proxy settings, including the HTTP_PROXY and HTTPS_PROXY environment variables, are ignored.

Default: false


proxy_headers

(string) Additional headers to send to proxies during CONNECT requests.


proxy_url

(string) The URL of the proxy to use when connecting to the Elasticsearch servers. The value may be either a complete URL or a host[:port], in which case the http scheme is assumed. If a value is not specified through the configuration file, proxy environment variables are used. See the Go documentation for more information about the environment variables.
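As a minimal sketch, the proxy settings might be combined like this (the proxy address is hypothetical; the host matches the examples used elsewhere on this page):

```yaml
type: elasticsearch
hosts: ["https://myEShost:9200"]
# Route traffic through an internal proxy (hypothetical address).
# The http scheme is assumed for host:port values.
proxy_url: "http://proxy.internal:3128"
```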

Authentication settings

When sending data to a secured cluster through the elasticsearch output, Elastic Agent can use any of the following authentication methods:

Basic authentication credentials
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    username: "your-username"
    password: "your-password"
Setting Description


password

(string) The basic authentication password for connecting to Elasticsearch.


username

(string) The basic authentication username for connecting to Elasticsearch.

This user needs the privileges required to publish events to Elasticsearch.

Note that in an Elastic Cloud serverless environment you need to use token-based (API key) authentication.

Token-based (API key) authentication
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    api_key: "KnR6yE41RrSowb0kQ0HWoA"
Setting Description


api_key

(string) Instead of using a username and password, you can use API keys to secure communication with Elasticsearch. The value must be the ID of the API key and the API key joined by a colon: id:api_key. Token-based authentication is required in an Elastic Cloud serverless environment.

Public Key Infrastructure (PKI) certificates
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    ssl.certificate: "/etc/pki/client/cert.pem"
    ssl.key: "/etc/pki/client/cert.key"

For a list of available settings, refer to SSL/TLS, specifically the settings under Table 4, “Common configuration options” and Table 5, “Client configuration options”.


Kerberos

The following encryption types are supported:

  • aes128-cts-hmac-sha1-96
  • aes128-cts-hmac-sha256-128
  • aes256-cts-hmac-sha1-96
  • aes256-cts-hmac-sha384-192
  • des3-cbc-sha1-kd
  • rc4-hmac

Example output config with Kerberos password-based authentication:

    type: elasticsearch
    hosts: ["http://my-elasticsearch.elastic.co:9200"]
    kerberos.auth_type: password
    kerberos.username: "elastic"
    kerberos.password: "changeme"
    kerberos.config_path: "/etc/krb5.conf"
    kerberos.realm: "ELASTIC.CO"

The service principal name for the Elasticsearch instance is constructed from these options. Based on this configuration, the name would be:

    HTTP/my-elasticsearch.elastic.co@ELASTIC.CO

Setting Description


kerberos.auth_type

(string) The type of authentication to use with Kerberos KDC:

password
When specified, also set kerberos.username and kerberos.password.

keytab
When specified, also set kerberos.username and kerberos.keytab. The keytab must contain the keys of the selected principal, or authentication fails.

Default: password


kerberos.config_path

(string) Path to the krb5.conf file. Elastic Agent uses this setting to find the Kerberos KDC to retrieve a ticket.


kerberos.enabled

(boolean) Enables or disables the Kerberos configuration.

Kerberos settings are disabled if either enabled is set to false or the kerberos section is missing.


kerberos.enable_krb5_fast

(boolean) If true, enables Kerberos FAST authentication. This may conflict with some Active Directory installations.

Default: false


kerberos.keytab

(string) If kerberos.auth_type is keytab, provide the path to the keytab of the selected principal.


kerberos.password

(string) If kerberos.auth_type is password, provide a password for the selected principal.


kerberos.realm

(string) Name of the realm where the output resides.


kerberos.username

(string) Name of the principal used to connect to the output.

Data parsing, filtering, and manipulation settings

Settings used to parse, filter, and transform data.

Setting Description


escape_html

(boolean) Configures escaping of HTML in strings. Set to true to enable escaping.

Default: false
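For instance, a minimal sketch that enables HTML escaping for all string fields before events are sent:

```yaml
type: elasticsearch
hosts: ["https://myEShost:9200"]
# Escape HTML in string values; disabled by default.
escape_html: true
```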


pipeline

(string) A format string value that specifies the ingest pipeline to write events to.

    type: elasticsearch
    hosts: ["http://localhost:9200"]
    pipeline: my_pipeline_id

You can set the ingest pipeline dynamically by using a format string to access any event field. For example, this configuration uses a custom field, fields.log_type, to set the pipeline for each event:

    type: elasticsearch
    hosts: ["http://localhost:9200"]
    pipeline: "%{[fields.log_type]}_pipeline"

With this configuration, all events with log_type: normal are sent to a pipeline named normal_pipeline, and all events with log_type: critical are sent to a pipeline named critical_pipeline.

To learn how to add custom fields to events, see the fields option.

See the pipelines setting for other ways to set the ingest pipeline dynamically.


pipelines

An array of pipeline selector rules. Each rule specifies the ingest pipeline to use for events that match the rule. During publishing, Elastic Agent uses the first matching rule in the array. Rules can contain conditionals, format string-based fields, and name mappings. If the pipelines setting is missing or no rule matches, the pipeline setting is used.

Rule settings:

pipeline
The pipeline format string to use. If this string contains field references, such as %{[fields.name]}, the fields must exist, or the rule fails.

mappings
A dictionary that takes the value returned by pipeline and maps it to a new name.

default
The default string value to use if mappings does not find a match.

when
A condition that must succeed in order to execute the current rule.

All the conditions supported by processors are also supported here.

The following example sends events to a specific pipeline based on whether the message field contains the specified string:

    type: elasticsearch
    hosts: ["http://localhost:9200"]
    pipelines:
      - pipeline: "warning_pipeline"
        when.contains:
          message: "WARN"
      - pipeline: "error_pipeline"
        when.contains:
          message: "ERR"

The following example sets the pipeline by taking the name returned by the pipeline format string and mapping it to a new name that’s used for the pipeline:

    type: elasticsearch
    hosts: ["http://localhost:9200"]
    pipelines:
      - pipeline: "%{[fields.log_type]}"
        mappings:
          critical: "sev1_pipeline"
          normal: "sev2_pipeline"
        default: "sev3_pipeline"

With this configuration, all events with log_type: critical are sent to sev1_pipeline, all events with log_type: normal are sent to sev2_pipeline, and all other events are sent to sev3_pipeline.

HTTP settings

Settings that modify the HTTP requests sent to Elasticsearch.

Setting Description


headers

Custom HTTP headers to add to each request created by the Elasticsearch output. Example:

    type: elasticsearch
    headers:
      X-My-Header: Header contents

Specify multiple header values for the same header name by separating them with a comma.


parameters

Dictionary of HTTP parameters to pass within the URL with index operations.


path

(string) An HTTP path prefix that is prepended to the HTTP API calls. This is useful for cases where Elasticsearch listens behind an HTTP reverse proxy that exports the API under a custom prefix.
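As an illustrative sketch (the prefix and parameter values are hypothetical, not recommendations), these HTTP settings might be combined when Elasticsearch sits behind a reverse proxy:

```yaml
type: elasticsearch
hosts: ["https://myEShost:9200"]
# Prefix prepended to every HTTP API call (hypothetical reverse-proxy prefix).
path: /elasticsearch
# Query parameters appended to index operations; routing is a standard
# Elasticsearch bulk API query parameter (hypothetical value).
parameters:
  routing: "host"
```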

Memory queue settings

The memory queue keeps all events in memory.

The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new events can be inserted; only after the output signals completion does the queue free up space to accept more events.

The memory queue is controlled by the parameters queue.mem.flush.min_events and queue.mem.flush.timeout. If queue.mem.flush.timeout is 0s or queue.mem.flush.min_events is 0 or 1 then events can be sent by the output as soon as they are available. If the output supports a bulk_max_size parameter it controls the maximum batch size that can be sent.

If queue.mem.flush.min_events is greater than 1 and queue.mem.flush.timeout is greater than 0s, events are sent to the output only when the queue contains at least queue.mem.flush.min_events events or the queue.mem.flush.timeout period has expired. In this mode the maximum batch size that can be sent by the output is queue.mem.flush.min_events. If the output supports a bulk_max_size parameter, values of bulk_max_size greater than queue.mem.flush.min_events have no effect. The value of queue.mem.flush.min_events should be evenly divisible by bulk_max_size to avoid sending partial batches to the output.

This sample configuration forwards events to the output if 512 events are available or the oldest available event has been waiting for 5s in the queue:

    queue.mem.events: 4096
    queue.mem.flush.min_events: 512
    queue.mem.flush.timeout: 5s
Setting Description

queue.mem.events

The number of events the queue can store. This value should be evenly divisible by queue.mem.flush.min_events to avoid sending partial batches to the output.

Default: 3200 events


queue.mem.flush.min_events

The minimum number of events required for publishing. If this value is set to 0 or 1, events are available to the output immediately. If this value is greater than 1, the output must wait for the queue to accumulate this minimum number of events or for queue.mem.flush.timeout to expire before publishing. When greater than 1, this value also defines the maximum possible batch that can be sent by the output.

Default: 1600 events


queue.mem.flush.timeout

(string) The maximum wait time for queue.mem.flush.min_events to be fulfilled. If set to 0s, events are available to the output immediately.

Default: 10s
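Conversely, a sketch of a low-latency configuration that hands events to the output as soon as they arrive (values are illustrative):

```yaml
queue.mem.events: 4096
# With min_events at 0 (or 1), or a timeout of 0s, events skip the
# batching wait and become available to the output immediately.
queue.mem.flush.min_events: 0
queue.mem.flush.timeout: 0s
```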

Performance tuning settings

Settings that may affect performance when sending data through the Elasticsearch output.

Use the preset option to automatically configure the group of performance tuning settings to optimize for throughput, scale, or latency, or to select a balanced set of performance specifications.

The performance tuning preset values take precedence over any settings that may be defined separately. If you want to change any setting, set preset to custom and specify the performance tuning settings individually.

Setting Description


backoff.init

(string) The number of seconds to wait before trying to reconnect to Elasticsearch after a network error. After waiting backoff.init seconds, Elastic Agent tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to backoff.max. After a successful connection, the backoff timer is reset.

Default: 1s


backoff.max

(string) The maximum number of seconds to wait before attempting to connect to Elasticsearch after a network error.

Default: 60s


bulk_max_size

(int) The maximum number of events to bulk in a single Elasticsearch bulk API index request.

Events can be collected into batches. Elastic Agent will split batches larger than bulk_max_size into multiple batches.

Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, large batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.

Setting bulk_max_size to values less than or equal to 0 turns off the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.

Default: 1600


compression_level

(int) The gzip compression level. Set this value to 0 to disable compression. The compression level must be in the range of 1 (best speed) to 9 (best compression).

Increasing the compression level reduces network usage but increases CPU usage.

Default: 1


max_retries

(int) The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped.

Set max_retries to a value less than 0 to retry until all events are published.

Default: 3


preset

Configures the full group of performance tuning settings to optimize your Elastic Agent performance when sending data to an Elasticsearch output.

Refer to Performance tuning settings for a table showing the group of values associated with any preset, and another table showing EPS (events per second) results from testing the different preset options.

Performance tuning preset settings:

balanced
Configure the default tuning setting values for "out-of-the-box" performance.

throughput
Optimize the Elasticsearch output for throughput.

scale
Optimize the Elasticsearch output for scale.

latency
Optimize the Elasticsearch output to reduce latency.

custom
Use the custom option to fine-tune the performance tuning settings individually.

Default: balanced


timeout

(string) The HTTP request timeout in seconds for the Elasticsearch request.

Default: 90s


worker

(int) The number of workers per configured host publishing events. Example: If you have two hosts and three workers, in total six workers are started (three for each host).

Default: 1
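Putting these settings together, a sketch of a custom-tuned output (all values are illustrative, not recommendations):

```yaml
type: elasticsearch
hosts: ["https://myEShost:9200"]
# Take manual control of the tuning settings below; preset values
# otherwise take precedence over individually defined settings.
preset: custom
# Larger batches lower per-request overhead, at the cost of processing time.
bulk_max_size: 3200
# Two publishing workers per configured host.
worker: 2
# Trade CPU for network bandwidth with moderate gzip compression.
compression_level: 3
# Retry behavior after network errors.
backoff.init: 1s
backoff.max: 60s
timeout: 90s
```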