Elasticsearch filter plugin v4.3.0
- Plugin version: v4.3.0
- Released on: 2025-07-21
- Changelog
For other versions, see the overview list.
To learn more about Logstash, see the Logstash Reference.
Getting help
For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.
Description
Search Elasticsearch for a previous log event and copy some fields from it into the current event. Below are two complete examples of how this filter might be used.
The first example uses the legacy query parameter where the user is limited to
an Elasticsearch query_string.
Whenever Logstash receives an "end" event, it uses this elasticsearch
filter to find the matching "start" event based on some operation identifier.
Then it copies the @timestamp field from the "start" event into a new field on
the "end" event. Finally, using a combination of the "date" filter and the
"ruby" filter, we calculate the time duration in hours between the two events.
if [type] == "end" {
  elasticsearch {
    hosts => ["es-server"]
    query => "type:start AND operation:%{[opid]}"
    fields => { "@timestamp" => "started" }
  }
  date {
    match => ["[started]", "ISO8601"]
    target => "[started]"
  }
  ruby {
    code => "event.set('duration_hrs', (event.get('@timestamp') - event.get('started')) / 3600)"
  }
}
The example below reproduces the first example but uses query_template. A query_template represents a full Elasticsearch query DSL and supports the standard Logstash field substitution syntax. The example below issues the same query as the first example but uses the template shown.
if [type] == "end" {
  elasticsearch {
    hosts => ["es-server"]
    query_template => "template.json"
    fields => { "@timestamp" => "started" }
  }
  date {
    match => ["[started]", "ISO8601"]
    target => "[started]"
  }
  ruby {
    code => "event.set('duration_hrs', (event.get('@timestamp') - event.get('started')) / 3600)"
  }
}
template.json:
{
  "size": 1,
  "sort" : [ { "@timestamp" : "desc" } ],
  "query": {
    "query_string": {
      "query": "type:start AND operation:%{[opid]}"
    }
  },
  "_source": ["@timestamp"]
}
As illustrated above, through the use of opid, fields from the Logstash events can be referenced within the template. The template will be populated per event prior to being used to query Elasticsearch.
Notice also that when you use query_template, the Logstash attributes result_size
and sort will be ignored. They should be specified directly in the JSON
template, as shown in the example above.
Authentication
Authentication to a secure Elasticsearch cluster is possible using one of the following options: user/password, cloud_auth, or api_key.
Authorization
Authorization to a secure Elasticsearch cluster requires read permission at index level and monitoring permissions at cluster level.
The monitoring permission at cluster level is necessary to perform periodic connectivity checks.
ES|QL support
Elasticsearch Query Language (ES|QL) provides a SQL-like interface for querying your Elasticsearch data.
To use ES|QL, this plugin needs to be installed in Logstash 8.17.4 or newer, and must be connected to Elasticsearch 8.11 or newer.
To configure an ES|QL query in the plugin, set the query parameter to your ES|QL query and set query_type to esql.
We recommend understanding ES|QL current limitations before using it in production environments.
The following is a basic ES|QL query that sets the food name on the transaction event based on the upstream event’s food ID:
filter {
  elasticsearch {
    hosts => [ 'https://..']
    api_key => '....'
    query => '
      FROM food-index
        | WHERE id == ?food_id
    '
    query_params => {
      "food_id" => "[food][id]"
    }
  }
}
Set config.support_escapes: true in logstash.yml if you need to escape special chars in the query.
In the result event, the plugin sets the total result size in the [@metadata][total_values] field.
Mapping ES|QL result to Logstash event
ES|QL returns query results in a structured tabular format, where data is organized into columns (fields) and values (entries). The plugin maps each value entry to an event, populating corresponding fields. For example, a query might produce a table like:
| timestamp | user_id | action | status.code | status.desc |
|---|---|---|---|---|
| 2025-04-10T12:00:00 | 123 | login | 200 | Success |
| 2025-04-10T12:05:00 | 456 | purchase | 403 | Forbidden (unauthorized user) |
For this case, the plugin creates the two JSON-like objects below and places them into the target field of the event if target is defined.
If target is not defined, the plugin places only the first result at the root of the event.
[
  {
    "timestamp": "2025-04-10T12:00:00",
    "user_id": 123,
    "action": "login",
    "status": {
      "code": 200,
      "desc": "Success"
    }
  },
  {
    "timestamp": "2025-04-10T12:05:00",
    "user_id": 456,
    "action": "purchase",
    "status": {
      "code": 403,
      "desc": "Forbidden (unauthorized user)"
    }
  }
]
If your index has a mapping with sub-objects where status.code and status.desc are actually dotted field names, they appear in Logstash events as a nested structure.
Conflict on multi-fields
An ES|QL query fetches both parent fields and sub-fields if your Elasticsearch index has multi-fields or sub-objects.
Since Logstash events cannot contain a parent field’s concrete value and its sub-field values together, the plugin ignores the sub-fields with a warning and includes the parent.
We recommend using the RENAME (or DROP, to avoid the warning) keyword in your ES|QL query to explicitly rename the fields you want included in the event.
This is a common occurrence if your template or mapping follows the pattern of always indexing strings as a "text" (field) + "keyword" (field.keyword) multi-field.
In this case it’s recommended to do KEEP field if the string is identical and there is only one sub-field, as the engine will optimize and retrieve the keyword; otherwise you can do KEEP field.keyword | RENAME field.keyword AS field.
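As a sketch of that workaround (the index name and message field here are illustrative, not taken from the plugin docs):

```
filter {
  elasticsearch {
    hosts      => ["es-server"]
    query_type => "esql"
    # Keep only the keyword sub-field, then rename it back to the parent name
    # so the event ends up with a single, unambiguous "message" field.
    query      => 'FROM my-index | KEEP message.keyword | RENAME message.keyword AS message'
  }
}
```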
To illustrate the situation with an example, assume your mapping has a time field with time.min and time.max sub-fields, as follows:
"properties": {
  "time": { "type": "long" },
  "time.min": { "type": "long" },
  "time.max": { "type": "long" }
}
The ES|QL result will contain all three fields, but the plugin cannot map them into the Logstash event.
To avoid this, you can use the RENAME keyword to rename the time parent field so that all three fields have unique names.
...
query => 'FROM my-index | RENAME time AS time.current'
...
For comprehensive ES|QL syntax reference and best practices, see the ES|QL documentation.
Elasticsearch Filter Configuration Options
This plugin supports the following configuration options plus the Common options described later.
As of version 4.0.0 of this plugin, a number of previously deprecated settings related to SSL have been removed. Please see the
Elasticsearch Filter Obsolete Configuration Options for more details.
| Setting | Input type | Required |
|---|---|---|
| aggregation_fields | hash | No |
| api_key | password | No |
| ca_trusted_fingerprint | string | No |
| cloud_auth | password | No |
| cloud_id | string | No |
| custom_headers | hash | No |
| docinfo_fields | hash | No |
| fields | array | No |
| hosts | array | No |
| index | string | No |
| password | password | No |
| proxy | uri | No |
| query | string | No |
| query_type | string, one of ["dsl", "esql"] | No |
| query_params | hash | No |
| query_template | string | No |
| retry_on_failure | number | No |
| retry_on_status | array | No |
| sort | string | No |
| ssl_certificate | path | No |
| ssl_certificate_authorities | list of path | No |
| ssl_cipher_suites | list of string | No |
| ssl_enabled | boolean | No |
| ssl_key | path | No |
| ssl_keystore_password | password | No |
| ssl_keystore_path | path | No |
| ssl_keystore_type | string, one of ["jks", "pkcs12"] | No |
| ssl_supported_protocols | string | No |
| ssl_truststore_password | password | No |
| ssl_truststore_path | path | No |
| ssl_truststore_type | string, one of ["jks", "pkcs12"] | No |
| ssl_verification_mode | string, one of ["full", "none"] | No |
| tag_on_failure | array | No |
| target | string | No |
Also see Common options for a list of options supported by all filter plugins.
aggregation_fields
- Value type is hash
- Default value is {}
- Format: "aggregation_name" => "[path][on][event]":
  - aggregation_name: aggregation name in result from Elasticsearch
  - [path][on][event]: path for where to place the value on the current event, using field-reference notation

A mapping of aggregations to copy into the target of the current event.
Example:
filter {
  elasticsearch {
    aggregation_fields => {
      "my_agg_name" => "my_ls_field"
    }
  }
}
api_key
- Value type is password
- There is no default value for this setting.
Authenticate using Elasticsearch API key. Note that this option also requires
enabling the ssl_enabled option.
Format is id:api_key where id and api_key are as returned by the
Elasticsearch Create API key API.
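A minimal sketch of API-key authentication; the host and the id:api_key value are placeholders:

```
filter {
  elasticsearch {
    hosts       => ["https://es-server:9200"]
    ssl_enabled => true                   # api_key requires TLS
    api_key     => "my_id:my_api_key"    # id:api_key as returned by the Create API key API
  }
}
```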
ca_trusted_fingerprint
- Value type is string, and must contain exactly 64 hexadecimal characters.
- There is no default value for this setting.
- Use of this option requires Logstash 8.3+
The SHA-256 fingerprint of an SSL Certificate Authority to trust, such as the autogenerated self-signed CA for an Elasticsearch cluster.
cloud_auth
- Value type is password
- There is no default value for this setting.
The cloud authentication string ("<username>:<password>" format) is an alternative to the user/password pair.
For more info, check out the Logstash-to-Cloud documentation.
cloud_id
- Value type is string
- There is no default value for this setting.
Cloud ID, from the Elastic Cloud web console. If set, hosts should not be used.
For more info, check out the Logstash-to-Cloud documentation.
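A sketch combining cloud_id with cloud_auth; both values are placeholders copied from the Elastic Cloud console:

```
filter {
  elasticsearch {
    cloud_id   => "<deployment name>:<base64 cloud id>"  # from the Elastic Cloud console
    cloud_auth => "elastic:<password>"                   # "<username>:<password>" format
  }
}
```

Because cloud_id carries the endpoint, hosts is not set here.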
custom_headers
- Value type is hash
- Default value is empty
Pass a set of key value pairs as the headers sent in each request to Elasticsearch. These custom headers will override any headers previously set by the plugin such as the User Agent or Authorization headers.
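For example, a custom header could be sent on every request like this (the header name and value are illustrative):

```
filter {
  elasticsearch {
    custom_headers => {
      "X-Request-Source" => "logstash-pipeline-1"  # illustrative header; overrides any header of the same name
    }
  }
}
```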
docinfo_fields
- Value type is hash
- Default value is {}
- Format: "path.in.source" => "[path][on][event]":
  - path.in.source: field path in document source of result from Elasticsearch, using dot-notation
  - [path][on][event]: path for where to place the value on the current event, using field-reference notation

A mapping of docinfo (_source) fields to copy into the target of the current event.
Example:
filter {
  elasticsearch {
    docinfo_fields => {
      "_id" => "document_id"
      "_index" => "document_index"
    }
  }
}
fields
- Value type is array
- Default value is {}
- Format: "path.in.result" => "[path][on][event]":
  - path.in.result: field path in indexed result from Elasticsearch, using dot-notation
  - [path][on][event]: path for where to place the value on the current event, using field-reference notation

A mapping of indexed fields to copy into the target of the current event.
In the following example, the values of @timestamp and event_id on the event
found via elasticsearch are copied to the current event’s
started and start_id fields, respectively:
fields => {
  "@timestamp" => "started"
  "event_id" => "start_id"
}
hosts
- Value type is array
- Default value is ["localhost:9200"]

List of Elasticsearch hosts to use for querying.
index
- Value type is string
- Default value is ""

Comma-delimited list of index names to search; use _all or an empty string to perform the operation on all indices.
Field substitution (e.g. index-name-%{date_field}) is available.
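A sketch of field substitution in index; the service field and index pattern are illustrative:

```
filter {
  elasticsearch {
    index => "logs-%{service}"   # resolved per event before querying
    query => "status:error"
  }
}
```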
password
- Value type is password
- There is no default value for this setting.
Basic Auth - password
proxy
- Value type is uri
- There is no default value for this setting.
Set the address of a forward HTTP proxy.
An empty string is treated as if proxy was not set, and is useful when using
environment variables e.g. proxy => '${LS_PROXY:}'.
query
- Value type is string
- There is no default value for this setting.
The query to be executed.
The accepted query shape is a DSL query string or ES|QL.
For a DSL query string, use either query or query_template.
Read the Elasticsearch query string documentation or the Elasticsearch ES|QL documentation for more information.
query_type
- Value can be any of: dsl, esql
- Default value is dsl

Defines the query shape.
When dsl, the query shape must be a valid Elasticsearch JSON-style string.
When esql, the query shape must be a valid ES|QL string, and the index, query_template, and sort parameters are not allowed.
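A minimal sketch of switching the plugin to ES|QL; the index name and filter condition are illustrative:

```
filter {
  elasticsearch {
    query_type => "esql"
    query      => 'FROM logs-index | WHERE status == 500 | LIMIT 1'
    target     => "es_result"   # recommended with esql; see the target option
  }
}
```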
query_params
Named parameters in ES|QL to send to Elasticsearch together with the query.
Visit passing parameters to query page for more information.
query_template
- Value type is string
- There is no default value for this setting.

File path to an Elasticsearch query in DSL format. More information is available in
the Elasticsearch query documentation.
Use either query or query_template.
retry_on_failure
- Value type is number
- Default value is 0 (retries disabled)

How many times to retry an individual failed request.
When enabled, requests that result in connection errors or an HTTP status code included in retry_on_status are retried.
retry_on_status
- Value type is array
- Default value is an empty list []

Which HTTP status codes to consider for retries (in addition to connection errors) when using retry_on_failure.
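For instance, to retry transient upstream failures up to three times (the status codes are chosen for illustration):

```
filter {
  elasticsearch {
    retry_on_failure => 3
    retry_on_status  => [500, 502, 503, 504]  # retried in addition to connection errors
  }
}
```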
sort
- Value type is string
- Default value is "@timestamp:desc"

Comma-delimited list of <field>:<direction> pairs that define the sort order.
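A sketch of a multi-key sort; the sequence field is a hypothetical tie-breaker:

```
filter {
  elasticsearch {
    query => "type:start"
    sort  => "@timestamp:desc,sequence:asc"  # newest first, then by sequence
  }
}
```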
ssl_certificate
- Value type is path
- There is no default value for this setting.
SSL certificate to use to authenticate the client. This certificate should be an OpenSSL-style X.509 certificate file.
This setting can be used only if ssl_key is set.
ssl_certificate_authorities
- Value type is a list of path
- There is no default value for this setting.
The .cer or .pem files to validate the server’s certificate.
You cannot use this setting and ssl_truststore_path at the same time.
ssl_cipher_suites
- Value type is a list of string
- There is no default value for this setting.
The list of cipher suites to use, listed by priorities. Supported cipher suites vary depending on the Java and protocol versions.
ssl_enabled
- Value type is boolean
- There is no default value for this setting.
Enable SSL/TLS secured communication to Elasticsearch cluster.
Leaving this unspecified will use whatever scheme is specified in the URLs listed in hosts or extracted from the cloud_id.
If no explicit protocol is specified plain HTTP will be used.
ssl_key
- Value type is path
- There is no default value for this setting.
OpenSSL-style RSA private key that corresponds to the ssl_certificate.
This setting can be used only if ssl_certificate is set.
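A sketch of client-certificate authentication; the file paths are placeholders:

```
filter {
  elasticsearch {
    hosts           => ["https://es-server:9200"]
    ssl_enabled     => true
    ssl_certificate => "/path/to/client.crt"  # must be set together with ssl_key
    ssl_key         => "/path/to/client.key"  # OpenSSL-style RSA private key for the certificate
  }
}
```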
ssl_keystore_password
- Value type is password
- There is no default value for this setting.

Set the keystore password.
ssl_keystore_path
- Value type is path
- There is no default value for this setting.

The keystore used to present a certificate to the server.
It can be either .jks or .p12.
You cannot use this setting and ssl_certificate at the same time.
ssl_keystore_type
- Value can be any of: jks, pkcs12
- If not provided, the value will be inferred from the keystore filename.

The format of the keystore file. It must be either jks or pkcs12.
ssl_supported_protocols
- Value type is string
- Allowed values are: 'TLSv1.1', 'TLSv1.2', 'TLSv1.3'
- Default depends on the JDK being used. With up-to-date Logstash, the default is ['TLSv1.2', 'TLSv1.3']. 'TLSv1.1' is not considered secure and is only provided for legacy applications.
List of allowed SSL/TLS versions to use when establishing a connection to the Elasticsearch cluster.
For Java 8 'TLSv1.3' is supported only since 8u262 (AdoptOpenJDK), but requires that you set the
LS_JAVA_OPTS="-Djdk.tls.client.protocols=TLSv1.3" system property in Logstash.
If you configure the plugin to use 'TLSv1.1' on any recent JVM, such as the one packaged with Logstash,
the protocol is disabled by default and needs to be enabled manually by changing jdk.tls.disabledAlgorithms in
the $JDK_HOME/conf/security/java.security configuration file. That is, TLSv1.1 needs to be removed from the list.
ssl_truststore_password
- Value type is password
- There is no default value for this setting.

Set the truststore password.
ssl_truststore_path
- Value type is path
- There is no default value for this setting.
The truststore to validate the server’s certificate.
It can be either .jks or .p12.
You cannot use this setting and ssl_certificate_authorities at the same time.
ssl_truststore_type
- Value can be any of: jks, pkcs12
- If not provided, the value will be inferred from the truststore filename.

The format of the truststore file. It must be either jks or pkcs12.
ssl_verification_mode
- Value can be any of: full, none
- Default value is full

Defines how to verify the certificates presented by another party in the TLS connection:
full validates that the server certificate has an issue date within the not_before and not_after dates, chains to a trusted Certificate Authority (CA), and has a hostname or IP address that matches the names within the certificate.
none performs no certificate validation.
Setting certificate verification to none disables many security benefits of SSL/TLS, which is very dangerous. For more information on disabling certificate verification please read https://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf
tag_on_failure
- Value type is array
- Default value is ["_elasticsearch_lookup_failure"]

Tags the event on failure to look up previous log event information. This can be used in later analysis.
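The failure tag can then drive later pipeline stages; a sketch with an illustrative custom tag and follow-up action:

```
filter {
  elasticsearch {
    query          => "type:start AND operation:%{[opid]}"
    tag_on_failure => ["_lookup_failed"]   # custom tag instead of the default
  }
  # Route events whose lookup failed:
  if "_lookup_failed" in [tags] {
    mutate { add_field => { "lookup_status" => "miss" } }
  }
}
```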
target
- Value type is string
- There is no default value for this setting.

Define the target field for placing the result data.
If this setting is omitted, the target will be the root (top level) of the event.
Setting it is highly recommended when using query_type=>'esql', so that all query results are placed into the event.
When query_type=>'dsl', the destination fields specified in fields, aggregation_fields, and docinfo_fields are relative to this target.
For example, if you want the data to be put in the transaction field:
if [type] == "end" {
  elasticsearch {
    query => "type:start AND transaction:%{[transactionId]}"
    target => "transaction"
    fields => {
      "@timestamp" => "started"
      "transaction_id" => "id"
    }
  }
}
The fields will be expanded into a data structure in the target field; the overall shape looks like this:
{
  "transaction" => {
    "started" => "2025-04-29T12:01:46.263Z"
    "id" => "1234567890"
  }
}
When writing to a field that already exists on the event, the previous value will be overwritten.
Elasticsearch Filter Obsolete Configuration Options
As of version 4.0.0 of this plugin, some configuration options have been replaced.
The plugin will fail to start if the configuration contains any of these obsolete options.
| Setting | Replaced by |
|---|---|
| ca_file | ssl_certificate_authorities |
| keystore | ssl_keystore_path |
| keystore_password | ssl_keystore_password |
| ssl | ssl_enabled |
Common options
These configuration options are supported by all filter plugins:
| Setting | Input type | Required |
|---|---|---|
| add_field | hash | No |
| add_tag | array | No |
| enable_metric | boolean | No |
| id | string | No |
| periodic_flush | boolean | No |
| remove_field | array | No |
| remove_tag | array | No |
add_field
- Value type is hash
- Default value is {}
If this filter is successful, add any arbitrary fields to this event.
Field names can be dynamic and include parts of the event using the %{field} syntax.
Example:
filter {
  elasticsearch {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
# You can also add multiple fields at once:
filter {
  elasticsearch {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
If the event has field "somefield" == "hello", this filter, on success,
would add the field foo_hello with the value above, where the %{host} piece
is replaced with that value from the event. The second example would also
add a hardcoded field.
add_tag
- Value type is array
- Default value is []
If this filter is successful, add arbitrary tags to the event.
Tags can be dynamic and include parts of the event using the %{field}
syntax.
Example:
filter {
  elasticsearch {
    add_tag => [ "foo_%{somefield}" ]
  }
}
# You can also add multiple tags at once:
filter {
  elasticsearch {
    add_tag => [ "foo_%{somefield}", "taggedy_tag"]
  }
}
If the event has field "somefield" == "hello", this filter, on success,
would add a tag foo_hello (and the second example would of course add a taggedy_tag tag).
enable_metric
- Value type is boolean
- Default value is true

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
id
- Value type is string
- There is no default value for this setting.
Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one.
It is strongly recommended to set this ID in your configuration. This is particularly useful
when you have two or more plugins of the same type, for example, if you have 2 elasticsearch filters.
Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
filter {
  elasticsearch {
    id => "ABC"
  }
}
periodic_flush
- Value type is boolean
- Default value is false

Call the filter flush method at a regular interval. Optional.
remove_field
- Value type is array
- Default value is []

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the %{field} syntax. Example:
filter {
  elasticsearch {
    remove_field => [ "foo_%{somefield}" ]
  }
}
# You can also remove multiple fields at once:
filter {
  elasticsearch {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
If the event has field "somefield" == "hello" this filter, on success,
would remove the field with name foo_hello if it is present. The second
example would remove an additional, non-dynamic field.
remove_tag
- Value type is array
- Default value is []
If this filter is successful, remove arbitrary tags from the event.
Tags can be dynamic and include parts of the event using the %{field}
syntax.
Example:
filter {
  elasticsearch {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
# You can also remove multiple tags at once:
filter {
  elasticsearch {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
  }
}
If the event has field "somefield" == "hello" this filter, on success,
would remove the tag foo_hello if it is present. The second example
would remove a sad, unwanted tag as well.