The Logstash output sends events directly to Logstash by using the lumberjack protocol, which runs over TCP. Logstash allows for additional processing and routing of generated events.
To use Logstash as an output, you must install and configure the Beats input plugin for Logstash.
If you want to use Logstash to perform additional processing on the data collected by Filebeat, you need to configure Filebeat to use Logstash.
To do this, you edit the Filebeat configuration file to disable the Elasticsearch output by commenting it out and enable the Logstash output by uncommenting the logstash section:
#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["127.0.0.1:5044"]
The hosts option specifies the Logstash server and the port (5044) where Logstash is configured to listen for incoming Beats connections.
For this configuration, you must load the index template into Elasticsearch manually because the options for auto loading the template are only available for the Elasticsearch output.
Every event sent to Logstash contains the following metadata fields that you can use in Logstash for indexing and filtering:
@metadata.beat: Filebeat uses the index root name as the value of this field. The default is filebeat. To change this value, set the index option in the Filebeat config file.
@metadata.version: The Beat's current version.
@metadata.type: The value of the @metadata.type field, added by the Logstash output, is deprecated, hardcoded to doc, and will be removed in Filebeat 7.0.
You can access this metadata from within the Logstash config file to set values dynamically based on the contents of the metadata.
For example, the following Logstash configuration file for versions 2.x and 5.x sets Logstash to use the index and document type reported by Beats for indexing events into Elasticsearch:
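A Logstash pipeline of the kind described here, sketched with the standard beats input and elasticsearch output plugins (the Elasticsearch host is a placeholder for your own), would look similar to this:

```text
input {
  beats {
    # Must match the port configured in output.logstash.hosts in filebeat.yml
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    # The template was loaded manually, so Logstash should not manage it
    manage_template => false
    # Reuse the index root name and document type reported by the Beat
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```

The `%{[@metadata][...]}` references are how the Logstash configuration reads the metadata fields listed above at indexing time.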
Events indexed into Elasticsearch with the Logstash configuration shown here will be similar to events directly indexed by Beats into Elasticsearch.
This output works with all compatible versions of Logstash. See "Supported Beats Versions" in the Elastic Support Matrix.
You can specify the following options in the logstash section of the filebeat.yml config file:
The enabled config is a boolean setting to enable or disable the output. If set to false, the output is disabled.
The default value is true.
The list of known Logstash servers to connect to. If load balancing is disabled, but multiple hosts are configured, one host is selected randomly (there is no precedence). If one host becomes unreachable, another one is selected randomly.
All entries in this list can contain a port number. If no port number is given, the value specified for port is used as the default port number.
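For example (the hostnames below are placeholders), a list can mix entries with and without explicit ports:

```yaml
output.logstash:
  # The second entry has no explicit port, so the value of the
  # (deprecated) port option is used for it
  hosts: ["logstash1.example.com:5044", "logstash2.example.com"]
```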
The gzip compression level. Setting this value to 0 disables compression. The compression level must be in the range of 1 (best speed) to 9 (best compression).
Increasing the compression level reduces network usage but increases CPU usage.
The default value is 3.
The number of workers per configured host publishing events to Logstash. This is best used with load balancing mode enabled. Example: If you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host).
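The example from this paragraph, as a sketch in filebeat.yml (hosts are placeholders):

```yaml
output.logstash:
  hosts: ["10.0.0.1:5044", "10.0.0.2:5044"]
  loadbalance: true
  # 3 workers per host: with 2 hosts, 6 publishing workers in total
  worker: 3
```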
If set to true and multiple Logstash hosts are configured, the output plugin load balances published events onto all Logstash hosts. If set to false, the output plugin sends all events to only one host (determined at random) and will switch to another host if the selected one becomes unresponsive. The default value is false.
Time to live for a connection to Logstash, after which the connection will be re-established. This setting is useful when the Logstash hosts represent load balancers. Because connections to Logstash hosts are sticky, operating behind load balancers can lead to uneven load distribution across instances. Specifying a TTL on the connection allows equal connection distribution across instances. Specifying a TTL of 0 disables this feature.
The default value is 0.
The "ttl" option is not yet supported on an async Logstash client (one with the "pipelining" option set).
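A sketch that re-establishes connections every 60 seconds, assuming the host entry points at a load balancer in front of several Logstash instances:

```yaml
output.logstash:
  # A load balancer address rather than an individual Logstash host
  hosts: ["logstash-lb.example.com:5044"]
  # Reconnect every 60s so sticky connections get redistributed;
  # requires that pipelining is not enabled
  ttl: 60s
```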
output.logstash:
  hosts: ["localhost:5044", "localhost:5045"]
  loadbalance: true
  index: filebeat
Configures the number of batches to be sent asynchronously to Logstash while waiting for an ACK from Logstash. The output only becomes blocking once the configured number of batches have been written. Pipelining is disabled if a value of 0 is configured. The default value is 2.
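For example, to disable pipelining so that each batch waits for its ACK before the next one is sent:

```yaml
output.logstash:
  hosts: ["127.0.0.1:5044"]
  # 0 disables pipelining; the output blocks on every batch
  pipelining: 0
```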
Deprecated in 5.0.0. The default port to use if the port number is not given in hosts.
The URL of the SOCKS5 proxy to use when connecting to the Logstash servers. The value must be a URL with a scheme of socks5://. Because the protocol used to communicate with Logstash is not based on HTTP, a web proxy cannot be used.
If the SOCKS5 proxy server requires client authentication, then a username and password can be embedded in the URL as shown in the example.
When using a proxy, hostnames are resolved on the proxy server instead of on the client. You can change this behavior by setting the proxy_use_local_resolver option.

output.logstash:
  hosts: ["remote-host:5044"]
  proxy_url: socks5://user:password@socks5-proxy:2233

The proxy_use_local_resolver option determines if Logstash hostnames are resolved locally when using a proxy. The default value is false, which means that when a proxy is used, name resolution occurs on the proxy server.
The index root name to write events to. The default is the Beat name. For example, "filebeat" generates "[filebeat-]YYYY.MM.DD" indices.
Configuration options for SSL parameters like the root CA for Logstash connections. See Specify SSL settings for more information. To use SSL, you must also configure the Beats input plugin for Logstash to use SSL/TLS.
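A sketch of what such SSL settings can look like in filebeat.yml; the hostname and certificate paths are placeholders for your own PKI layout:

```yaml
output.logstash:
  hosts: ["logs.example.com:5044"]
  # Root CA used to verify the Logstash server's certificate
  ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Client certificate and key, if the Beats input requires client auth
  ssl.certificate: "/etc/pki/client/cert.pem"
  ssl.key: "/etc/pki/client/cert.key"
```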
The number of seconds to wait for responses from the Logstash server before timing out. The default is 30 (seconds).
The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped. Filebeat, however, ignores the max_retries setting and retries until all events are published. The default is 3.
The maximum number of events to bulk in a single Logstash request. The default is 2048.
If the Beat sends single events, the events are collected into batches. If the Beat publishes a large batch of events (larger than the value specified by bulk_max_size), the batch is split.
Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.
Setting bulk_max_size to values less than or equal to 0 disables the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.
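For example, to cap each Logstash request at a smaller batch size than the 2048-event default:

```yaml
output.logstash:
  hosts: ["127.0.0.1:5044"]
  # Batches larger than 1024 events are split before sending
  bulk_max_size: 1024
```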
If enabled, only a subset of events in a batch is transferred per transaction. The number of events to be sent increases up to bulk_max_size if no error is encountered. On error, the number of events per transaction is reduced again.
The default is