Logstash Lines: bug fixes, better environment variable support and more

Welcome back to The Logstash Lines! In these weekly posts, we'll share the latest happenings in the world of Logstash and its ecosystem.

Changes in 5.4.0

  • We now prevent multiple Logstash instances from starting with the same data directory. A file lock is created under path.data when a Logstash instance starts. To run multiple Logstash instances on the same box, just override --path.data for each instance to point to a separate directory.
  • Fixed environment variable resolution for nested hash type configurations. Previously, environment variables in the Logstash config would only resolve in top-level values of a hash data type option.
  • Avro Filter: Added a tag_on_failure config option that tags an event with _avroparsefailure when the raw event cannot be parsed against the Avro schema. The raw message is still retained in the event.
  • Elasticsearch Output: Support request and response compression. Request compression can be enabled by setting the http_compression option. Response compression is enabled by default and works out of the box with Elasticsearch 5.0 and above.
  • S3 Input: Additional metadata such as bucket, path, and prefix is now available in the event's @metadata field, where it can be used by other plugins in the pipeline.
  • S3 Output: Fixed an issue where empty gzip files were being uploaded to S3 when using the gzip encoding.
  • Setting the --path.data option via CLI flag or logstash.yml now also makes path.queue relative to that data directory. Previously, if the data directory was changed, the queue directory would still use the default location.
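For example, two instances sharing one box can each be given their own data directory at startup (the paths below are purely illustrative):

```
# Each instance gets its own data directory, so the file lock
# created under path.data does not conflict between them.
bin/logstash --path.data /var/lib/logstash/instance1 -f pipeline1.conf
bin/logstash --path.data /var/lib/logstash/instance2 -f pipeline2.conf
```

With this release, path.queue for each instance is also resolved relative to the data directory given here.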
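As a sketch of the environment variable fix, a variable reference can now appear inside a nested hash option, not just in a top-level value. Here the elasticsearch output uses hypothetical ES_HOST and ES_TOKEN variables, and also enables the new request compression option:

```
output {
  elasticsearch {
    # Top-level value: env var resolution worked here before
    hosts => ["${ES_HOST}"]
    # Nested hash option: env var resolution now works here too
    parameters => {
      "token" => "${ES_TOKEN}"
    }
    # Opt in to gzip-compressed requests to Elasticsearch
    http_compression => true
  }
}
```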

Changes in 5.3.0

  • Fixed Persistent Queue issues on Windows. Logstash would crash when PQ was enabled on Windows and it tried to purge old data. This fix has been backported to 5.2.2 as well.
  • The Dissect filter, which can be used as an alternative to the grok filter for extracting fields, is now bundled with the Logstash release by default.
  • Fixed an issue where the JVM metrics collection code was hurting Logstash throughput: we were collecting more information from the JVM than the response required, and that expensive call has been fixed.
  • Fixed a bug where persistent queue recovery on Logstash startup was failing for certain use cases.
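To illustrate the newly bundled Dissect filter, a minimal configuration that splits a space-delimited log line into fields might look like this (field names are illustrative):

```
filter {
  dissect {
    # Extracts fields from a line such as:
    # "2017-03-15 12:00:00 INFO service started"
    mapping => {
      "message" => "%{date} %{time} %{level} %{msg}"
    }
  }
}
```

Unlike grok, dissect uses fixed delimiters rather than regular expressions, which makes it a faster choice when the log format is consistent.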

Elastic{ON} 2017

If you are coming to Elastic{ON} '17, our team would love to meet you! Please come see us at the AMA booth. We have plenty of new features and product updates that we can't wait to unveil!