Product release

Logstash 6.3.0 Released

Today is a big day -- we’re releasing the first version of Logstash since we’ve opened our X-Pack code. This is meaningful for a number of reasons, which we’ve covered in detail in our X-Pack opening announcement.

This release also brings a number of significant new features and performance improvements. We’ve improved our experimental Java execution engine, boosted performance through code refactoring in various parts of Logstash, and made communicating across pipelines easier and more efficient. On the plugin side, we've extended the S3 and SQS plugins to support custom endpoints and regions. Read on for a discussion of the highlights in this release. You can, of course, find the full list of changes in the release notes.

Read This if You Use Persistent Queues

Logstash 6.3.0 contains significant bug fixes for our Persistent Queue (PQ) feature. Unfortunately, we were forced to change the on-disk serialization format in 6.3.0. This means that if you currently use the PQ, you must drain or delete the queue on your current version before upgrading. You can find detailed instructions on this process in our Upgrading Persistent Queues documentation.
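
If you'd rather drain the queue than delete it, one approach is to enable queue.drain in logstash.yml so Logstash waits for the queue to empty before shutting down, then stop Logstash cleanly prior to the upgrade. This is only a sketch, and it assumes your current version supports the queue.drain setting -- follow the Upgrading Persistent Queues documentation for the full procedure:

# config/logstash.yml -- sketch only; see the Upgrading Persistent Queues docs for the full procedure
queue.type: persisted
queue.drain: true   # wait for the persistent queue to empty before shutting down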

Inter-pipeline Communication Just Got a LOT Easier (Beta)

Many people today use the multiple pipelines feature of Logstash along with an external queuing technology such as Redis to create a multi-staged processing pipeline. While that approach is appropriate in some scenarios, we’ve made this easier and more efficient with our Pipeline-to-Pipeline communication feature. It lets you connect pipelines within a Logstash process simply and with maximum efficiency, since everything stays within a single process.

It’s easy to use: we implemented this through a new built-in input and a new built-in output, both named pipeline. Here’s an example of this feature:

# config/pipelines.yml
- pipeline.id: upstream
  config.string: input { stdin {} } output { pipeline { send_to => [myVirtualAddress] } }
- pipeline.id: downstream
  config.string: input { pipeline { address => myVirtualAddress } }

You can read about this feature in detail on our Pipeline to Pipeline documentation page. This feature is currently in Beta.

Help Logstash by Test Driving Our Experimental Java Execution Feature!

We’ve made huge strides with our Java pipeline execution engine. With 6.3.0 we’ve been able to significantly improve pipeline compilation times for larger configs and improve the runtime performance of many configurations. This represents a major refactoring of a core piece of Logstash, so we’re not ready to turn it on by default just yet. We’d love more feedback on it, and bug reports if you hit any issues. Our current plan is to make this new engine the default in 6.4.0, so long as no significant outstanding issues are found. You can test this feature by enabling the --experimental-java-execution flag in 6.3.0. We’ve also written up a detailed breakdown of performance differences in 6.3.0.
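
For example, assuming your pipeline configuration lives in a file such as my-pipeline.conf (a placeholder name), you could start Logstash with the new engine enabled like this:

# Start Logstash with the experimental Java execution engine enabled
bin/logstash --experimental-java-execution -f my-pipeline.conf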

Support for custom endpoints and regions in S3 and SQS plugins

For a long time, Logstash has been able to read from and send data to both the S3 and SQS AWS services. Until now, these plugins connected implicitly to AWS and only allowed selecting a region from a hard-coded list. To support on-premises deployments of AWS cloud services, as well as other compatible services, we've added the ability to customize both the endpoint and the region for the S3 and SQS input and output plugins.
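
As a minimal sketch of what this enables -- assuming the new settings are exposed as endpoint and region options, and with the bucket name and URL below as placeholders -- an S3 input pointed at an S3-compatible service could look like this:

# Sketch: S3 input against an S3-compatible service (option names assumed; values are placeholders)
input {
  s3 {
    bucket   => "my-logs"
    region   => "custom-region-1"
    endpoint => "https://s3.example.internal"
  }
}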

Other plugin improvements