The Logstash Lines: 2.2 Release prep, JDBC Input enhancements!

Happy New Year to our users, and welcome back to The Logstash Lines! In these weekly posts, we'll share the latest happenings in the world of Logstash and its ecosystem.

Prepping for the 2.2 release:

This past week was spent testing internal release candidates for our next feature release - 2.2. We found and fixed the following issues:

  • Changed the default number of pipeline workers to use 100% of available cores. In our tests of the new pipeline architecture, using all cores gave the best performance for common scenarios like Apache log processing #4414.
  • Added a warning when the batch size for the new pipeline is configured too high.
  • Updated "Life of an Event" docs for the new architecture.
  • Introduced an API for declaring whether an output plugin is threadsafe. This allows specific outputs to take advantage of the parallel, batched nature of the pipeline #4391.
  • Introduced a bootstrap config file for Logstash to complement command line options. And, yes, this is different from the pipeline config. We started an initial implementation on a feature branch and would love any feedback on this item! #4401
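
As a rough sketch, the worker count and batch size above can be tuned from the command line. The flag values here are illustrative; check `bin/logstash --help` on your version for the exact names and defaults:

```
# Run with 8 pipeline workers and a batch of 125 events per worker
bin/logstash -w 8 -b 125 -f apache.conf
```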
Beats Input:

Refactored the beats input, primarily to fix thread synchronization issues under high data volume. Replaced the in-house blocking size queue with Java's SynchronousQueue, and reorganized the code to make testing easier.

File Input:

Added new settings, ignore_older and close_older, to mirror existing functionality in Filebeat. These help close file descriptors for files that are not being actively written to. Previously, users had to restart Logstash to release these resources.
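
A hedged sketch of how these settings might look in a file input; the path and values are illustrative, and units are seconds:

```
input {
  file {
    path => "/var/log/apache2/access.log"
    ignore_older => 86400   # skip files not modified in the last day
    close_older => 3600     # close the handle after an hour of inactivity
  }
}
```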

ES Output:

Reviewed and merged scripted update support for the HTTP protocol. Fixed a regression where HTTP errors would not sleep for max_retry_interval, sending the CPU into a spin.
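
For illustration, a scripted update with the elasticsearch output might look something like the following. The option names (action, script, script_type) are assumptions based on the plugin's update support; check the plugin docs for your version:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "counters"
    document_id => "%{counter_id}"
    action => "update"
    script => "ctx._source.count += 1"
    script_type => "inline"
  }
}
```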

Kafka plugins:

Work continues on rewriting the Kafka input to use the new 0.9 consumer API. We are also adding SSL support for both producer and consumer based on the new APIs.
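
Since this work is still in progress, option names may change, but a hypothetical sketch of an SSL-enabled consumer config could look something like:

```
input {
  kafka {
    topics => ["logs"]
    bootstrap_servers => "broker1:9093"
    security_protocol => "SSL"
    ssl_truststore_location => "/etc/logstash/kafka.truststore.jks"
    ssl_truststore_password => "changeit"
  }
}
```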

JDBC input:

Added functionality to save query run state using any numeric column, not just time-based columns #57.
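
A sketch of tracking state with a numeric column in the jdbc input; the connection string, table, and column names are illustrative:

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/app"
    jdbc_user => "logstash"
    statement => "SELECT * FROM events WHERE id > :sql_last_value"
    use_column_value => true      # track a column value instead of a timestamp
    tracking_column => "id"
    schedule => "*/5 * * * *"     # poll every five minutes
  }
}
```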

Until next time!

  • We're hiring

    Work for a global, distributed team where finding someone like you is just a Zoom meeting away. Flexible work with impact, and development opportunities from the start.