Logstash Lines: Kafka 0.9 beta support, ES output memory fix
Welcome back to The Logstash Lines! In these weekly posts, we'll share the latest happenings in the world of Logstash and its ecosystem.
Support for Kafka v0.9
Apache Kafka released version 0.9 a couple of months ago, bringing new security features (SSL, client-based auth, access control), an improved consumer API, and much more. The biggest asks from Logstash users were support for the SSL encryption and client auth features. Over the last few months we've been working on implementing these features in the input and output plugins, and this week we released beta versions.
The new consumer library from Kafka has been greatly simplified: much of the logic (like rebalancing) has been pushed to the broker side, which meant we could use the Java APIs directly. While we were at it, we added more integration tests running on Travis and cleaned up some configs.
Note: these features require an upgrade of the Kafka broker -- a 0.8 producer/consumer will not work with a 0.9 broker, and the Logstash plugins are not backward compatible.
This is a beta release; please do not run it in production.
To install these versions of the plugins on Logstash (> 2.0.0):
bin/plugin install --version 3.0.0.beta3 logstash-input-kafka
bin/plugin install --version 3.0.0.beta1 logstash-output-kafka
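As a rough sketch of what a pipeline using the new SSL and client auth features might look like (the option names below are our assumption based on the beta and may change before the final release; the broker address, paths, and passwords are placeholders -- consult the plugin docs for the released versions):

```
input {
  kafka {
    bootstrap_servers => "broker1:9093"
    topics => ["logs"]
    ssl => true
    # Truststore holding the broker's CA, for SSL encryption:
    ssl_truststore_location => "/path/to/truststore.jks"
    ssl_truststore_password => "changeit"
    # Keystore holding the client certificate, for client auth:
    ssl_keystore_location => "/path/to/keystore.jks"
    ssl_keystore_password => "changeit"
  }
}
```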
Elasticsearch Output memory issue
Users recently ran into a memory leak when using the sniffing feature in the Elasticsearch output. The leak was caused by the ES output frequently instantiating Manticore::Client (the underlying HTTP library used in Logstash) while tearing down and reconnecting whenever hosts were updated via sniffing. The Manticore library has been patched to be more efficient in this scenario, and ES output version 2.5.3 has been released. To install it on Logstash 2.2:
bin/plugin install --version 2.5.3 logstash-output-elasticsearch
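For reference, sniffing is the code path where the leak appeared; it is enabled with the `sniffing` option on the output. A minimal sketch (the host name is a placeholder):

```
output {
  elasticsearch {
    hosts => ["es1:9200"]
    # Periodically refresh the host list from the cluster; each refresh
    # used to tear down and rebuild the HTTP client, triggering the leak.
    sniffing => true
  }
}
```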
Plugin Installation Bug
While validating the fix for the memory leak described above, we ran into a plugin installation issue when executing bin/plugin update. It turns out we didn't know about hidden files produced by the jar-dependencies library (used to package jars) and weren't correctly packaging directories like .mvn/* in our gem building process. The fix is in, and life is good in plugins-land (#4818).
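The root cause is easy to reproduce in plain Ruby: `Dir.glob` skips dotfiles unless `File::FNM_DOTMATCH` is passed, so a file list built with a default glob silently drops hidden directories like `.mvn`. A small sketch (the file names inside the hidden directory are illustrative):

```ruby
require "tmpdir"
require "fileutils"

Dir.mktmpdir do |dir|
  # Simulate a gem source tree containing a hidden directory, as
  # jar-dependencies produces (".mvn" is the real directory name from
  # the bug; the file inside is illustrative).
  FileUtils.mkdir_p(File.join(dir, ".mvn"))
  File.write(File.join(dir, ".mvn", "extensions.xml"), "<extensions/>")
  File.write(File.join(dir, "README.md"), "readme")

  # Default glob: hidden entries are silently skipped, so a gem packaged
  # from this list would be missing .mvn/*.
  default = Dir.glob(File.join(dir, "**", "*"))

  # FNM_DOTMATCH makes the glob include hidden entries.
  with_hidden = Dir.glob(File.join(dir, "**", "*"), File::FNM_DOTMATCH)

  puts default.any? { |p| p.include?(".mvn") }              # false
  puts with_hidden.any? { |p| p.include?("extensions.xml") } # true
end
```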
Beats input certificate verification
We continue to make progress on adding certificate verification to the Beats input. We found a jruby-openssl bug while adding support for chained certs, but root CA verification works fine. We released a new version which includes certificate validation.
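As a sketch of what a Beats input configured for certificate verification might look like (the option names are our assumption from the plugin docs at the time and may differ in the released version; all paths are placeholders):

```
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
    ssl_key => "/etc/pki/tls/private/logstash.key"
    # Verify connecting clients against a CA:
    ssl_certificate_authorities => ["/etc/pki/tls/certs/ca.crt"]
    ssl_verify_mode => "force_peer"
  }
}
```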
Java Event Timestamp Fixes
This bug surfaced in the Gelf input when a timestamp conversion from a JSON string was losing precision. The conversion of the Java BigDecimal type to a proper Ruby BigDecimal was not handled, and has now been fixed. This precision-loss issue also showed up previously in the JRuby Event implementation (#4565).
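To illustrate why proper BigDecimal handling matters for timestamps: a double-precision float cannot represent a sub-microsecond epoch timestamp exactly, while BigDecimal preserves every digit. A small sketch (the timestamp value is illustrative, not taken from the bug report):

```ruby
require "bigdecimal"

# A GELF-style timestamp: seconds since the epoch with sub-second precision.
raw = "1455712737.123456789"

as_float  = Float(raw)      # a double: ~15-16 significant decimal digits
as_bigdec = BigDecimal(raw) # arbitrary-precision decimal

# Scale to nanoseconds to compare the two representations as integers.
exact_nanos = (as_bigdec * 1_000_000_000).to_i
float_nanos = (as_float * 1_000_000_000).round

puts exact_nanos                # 1455712737123456789
puts float_nanos == exact_nanos # false: the float dropped trailing digits
```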
Packs Installation Support
Preliminary work on installing packs in Logstash has begun. This will use the offline install feature, which creates an intermediate plugin state file; bin/plugin install can then be pointed at this file as a source instead of RubyGems. We are adding integration tests to validate this feature (#4585).
- Elasticsearch Output: Work continues on making this plugin fully thread-safe so resources can be shared across worker instances. We are working with Karel to make sure the manticore adapter implementation for elasticsearch-transport is thread-safe and resource-friendly when used with sniffing (#284).
- File Input: Exploring the use of a fingerprint to uniquely identify file entries in the sincedb file. Previously we were thinking of using the file path and inode as a key, but we are now converging on using two checksums: one at the start (offset 0) and one at 32K (for larger files). Details here.
- Redis Input Batching: We changed the default behavior of the Redis input to read batches of events instead of one at a time. As expected, this gives a good performance boost (#36).
- RabbitMQ performance: A recent change which added metadata from RabbitMQ consumers to the Logstash Event seems to have regressed performance. We are investigating this and making more fixes (#69).
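To make the File Input fingerprint idea above concrete, here is a sketch of a two-checksum fingerprint in plain Ruby. The block size, second offset, and checksum function are illustrative assumptions, not the plugin's actual values:

```ruby
require "zlib"

# Bytes checksummed per sample, and where the second sample starts.
# Both values are illustrative; the real plugin may choose differently.
BLOCK_SIZE    = 255
SECOND_OFFSET = 32 * 1024

# Build an identity for a file from its content rather than its path or
# inode, so a renamed or copied file can still be recognized. Larger
# files get a second sample at 32K to reduce collisions between files
# that share a common header.
def fingerprint(path)
  File.open(path, "rb") do |io|
    parts = [Zlib.crc32(io.read(BLOCK_SIZE).to_s)]
    if File.size(path) > SECOND_OFFSET
      io.seek(SECOND_OFFSET)
      parts << Zlib.crc32(io.read(BLOCK_SIZE).to_s)
    end
    parts.join("-")
  end
end
```

With this scheme, two files with identical content produce the same fingerprint regardless of their paths, while files that differ in their first block get distinct fingerprints.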