We are excited to announce the release of beta1 for Logstash 6.0.0!
Note: This is a pre-release intended for testing and evaluation purposes only, so please don't run this version in production!
As always, your feedback is super important to us, so please download this release and let us know what you think! Speaking of feedback, we hope you are familiar with our Pioneer Program for pre-releases! You could win plenty of swag, and even a ticket to ElasticON 2018, by providing feedback!
This release is packed with exciting features! So without further ado, here are the highlights:
Visualize your pipelines
The monitoring UI got a significant upgrade with this new X-Pack Basic feature. Users can now visualize their often complex pipeline configuration as a directed acyclic graph (DAG). This UI provides a simple way to understand the overall pipeline topology, data flow, branching logic, and granular plugin-level metrics. We overlay important metrics, such as events per second and time spent in milliseconds, on each plugin in this view. Plus, there are visual indicators (colored labels) on components where events spend extra time — these draw your attention to the problem areas, providing an easy way to diagnose and optimize bottlenecks.
Centrally manage configurations
This Configuration Management feature allows you to store and manage Logstash configurations remotely with Elasticsearch and Kibana. Logstash nodes can be configured to periodically poll for new configuration updates from Elasticsearch and automatically apply changes without restarting the process. As part of this, we've also built a CRUD UI in the management section of Kibana to manage the Logstash configurations. This feature is part of X-Pack, which can be used for free for 30 days!
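To give you a feel for the node side of this, here is a sketch of the relevant settings in logstash.yml. The setting names below reflect the beta documentation and may change before the final release, and the pipeline IDs and credentials are purely illustrative:

```yaml
# logstash.yml — opt this node into centrally managed configurations (X-Pack)
xpack.management.enabled: true

# The Elasticsearch instance that stores the pipeline configurations
xpack.management.elasticsearch.url: "http://localhost:9200"
xpack.management.elasticsearch.username: "logstash_admin_user"
xpack.management.elasticsearch.password: "changeme"

# Which centrally stored pipelines this node should fetch and run
# (IDs here are examples — use the IDs you created in Kibana)
xpack.management.pipeline.id: ["apache", "db"]

# How often to poll Elasticsearch for configuration changes
xpack.management.logstash.poll_interval: "5s"
```

With this in place, edits you save in the Kibana management UI are picked up on the next poll and applied as a pipeline reload, with no process restart.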
This is a transformational step for Logstash and makes managing a fleet of Logstash instances much easier. If you run a logging-as-a-service platform using Logstash, this adds a level of self-service to your deployment. We are already exploring features like rollback and auditing on our short-term roadmap!
Ingest to Logstash converter
Ever wanted to migrate from Ingest Node to Logstash? Here are some of the reasons why you'd want to do that:
- Ingest from more inputs. Logstash can natively ingest data from many other sources like TCP, UDP, syslog, and relational databases.
- Use multiple outputs. Ingest was designed to only support ES as an output. For example, you may want to archive the ingested data to S3 in addition to indexing it in ES.
- Take advantage of the richer transformation capabilities in Logstash. Ingest processors today are a subset of what Logstash filters provide.
- Use the persistent queue feature to handle spikes when ingesting data (from Beats and other sources).
We now have a CLI tool that takes an ingest pipeline in JSON and produces the corresponding Logstash configuration DSL. You can then run this configuration natively in Logstash, make changes to it, and so on.
$LS_HOME/bin/ingest-convert.sh --input file:///tmp/ingest/apache.json --output file:///tmp/ingest/apache.conf
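For illustration, the input to the converter is an ordinary ingest pipeline definition. A minimal example of what such a file might contain (this apache.json content is hypothetical, not the converter's bundled sample) is:

```json
{
  "description": "Parse Apache combined access logs",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{COMBINEDAPACHELOG}"]
      }
    }
  ]
}
```

For a pipeline like this, you'd expect the generated apache.conf to contain a Logstash grok filter matching the same pattern against the message field, which you can then extend with additional inputs, filters, or outputs.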