- Changed the default queue location on disk to include the pipeline's ID in the path hierarchy. By default, the queue is now created under `<path.data>/queue/main`. This breaking change was made to accommodate an upcoming feature where multiple, isolated pipelines can be run on the same Logstash instance.
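The new layout can be illustrated with a small sketch. The helper function and the `beats` pipeline ID below are hypothetical; only the `<path.data>/queue/<pipeline_id>` shape comes from the notes:

```python
from pathlib import Path

def queue_path(path_data: str, pipeline_id: str = "main") -> Path:
    # Queue files now live under <path.data>/queue/<pipeline_id>,
    # so multiple isolated pipelines keep separate queues on disk.
    return Path(path_data) / "queue" / pipeline_id

print(queue_path("/var/lib/logstash"))           # /var/lib/logstash/queue/main
print(queue_path("/var/lib/logstash", "beats"))  # /var/lib/logstash/queue/beats
```

Because the pipeline ID is part of the path, two pipelines can never point at the same queue directory by default.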
- Added a recovery process that runs during Logstash startup to recover data that has been written to the persistent queue, but not yet checkpointed. This is useful in situations where the input has written data to the queue, but Logstash crashed before writing to the checkpoint file.
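The checkpoint-and-replay idea behind this recovery can be sketched as follows. The file names, JSON-lines record format, and byte-offset checkpoint are illustrative assumptions, not Logstash's actual on-disk format:

```python
import json
import os

QUEUE_FILE = "queue.log"        # hypothetical on-disk queue segment
CHECKPOINT_FILE = "queue.ckpt"  # stores byte offset of last checkpointed record

def append_event(event: dict) -> None:
    # The input persists each event to the queue before it is processed.
    with open(QUEUE_FILE, "a") as f:
        f.write(json.dumps(event) + "\n")

def checkpoint() -> None:
    # Record how far into the queue file processing has safely advanced.
    with open(CHECKPOINT_FILE, "w") as f:
        f.write(str(os.path.getsize(QUEUE_FILE)))

def recover() -> list:
    # On startup, replay everything written after the last checkpoint,
    # i.e. events that were persisted but never checkpointed before a crash.
    offset = 0
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            offset = int(f.read())
    with open(QUEUE_FILE) as f:
        f.seek(offset)
        return [json.loads(line) for line in f]
```

If the process dies between `append_event` and `checkpoint`, the un-checkpointed tail of the file is exactly what `recover` returns on the next start.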
- Added exclusive access to the persistent queue on disk, as defined by the `path.queue` setting. Using a file lock guards against corruption by ensuring that only a single Logstash instance can write to the queue at a given path (Issue 6604).
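The exclusivity guarantee can be demonstrated with an OS-level file lock in the same spirit. This is a Python `fcntl.flock` sketch, not the plugin's actual code; the `.lock` file name and the error message are assumptions:

```python
import fcntl
import os

def acquire_queue_lock(queue_dir: str):
    """Take an exclusive, non-blocking lock on <queue_dir>/.lock.

    Returns the open lock file on success; raises RuntimeError if
    another process (another Logstash instance, in the analogy)
    already holds the lock for the same queue path.
    """
    os.makedirs(queue_dir, exist_ok=True)
    lock = open(os.path.join(queue_dir, ".lock"), "w")
    try:
        # LOCK_NB makes the attempt fail immediately instead of blocking.
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        lock.close()
        raise RuntimeError("queue at %s is locked by another process" % queue_dir)
    return lock
```

A second call against the same directory fails while the first lock is held, which is the behavior that prevents two writers from corrupting one queue.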
- You can now safely reload the pipeline config when using persistent queues. Previously, reloading the config could result in data corruption. In 5.3, the reload sequence has been changed to reliably shut down the first pipeline before a new one is started with the same settings.
- Fixed an issue where Logstash would stop accepting new events once queue capacity was reached, even though events had been successfully acknowledged (Issue 6626).
- Fixed a warning message when `--config.debug` is used with `--log.level=debug` (Issue 6256).
- We now include the S3 key information in the metadata (Issue 105).
- `path` fields are no longer overwritten if they are already provided by
- The `trim` and `trimkey` options are renamed to `trim_value` and `trim_key`, respectively (Issue 10).
- `trim_value` only removes the specified leading and trailing characters from the value. Similarly, `trim_key` only removes the specified leading and trailing characters from the key (Issue 10).
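The leading-and-trailing-only behavior of `trim_value` and `trim_key` can be illustrated with a Python analogy (this is not the filter's implementation; `str.strip` happens to have the same semantics):

```python
def trim(text: str, chars: str) -> str:
    # Like trim_value / trim_key: remove the listed characters only
    # from the beginning and end of the string, never from the middle.
    return text.strip(chars)

print(trim("<usr/local/bin>", "<>"))  # usr/local/bin
print(trim("a<b>c", "<>"))            # a<b>c  (inner characters untouched)
```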
- Added new options `remove_char_key` and `remove_char_value` to remove the specified characters from keys (or values) regardless of where these characters are found (Issue 10).
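The remove-anywhere behavior contrasts with trimming, and can be sketched the same way (again a Python analogy, not the filter's code):

```python
def remove_chars(text: str, chars: str) -> str:
    # Like remove_char_key / remove_char_value: delete the listed
    # characters wherever they occur, not just at the ends.
    return text.translate(str.maketrans("", "", chars))

print(remove_chars("a<b>c", "<>"))  # abc
```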
- Added an option to define custom patterns using