Logstash 6.4.0 Release Notes

Attention users of Kafka Output in Logstash 6.4.0

If you are using the Kafka output plugin and have upgraded to Logstash 6.4.0, you will see pipeline startup errors like the following:

Pipeline aborted due to error {:pipeline_id=>"pipeline1", :exception=>org.apache.kafka.common.config.ConfigException: Invalid value 32768 for configuration receive.buffer.bytes: Expected value to be a 32-bit integer, but it was a java.lang.Long

This error was caused by an incorrectly configured default value for the receive_buffer_bytes option (fixed in logstash-output-kafka #205), and by false negatives on our CI due to incorrect exit-code handling (fixed in logstash-output-kafka #204).

Kafka output plugin version 7.1.3 has been released. You can upgrade using:

bin/logstash-plugin update logstash-output-kafka

This version will also be included in the upcoming 6.4.1 patch release.

Plugins

Rubydebug Codec

  • Fixes a crash that could occur on startup if $HOME was unset or ${HOME}/.aprc was unreadable, by pinning the awesome_print dependency to a release that predates the bug. #5

Fingerprint Filter

  • Adds support for non-keyed, regular hash functions. #18
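With non-keyed hashes supported, a fingerprint can be computed without supplying a key. A minimal sketch (the source and target values here are illustrative, not from the release notes):

```
filter {
  fingerprint {
    source => "message"
    method => "SHA256"                    # regular (non-keyed) hash; no key option needed
    target => "[@metadata][fingerprint]"  # keep the hash out of the indexed document
  }
}
```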

KV Filter

  • Adds whitespace => strict mode, which makes the parser behave more predictably when the input is known to be free of unnecessary whitespace. #67
  • Adds error handling: if an exception is raised while handling an event, the event is tagged with _kv_filter_error instead of the plugin crashing. #68
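The new strict mode can be enabled directly on the filter. A minimal sketch, assuming the input genuinely contains no stray whitespace (the source field is illustrative):

```
filter {
  kv {
    source => "message"
    whitespace => "strict"  # input must not contain unnecessary whitespace
  }
}
```

If parsing raises an exception, the event is now tagged with _kv_filter_error rather than crashing the plugin, so failures can be routed or inspected downstream.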

Azure Event Hubs Input

Beats Input

  • Adds an add_hostname flag to enable/disable the population of the host field from beats.hostname. #340
  • Fixes handling of batches where the sequence numbers do not start with 1. #342
  • Changes project to use gradle version 4.8.1. #334
  • Adds ssl_peer_metadata option. #327
  • Fixes ssl_verify_mode => peer. #326
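The new add_hostname flag can be set on the input. A minimal sketch, assuming the default Beats port (the port value is illustrative):

```
input {
  beats {
    port => 5044
    add_hostname => false  # do not populate the host field from beats.hostname
  }
}
```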

Exec Input

  • Fixes an issue where certain log entries incorrectly identified the plugin as jdbc input instead of exec input. #21

File Input

  • Adds new feature: mode setting. Introduces two modes: tail mode is the existing tailing behaviour, while read mode is new behaviour optimized for reading complete files from beginning to end. Please read the docs to fully appreciate the benefits of read mode.
  • Adds new feature: file completion actions. The file_completed_action and file_completed_log_path settings control what happens after a file has been completely read. Applicable: read mode only.
  • Adds new feature: in read mode, compressed files can be processed (GZIP only).
  • Adds new feature: files are sorted after being discovered. The file_sort_by and file_sort_direction settings control the sort order. Applicable: any mode.
  • Adds new feature: banded or striped file processing. The file_chunk_size and file_chunk_count settings control banded or striped processing. Applicable: any mode.
  • Adds new feature: sincedb_clean_after setting. Introduces expiry of sincedb records; the default is 14 days. If no activity has been detected on a file (inode) for sincedb_clean_after days, the record expires and is not written to disk. The persisted record now includes the "last activity seen" timestamp. Applicable: any mode.
  • Moves the Filewatch code into the plugin folder and reworks it to use Logstash facilities such as logging and the environment.
  • Adds much better support for file rotation schemes of copy/truncate and rename cascading. Applies to tail mode only.
  • Adds support for processing files over remote mounts, e.g. NFS. Previously it was possible to read memory that was allocated but not yet filled with data, resulting in ASCII NUL (0) bytes in the message field. Now files are read up to the size reported by the remote filesystem client. Applies to tail and read modes.
  • Fixes read mode for regular files: a sincedb write is requested in each read loop iteration rather than waiting for end-of-file to be reached. Note: for gz files, the sincedb entry can only be updated at the end of the file, as it is not possible to seek into a compressed file and begin reading from that position. #196
  • Adds support for String Durations in some settings e.g. stat_interval => "750 ms". #194
  • Fixes a require winhelper error on Windows. #184
  • Fixes an issue where, when no delimiter was found in a chunk, the chunk was reread and no forward progress was made in the file. #185
  • Fixes a JAR_VERSION read problem that prevented Logstash from starting. #180
  • Fixes a sincedb write error when using /dev/null that repeatedly caused plugin restarts. #182
  • Fixes a regression where files discovered after first discovery were not always read from the beginning. Applies to tail mode only. #198
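The new read mode features above combine naturally. A minimal sketch, assuming a directory of rotated gzip archives (the paths and values are illustrative, not defaults from the release notes):

```
input {
  file {
    path => "/var/log/archive/*.log.gz"
    mode => "read"                   # read each file completely, then act on it
    file_completed_action => "log"   # record finished files rather than deleting them
    file_completed_log_path => "/var/log/archive/completed.log"
    file_sort_by => "last_modified"  # process discovered files in a defined order
    file_sort_direction => "asc"
    sincedb_clean_after => 14        # days of inactivity before a sincedb record expires
  }
}
```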

Http Input

  • Replaces Puma web server with Netty. #73
  • Adds request_headers_target_field and remote_host_target_field configuration options, defaulting to headers and host respectively. #68
  • Sanitizes content-type header with getMimeType. #87
  • Moves most message handling code to Java. #85
  • Fixes an issue so that responses use the correct HTTP protocol version. #84
  • Adds support for crt/key certificates.
  • Deprecates jks support.
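The new target-field options let the request metadata be redirected away from the default host and headers fields. A minimal sketch (the port and target field names are illustrative):

```
input {
  http {
    port => 8080
    remote_host_target_field => "[source][host]"       # default: "host"
    request_headers_target_field => "[source][headers]" # default: "headers"
  }
}
```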

Jdbc Input

  • Fixes crash that occurs when receiving string input that cannot be coerced to UTF-8 (such as BLOB data). #291

S3 Input

  • Adds ability to optionally include S3 object properties inside @metadata. #155
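A minimal sketch of opting in to the object properties; note the option name include_object_properties is an assumption here, as the release notes do not name the setting (the bucket name is also illustrative):

```
input {
  s3 {
    bucket => "my-log-bucket"
    include_object_properties => true  # assumed option name; copies S3 object
                                       # properties into @metadata
  }
}
```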

Kafka Output

  • Fixes handling of two settings that weren’t wired up to the Kafka client. #198