Dead Letter Queues

Note

The dead letter queue feature is currently supported for the Elasticsearch output plugin only. Support for additional outputs will be available in future releases of the Logstash plugins. Before configuring Logstash to use this feature, refer to the output plugin documentation to verify that the plugin supports the dead letter queue feature.

By default, when Logstash encounters an event that it cannot process because the data contains a mapping error or some other issue, the pipeline either hangs or drops the unsuccessful event. To protect against data loss in this situation, you can configure Logstash to write unsuccessful events to a dead letter queue instead of dropping them.

Each event written to the dead letter queue includes the original event, along with metadata that describes the reason the event could not be processed, information about the plugin that wrote the event, and the timestamp for when the event entered the dead letter queue.

To process events in the dead letter queue, you create a Logstash pipeline configuration that uses the dead_letter_queue input plugin to read from the queue.

[Diagram: a pipeline reading from the dead letter queue]

See Processing Events in the Dead Letter Queue for more information.

Configuring Logstash to Use Dead Letter Queues

Dead letter queues are disabled by default. To enable dead letter queues, set the dead_letter_queue.enable option in the logstash.yml settings file:

dead_letter_queue.enable: true

Dead letter queues are stored as files in the local directory of the Logstash instance. By default, the dead letter queue files are stored in path.data/dead_letter_queue. For example, the dead letter queue for the main pipeline is stored in LOGSTASH_HOME/data/dead_letter_queue/main by default. The queue files are numbered sequentially: 1.log, 2.log, and so on.
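
For example, the on-disk layout for the main pipeline might look like the following (a hypothetical listing; the number of files depends on how much has been written to the queue):

data/dead_letter_queue/
  main/
    1.log
    2.log
    3.log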

You can set path.dead_letter_queue in the logstash.yml file to specify a different path for the files:

path.dead_letter_queue: "path/to/data/dead_letter_queue"

File Rotation

Dead letter queues have a built-in file rotation policy that manages the file size of the queue. When the file size reaches a preconfigured threshold, a new file is created automatically. The size limit of the dead letter queue is constrained only by the amount of space that you have available on disk.

Note

Dead letter queues retain all the events that are written to them. Currently, you cannot configure the size of the queue or the size of the files that are used to store the queue.

Processing Events in the Dead Letter Queue

When you are ready to process events in the dead letter queue, you create a pipeline that uses the dead_letter_queue input plugin to read from the dead letter queue. The pipeline configuration that you use depends on what you need to do. For example, if the dead letter queue contains events that resulted from a mapping error in Elasticsearch, you can create a pipeline that reads the "dead" events, removes the field that caused the mapping issue, and re-indexes the clean events into Elasticsearch.

The following example shows a simple pipeline that reads events from the dead letter queue and writes the events, including metadata, to standard output:

input {
  dead_letter_queue {
    path => "/path/to/data/dead_letter_queue" 
    commit_offsets => true 
  }
}

output {
  stdout {
    codec => rubydebug { metadata => true }
  }
}

path: The path to the top-level directory containing the dead letter queue. This directory contains a main folder for the main pipeline. To find the path to this directory, look at the logstash.yml settings file. By default, Logstash creates the dead_letter_queue directory under the location used for persistent storage (path.data), for example, LOGSTASH_HOME/data/dead_letter_queue. However, if path.dead_letter_queue is set, it uses that location instead.

commit_offsets: When true, saves the offset. When the pipeline restarts, it continues reading from the position where it left off rather than reprocessing all the items in the queue. Set commit_offsets to false when you are exploring events in the dead letter queue and want to iterate over the events multiple times.
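
For example, while exploring the queue you might use a throwaway pipeline like this one (a minimal sketch; the path is a placeholder):

input {
  dead_letter_queue {
    path => "/path/to/data/dead_letter_queue"
    # Do not save the offset, so every run replays the queue from the beginning.
    commit_offsets => false
  }
}

output {
  stdout {
    codec => rubydebug { metadata => true }
  }
}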

For another example, see Example: Processing Data That Has Mapping Errors.

When the pipeline has finished processing all the events in the dead letter queue, it will continue to run and process new events as they stream into the queue. This means that you do not need to stop your production system to handle events in the dead letter queue.

Reading From a Timestamp

When you read from the dead letter queue, you might not want to process all the events in the queue, especially if there are a lot of old events in the queue. You can start processing events at a specific point in the queue by using the start_timestamp option. This option configures the pipeline to start processing events based on the timestamp of when they entered the queue:

input {
  dead_letter_queue {
    path => "/path/to/data/dead_letter_queue"
    start_timestamp => "2017-06-06T23:40:37"
  }
}

For this example, the pipeline starts reading all events that were delivered to the dead letter queue on or after June 6, 2017, at 23:40:37.

Example: Processing Data That Has Mapping Errors

In this example, the user attempts to index a document that includes geo_ip data, but the data cannot be processed because it contains a mapping error:

{"geoip":{"location":"home"}}

Indexing fails because the Elasticsearch index mapping expects a geo_point object in the location field, but the value is a string. The failed event is written to the dead letter queue, along with metadata about the error that caused the failure:

{
   "@metadata" => {
    "dead_letter_queue" => {
       "entry_time" => #<Java::OrgLogstash::Timestamp:0x5b5dacd5>,
        "plugin_id" => "fb80f1925088497215b8d037e622dec5819b503e-4",
      "plugin_type" => "elasticsearch",
           "reason" => "Could not index event to Elasticsearch. status: 400, action: [\"index\", {:_id=>nil, :_index=>\"logstash-2017.06.22\", :_type=>\"logs\", :_routing=>nil}, 2017-06-22T01:29:29.804Z Suyogs-MacBook-Pro-2.local {\"geoip\":{\"location\":\"home\"}}], response: {\"index\"=>{\"_index\"=>\"logstash-2017.06.22\", \"_type\"=>\"logs\", \"_id\"=>\"AVzNayPze1iR9yDdI2MD\", \"status\"=>400, \"error\"=>{\"type\"=>\"mapper_parsing_exception\", \"reason\"=>\"failed to parse\", \"caused_by\"=>{\"type\"=>\"illegal_argument_exception\", \"reason\"=>\"illegal latitude value [266.30859375] for geoip.location\"}}}}"
    }
  },
  "@timestamp" => 2017-06-22T01:29:29.804Z,
    "@version" => "1",
       "geoip" => {
    "location" => "home"
  },
        "host" => "Suyogs-MacBook-Pro-2.local",
     "message" => "{\"geoip\":{\"location\":\"home\"}}"
}
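
Because this metadata travels with the event, a reprocessing pipeline can branch on it. The following sketch (a hypothetical example, not part of this scenario's fix) tags events whose failure reason indicates a mapping problem:

input {
  dead_letter_queue {
    path => "/path/to/data/dead_letter_queue"
  }
}

filter {
  # [@metadata][dead_letter_queue][reason] holds the failure message shown above.
  if [@metadata][dead_letter_queue][reason] =~ /mapper_parsing_exception/ {
    mutate { add_tag => ["mapping_error"] }
  }
}

output {
  stdout {
    codec => rubydebug { metadata => true }
  }
}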

To process the failed event, you create the following pipeline that reads from the dead letter queue and removes the mapping problem:

input {
  dead_letter_queue {
    path => "/path/to/data/dead_letter_queue/" 
  }
}
filter {
  mutate {
    remove_field => "[geoip][location]" 
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ] 
  }
}

The dead_letter_queue input reads from the dead letter queue.

The mutate filter removes the problem field, [geoip][location].

The clean event is sent to Elasticsearch, where it can be indexed because the mapping issue is resolved.