Delete By Query API

The simplest usage of _delete_by_query just performs a deletion on every document that matches a query. Here is the API:

POST twitter/_delete_by_query
{
  "query": { 
    "match": {
      "message": "some message"
    }
  }
}

The query must be passed as a value to the query key, in the same way as the Search API. You can also use the q parameter in the same way as the Search API.
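
For example, a minimal sketch of the q form, borrowing the user:kimchy term from a later example on this page:

POST twitter/_delete_by_query?q=user:kimchy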

Either form will return something like this:

{
  "took" : 147,
  "timed_out": false,
  "deleted": 119,
  "batches": 1,
  "version_conflicts": 0,
  "noops": 0,
  "retries": {
    "bulk": 0,
    "search": 0
  },
  "throttled_millis": 0,
  "requests_per_second": -1.0,
  "throttled_until_millis": 0,
  "total": 119,
  "failures" : [ ]
}

_delete_by_query gets a snapshot of the index when it starts and deletes what it finds using internal versioning. That means that you’ll get a version conflict if the document changes between the time when the snapshot was taken and when the delete request is processed. When the versions match the document is deleted.

Since internal versioning does not support the value 0 as a valid version number, documents with version equal to zero cannot be deleted using _delete_by_query and will fail the request.

During the _delete_by_query execution, multiple search requests are executed sequentially in order to find all of the matching documents to delete. Every time a batch of documents is found, a corresponding bulk request is executed to delete all of those documents. If a search or bulk request is rejected, _delete_by_query relies on a default policy to retry rejected requests (up to 10 times, with exponential back off). Reaching the maximum retries limit causes the _delete_by_query to abort, and all failures are returned in the failures element of the response. The deletions that have been performed still stick. In other words, the process is not rolled back, only aborted. While the first failure causes the abort, all failures that are returned by the failing bulk request are returned in the failures element; therefore it’s possible for there to be quite a few failed entities.

If you’d like to count version conflicts rather than cause them to abort, then set conflicts=proceed in the URL or "conflicts": "proceed" in the request body.
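
For example, here is a sketch of the request-body form, reusing the query from the first example:

POST twitter/_delete_by_query
{
  "conflicts": "proceed",
  "query": {
    "match": {
      "message": "some message"
    }
  }
}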

Back to the API format, this will delete tweets from the twitter index:

POST twitter/_doc/_delete_by_query?conflicts=proceed
{
  "query": {
    "match_all": {}
  }
}

It’s also possible to delete documents of multiple indexes and multiple types at once, just like the search API:

POST twitter,blog/_doc,post/_delete_by_query
{
  "query": {
    "match_all": {}
  }
}

If you provide routing then the routing is copied to the scroll query, limiting the process to the shards that match that routing value:

POST twitter/_delete_by_query?routing=1
{
  "query": {
    "range" : {
        "age" : {
           "gte" : 10
        }
    }
  }
}

By default _delete_by_query uses scroll batches of 1000. You can change the batch size with the scroll_size URL parameter:

POST twitter/_delete_by_query?scroll_size=5000
{
  "query": {
    "term": {
      "user": "kimchy"
    }
  }
}

URL Parameters

In addition to the standard parameters like pretty, the Delete By Query API also supports refresh, wait_for_completion, wait_for_active_shards, timeout and scroll.

Sending the refresh parameter will refresh all shards involved in the delete by query once the request completes. This is different from the Delete API’s refresh parameter, which causes just the shard that received the delete request to be refreshed.

If the request contains wait_for_completion=false then Elasticsearch will perform some preflight checks, launch the request, and then return a task which can be used with Tasks APIs to cancel or get the status of the task. Elasticsearch will also create a record of this task as a document at .tasks/task/${taskId}. This is yours to keep or remove as you see fit. When you are done with it, delete it so Elasticsearch can reclaim the space it uses.
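
For example, a sketch of launching the deletion as a background task (the exact task id in the response will differ in practice):

POST twitter/_delete_by_query?wait_for_completion=false
{
  "query": {
    "match": {
      "message": "some message"
    }
  }
}

The response contains the task to poll or cancel:

{
  "task": "r1A2WoRbTwKZ516z6NEs5A:36619"
}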

wait_for_active_shards controls how many copies of a shard must be active before proceeding with the request. See here for details. timeout controls how long each write request waits for unavailable shards to become available. Both work exactly as they do in the Bulk API. As _delete_by_query uses scroll search, you can also specify the scroll parameter to control how long it keeps the "search context" alive, e.g. ?scroll=10m. By default it’s 5 minutes.

requests_per_second can be set to any positive decimal number (1.4, 6, 1000, etc) and throttles the rate at which _delete_by_query issues batches of delete operations by padding each batch with a wait time. Throttling can be disabled by setting requests_per_second to -1.
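
For example, a sketch that caps the deletion rate at 500 requests per second (the same value used in the worked example below):

POST twitter/_delete_by_query?requests_per_second=500
{
  "query": {
    "match_all": {}
  }
}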

The throttling is done by waiting between batches so that the scroll that _delete_by_query uses internally can be given a timeout that takes the padding into account. The padding time is the difference between the batch size divided by the requests_per_second and the time spent writing. By default the batch size is 1000, so if requests_per_second is set to 500:

target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds

Since the batch is issued as a single _bulk request, large batch sizes will cause Elasticsearch to create many requests and then wait for a while before starting the next set. This is "bursty" instead of "smooth". The default is -1.

Response body

The JSON response looks like this:

{
  "took" : 147,
  "timed_out": false,
  "total": 119,
  "deleted": 119,
  "batches": 1,
  "version_conflicts": 0,
  "noops": 0,
  "retries": {
    "bulk": 0,
    "search": 0
  },
  "throttled_millis": 0,
  "requests_per_second": -1.0,
  "throttled_until_millis": 0,
  "failures" : [ ]
}
took
The number of milliseconds from start to end of the whole operation.
timed_out
This flag is set to true if any of the requests executed during the delete by query execution has timed out.
total
The number of documents that were successfully processed.
deleted
The number of documents that were successfully deleted.
batches
The number of scroll responses pulled back by the delete by query.
version_conflicts
The number of version conflicts that the delete by query hit.
noops
This field is always equal to zero for delete by query. It only exists so that delete by query, update by query and reindex APIs return responses with the same structure.
retries
The number of retries attempted by delete by query. bulk is the number of bulk actions retried and search is the number of search actions retried.
throttled_millis
Number of milliseconds the request slept to conform to requests_per_second.
requests_per_second
The number of requests per second effectively executed during the delete by query.
throttled_until_millis
This field should always be equal to zero in a delete by query response. It only has meaning when using the Task API, where it indicates the next time (in milliseconds since epoch) a throttled request will be executed again in order to conform to requests_per_second.
failures
Array of failures if there were any unrecoverable errors during the process. If this is non-empty then the request aborted because of those failures. Delete by query is implemented using batches, and any failure causes the entire process to abort, but all failures in the current batch are collected into the array. You can use the conflicts option to prevent delete by query from aborting on version conflicts.

Works with the Task API

You can fetch the status of any running delete-by-query requests with the Task API:

GET _tasks?detailed=true&actions=*/delete/byquery

The response looks like:

{
  "nodes" : {
    "r1A2WoRbTwKZ516z6NEs5A" : {
      "name" : "r1A2WoR",
      "transport_address" : "127.0.0.1:9300",
      "host" : "127.0.0.1",
      "ip" : "127.0.0.1:9300",
      "attributes" : {
        "testattr" : "test",
        "portsfile" : "true"
      },
      "tasks" : {
        "r1A2WoRbTwKZ516z6NEs5A:36619" : {
          "node" : "r1A2WoRbTwKZ516z6NEs5A",
          "id" : 36619,
          "type" : "transport",
          "action" : "indices:data/write/delete/byquery",
          "status" : {    
            "total" : 6154,
            "updated" : 0,
            "created" : 0,
            "deleted" : 3500,
            "batches" : 36,
            "version_conflicts" : 0,
            "noops" : 0,
            "retries": 0,
            "throttled_millis": 0
          },
          "description" : ""
        }
      }
    }
  }
}

The status object inside the task contains the actual status. It is just like the response JSON with the important addition of the total field. total is the total number of operations that the delete by query expects to perform. You can estimate the progress by adding the updated, created, and deleted fields. The request will finish when their sum is equal to the total field.

With the task id you can look up the task directly:

GET /_tasks/taskId:1

The advantage of this API is that it integrates with wait_for_completion=false to transparently return the status of completed tasks. If the task is completed and wait_for_completion=false was set on it then it’ll come back with results or an error field. The cost of this feature is the document that wait_for_completion=false creates at .tasks/task/${taskId}. It is up to you to delete that document.
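
For example, a sketch of removing that task document once you have read its results, using the task id from the status example above:

DELETE .tasks/task/r1A2WoRbTwKZ516z6NEs5A:36619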

Works with the Cancel Task API

Any Delete By Query can be canceled using the Task Cancel API:

POST _tasks/task_id:1/_cancel

The task_id can be found using the tasks API above.

Cancellation should happen quickly but might take a few seconds. The task status API above will continue to list the task until it wakes to cancel itself.

Rethrottling

The value of requests_per_second can be changed on a running delete by query using the _rethrottle API:

POST _delete_by_query/task_id:1/_rethrottle?requests_per_second=-1

The task_id can be found using the tasks API above.

Just like when setting it on the _delete_by_query API, requests_per_second can be either -1 to disable throttling or any decimal number like 1.7 or 12 to throttle to that level. Rethrottling that speeds up the query takes effect immediately, but rethrottling that slows down the query will take effect after completing the current batch. This prevents scroll timeouts.
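
For example, a sketch that slows a running delete by query to 500 requests per second rather than disabling throttling:

POST _delete_by_query/task_id:1/_rethrottle?requests_per_second=500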

Slicing

Delete-by-query supports Sliced Scroll to parallelize the deleting process. This parallelization can improve efficiency and provide a convenient way to break the request down into smaller parts.

Manually slicing

Slice a delete-by-query manually by providing a slice id and total number of slices to each request:

POST twitter/_delete_by_query
{
  "slice": {
    "id": 0,
    "max": 2
  },
  "query": {
    "range": {
      "likes": {
        "lt": 10
      }
    }
  }
}
POST twitter/_delete_by_query
{
  "slice": {
    "id": 1,
    "max": 2
  },
  "query": {
    "range": {
      "likes": {
        "lt": 10
      }
    }
  }
}

Which you can verify works with:

GET _refresh
POST twitter/_search?size=0&filter_path=hits.total
{
  "query": {
    "range": {
      "likes": {
        "lt": 10
      }
    }
  }
}

Which results in a sensible total like this one:

{
  "hits": {
    "total": 0
  }
}

Automatic slicing

You can also let delete-by-query automatically parallelize using Sliced Scroll to slice on _uid. Use slices to specify the number of slices to use:

POST twitter/_delete_by_query?refresh&slices=5
{
  "query": {
    "range": {
      "likes": {
        "lt": 10
      }
    }
  }
}

Which you also can verify works with:

POST twitter/_search?size=0&filter_path=hits.total
{
  "query": {
    "range": {
      "likes": {
        "lt": 10
      }
    }
  }
}

Which results in a sensible total like this one:

{
  "hits": {
    "total": 0
  }
}

Setting slices to auto will let Elasticsearch choose the number of slices to use. This setting will use one slice per shard, up to a certain limit. If there are multiple source indices, it will choose the number of slices based on the index with the smallest number of shards.
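
For example, a sketch of the earlier request with automatic slice selection:

POST twitter/_delete_by_query?refresh&slices=auto
{
  "query": {
    "range": {
      "likes": {
        "lt": 10
      }
    }
  }
}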

Adding slices to _delete_by_query just automates the manual process used in the section above, creating sub-requests which means it has some quirks:

  • You can see these requests in the Tasks APIs. These sub-requests are "child" tasks of the task for the request with slices (see the sketch after this list).
  • Fetching the status of the task for the request with slices only contains the status of completed slices.
  • These sub-requests are individually addressable for things like cancellation and rethrottling.
  • Rethrottling the request with slices will rethrottle the unfinished sub-request proportionally.
  • Canceling the request with slices will cancel each sub-request.
  • Due to the nature of slices each sub-request won’t get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
  • Parameters like requests_per_second and size on a request with slices are distributed proportionally to each sub-request. Combine that with the point above about distribution being uneven and you should conclude that using size with slices might not result in exactly size documents being `_delete_by_query`ed.
  • Each sub-request gets a slightly different snapshot of the source index, though these are all taken at approximately the same time.
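
For example, a sketch of listing the child tasks with the tasks API’s standard parent_task_id parameter, assuming the parent task id from the status example above:

GET _tasks?parent_task_id=r1A2WoRbTwKZ516z6NEs5A:36619
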
Picking the number of slices

If slicing automatically, setting slices to auto will choose a reasonable number for most indices. If you’re slicing manually or otherwise tuning automatic slicing, use these guidelines.

Query performance is most efficient when the number of slices is equal to the number of shards in the index. If that number is large (for example, 500), choose a lower number, as too many slices will hurt performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.

Delete performance scales linearly across available resources with the number of slices.

Whether query or delete performance dominates the runtime depends on the documents being deleted and cluster resources.