Get field data cache information

GET /_cat/fielddata

Get the amount of heap memory currently used by the field data cache on every data node in the cluster.

IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes stats API.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • fields string | array[string]

    Comma-separated list of fields used to limit returned information.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
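
For example, a sketch that displays sizes in kilobytes, limits the output to the node, field, and size columns, and sorts by size descending (these columns appear in the response examples below):
curl \
 --request GET 'http://api.example.com/_cat/fielddata?v=true&bytes=kb&h=node,field,size&s=size:desc' \
 --header "Authorization: $API_KEY"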

Responses

  • 200 application/json
GET /_cat/fielddata
curl \
 --request GET 'http://api.example.com/_cat/fielddata' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/fielddata?v=true&fields=body&format=json`. You can specify an individual field in the request body or URL path. This example retrieves heap memory size information for the `body` field.
[
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "body",
    "size": "544b"
  }
]
A successful response from `GET /_cat/fielddata/body,soul?v=true&format=json`. You can specify a comma-separated list of fields in the request body or URL path. This example retrieves heap memory size information for the `body` and `soul` fields. To get information for all fields, run `GET /_cat/fielddata?v=true`.
[
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "1127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "body",
    "size": "544b"
  },
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "soul",
    "size": "480b"
  }
]
































































































































Get transform information Added in 7.7.0

GET /_cat/transforms

Get configuration and usage information about transforms.

CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get transform statistics API.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request: contains wildcard expressions and there are no transforms that match; contains the _all string or no identifiers and there are no matches; or contains wildcard expressions and there are only partial matches. If true, it returns an empty transforms array when there are no matches and the subset of results when there are partial matches. If false, the request returns a 404 status code when there are no matches or only partial matches.

  • from number

    Skips the specified number of transforms.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • changes_last_detection_time (or cldt): The timestamp when changes were last detected in the source indices.
    • checkpoint (or cp): The sequence number for the checkpoint.
    • checkpoint_duration_time_exp_avg (or cdtea, checkpointTimeExpAvg): Exponential moving average of the duration of the checkpoint, in milliseconds.
    • checkpoint_progress (or c, checkpointProgress): The progress of the next checkpoint that is currently in progress.
    • create_time (or ct, createTime): The time the transform was created.
    • delete_time (or dtime): The amount of time spent deleting, in milliseconds.
    • description (or d): The description of the transform.
    • dest_index (or di, destIndex): The destination index for the transform. The mappings of the destination index are deduced based on the source fields when possible. If alternate mappings are required, use the Create index API prior to starting the transform.
    • documents_deleted (or docd): The number of documents that have been deleted from the destination index due to the retention policy for this transform.
    • documents_indexed (or doci): The number of documents that have been indexed into the destination index for the transform.
    • docs_per_second (or dps): Specifies a limit on the number of input documents per second. This setting throttles the transform by adding a wait time between search requests. The default value is null, which disables throttling.
    • documents_processed (or docp): The number of documents that have been processed from the source index of the transform.
    • frequency (or f): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.
    • id: Identifier for the transform.
    • index_failure (or if): The number of indexing failures.
    • index_time (or itime): The amount of time spent indexing, in milliseconds.
    • index_total (or it): The number of index operations.
    • indexed_documents_exp_avg (or idea): Exponential moving average of the number of new documents that have been indexed.
    • last_search_time (or lst, lastSearchTime): The timestamp of the last search in the source indices. This field is only shown if the transform is running.
    • max_page_search_size (or mpsz): Defines the initial page size to use for the composite aggregation for each checkpoint. If circuit breaker exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is 10 and the maximum is 65,536. The default value is 500.
    • pages_processed (or pp): The number of search or bulk index operations processed. Documents are processed in batches instead of individually.
    • pipeline (or p): The unique identifier for an ingest pipeline.
    • processed_documents_exp_avg (or pdea): Exponential moving average of the number of documents that have been processed.
    • processing_time (or pt): The amount of time spent processing results, in milliseconds.
    • reason (or r): If a transform has a failed state, this property provides details about the reason for the failure.
    • search_failure (or sf): The number of search failures.
    • search_time (or stime): The amount of time spent searching, in milliseconds.
    • search_total (or st): The number of search operations on the source index for the transform.
    • source_index (or si, sourceIndex): The source indices for the transform. It can be a single index, an index pattern (for example, "my-index-*"), an array of indices (for example, ["my-index-000001", "my-index-000002"]), or an array of index patterns (for example, ["my-index-*", "my-other-index-*"]). For remote indices, use the syntax "remote_name:index_name". If any indices are in remote clusters, then the master node and at least one transform node must have the remote_cluster_client node role.
    • state (or s): The status of the transform, which can be one of the following values:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.
    • transform_type (or tt): Indicates the type of transform: batch or continuous.

    • trigger_count (or tc): The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • version (or v): The version of Elasticsearch that existed on the node when the transform was created.

    Values are changes_last_detection_time, cldt, checkpoint, cp, checkpoint_duration_time_exp_avg, cdtea, checkpointTimeExpAvg, checkpoint_progress, c, checkpointProgress, create_time, ct, createTime, delete_time, dtime, description, d, dest_index, di, destIndex, documents_deleted, docd, documents_indexed, doci, docs_per_second, dps, documents_processed, docp, frequency, f, id, index_failure, if, index_time, itime, index_total, it, indexed_documents_exp_avg, idea, last_search_time, lst, lastSearchTime, max_page_search_size, mpsz, pages_processed, pp, pipeline, p, processed_documents_exp_avg, pdea, processing_time, pt, reason, r, search_failure, sf, search_time, stime, search_total, st, source_index, si, sourceIndex, state, s, transform_type, tt, trigger_count, tc, version, or v.

  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • changes_last_detection_time (or cldt): The timestamp when changes were last detected in the source indices.
    • checkpoint (or cp): The sequence number for the checkpoint.
    • checkpoint_duration_time_exp_avg (or cdtea, checkpointTimeExpAvg): Exponential moving average of the duration of the checkpoint, in milliseconds.
    • checkpoint_progress (or c, checkpointProgress): The progress of the next checkpoint that is currently in progress.
    • create_time (or ct, createTime): The time the transform was created.
    • delete_time (or dtime): The amount of time spent deleting, in milliseconds.
    • description (or d): The description of the transform.
    • dest_index (or di, destIndex): The destination index for the transform. The mappings of the destination index are deduced based on the source fields when possible. If alternate mappings are required, use the Create index API prior to starting the transform.
    • documents_deleted (or docd): The number of documents that have been deleted from the destination index due to the retention policy for this transform.
    • documents_indexed (or doci): The number of documents that have been indexed into the destination index for the transform.
    • docs_per_second (or dps): Specifies a limit on the number of input documents per second. This setting throttles the transform by adding a wait time between search requests. The default value is null, which disables throttling.
    • documents_processed (or docp): The number of documents that have been processed from the source index of the transform.
    • frequency (or f): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.
    • id: Identifier for the transform.
    • index_failure (or if): The number of indexing failures.
    • index_time (or itime): The amount of time spent indexing, in milliseconds.
    • index_total (or it): The number of index operations.
    • indexed_documents_exp_avg (or idea): Exponential moving average of the number of new documents that have been indexed.
    • last_search_time (or lst, lastSearchTime): The timestamp of the last search in the source indices. This field is only shown if the transform is running.
    • max_page_search_size (or mpsz): Defines the initial page size to use for the composite aggregation for each checkpoint. If circuit breaker exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is 10 and the maximum is 65,536. The default value is 500.
    • pages_processed (or pp): The number of search or bulk index operations processed. Documents are processed in batches instead of individually.
    • pipeline (or p): The unique identifier for an ingest pipeline.
    • processed_documents_exp_avg (or pdea): Exponential moving average of the number of documents that have been processed.
    • processing_time (or pt): The amount of time spent processing results, in milliseconds.
    • reason (or r): If a transform has a failed state, this property provides details about the reason for the failure.
    • search_failure (or sf): The number of search failures.
    • search_time (or stime): The amount of time spent searching, in milliseconds.
    • search_total (or st): The number of search operations on the source index for the transform.
    • source_index (or si, sourceIndex): The source indices for the transform. It can be a single index, an index pattern (for example, "my-index-*"), an array of indices (for example, ["my-index-000001", "my-index-000002"]), or an array of index patterns (for example, ["my-index-*", "my-other-index-*"]). For remote indices, use the syntax "remote_name:index_name". If any indices are in remote clusters, then the master node and at least one transform node must have the remote_cluster_client node role.
    • state (or s): The status of the transform, which can be one of the following values:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.
    • transform_type (or tt): Indicates the type of transform: batch or continuous.

    • trigger_count (or tc): The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • version (or v): The version of Elasticsearch that existed on the node when the transform was created.

    Values are changes_last_detection_time, cldt, checkpoint, cp, checkpoint_duration_time_exp_avg, cdtea, checkpointTimeExpAvg, checkpoint_progress, c, checkpointProgress, create_time, ct, createTime, delete_time, dtime, description, d, dest_index, di, destIndex, documents_deleted, docd, documents_indexed, doci, docs_per_second, dps, documents_processed, docp, frequency, f, id, index_failure, if, index_time, itime, index_total, it, indexed_documents_exp_avg, idea, last_search_time, lst, lastSearchTime, max_page_search_size, mpsz, pages_processed, pp, pipeline, p, processed_documents_exp_avg, pdea, processing_time, pt, reason, r, search_failure, sf, search_time, stime, search_total, st, source_index, si, sourceIndex, state, s, transform_type, tt, trigger_count, tc, version, or v.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • size number

    The maximum number of transforms to obtain.
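
For example, a sketch that combines these parameters to return a trimmed, sorted table (column names are taken from the list above):
curl \
 --request GET 'http://api.example.com/_cat/transforms?v=true&format=json&h=id,state,documents_processed&s=id' \
 --header "Authorization: $API_KEY"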

Responses

  • 200 application/json
    • id string
    • state string

      The status of the transform. Returned values include: aborting: The transform is aborting. failed: The transform failed. For more information about the failure, check the reason field. indexing: The transform is actively processing data and creating new documents. started: The transform is running but not actively indexing data. stopped: The transform is stopped. stopping: The transform is stopping.

    • checkpoint string

      The sequence number for the checkpoint.

    • documents_processed string

      The number of documents that have been processed from the source index of the transform.

    • checkpoint_progress string | null

      The progress of the next checkpoint that is currently in progress.

    • last_search_time string | null

      The timestamp of the last search in the source indices. This field is shown only if the transform is running.

    • changes_last_detection_time string | null

      The timestamp when changes were last detected in the source indices.

    • create_time string

      The time the transform was created.

    • version string
    • source_index string

      The source indices for the transform.

    • dest_index string

      The destination index for the transform.

    • pipeline string

      The unique identifier for the ingest pipeline.

    • description string

      The description of the transform.

    • transform_type string

      The type of transform: batch or continuous.

    • frequency string

      The interval between checks for changes in the source indices when the transform is running continuously.

    • max_page_search_size string

      The initial page size that is used for the composite aggregation for each checkpoint.

    • docs_per_second string

      The number of input documents per second.

    • reason string

      If a transform has a failed state, these details describe the reason for failure.

    • search_total string

      The total number of search operations on the source index for the transform.

    • search_failure string

      The total number of search failures.

    • search_time string

      The total amount of search time, in milliseconds.

    • index_total string

      The total number of index operations done by the transform.

    • index_failure string

      The total number of indexing failures.

    • index_time string

      The total time spent indexing documents, in milliseconds.

    • documents_indexed string

      The number of documents that have been indexed into the destination index for the transform.

    • delete_time string

      The total time spent deleting documents, in milliseconds.

    • documents_deleted string

      The number of documents deleted from the destination index due to the retention policy for the transform.

    • trigger_count string

      The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • pages_processed string

      The number of search or bulk index operations processed. Documents are processed in batches instead of individually.

    • processing_time string

      The total time spent processing results, in milliseconds.

    • checkpoint_duration_time_exp_avg string

      The exponential moving average of the duration of the checkpoint, in milliseconds.

    • indexed_documents_exp_avg string

      The exponential moving average of the number of new documents that have been indexed.

    • processed_documents_exp_avg string

      The exponential moving average of the number of documents that have been processed.

GET /_cat/transforms
curl \
 --request GET 'http://api.example.com/_cat/transforms' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/transforms?v=true&format=json`.
[
  {
    "id" : "ecommerce_transform",
    "state" : "started",
    "checkpoint" : "1",
    "documents_processed" : "705",
    "checkpoint_progress" : "100.00",
    "changes_last_detection_time" : null
  }
]









































Get the pending cluster tasks

GET /_cluster/pending_tasks

Get information about cluster-level changes (such as create index, update mapping, allocate or fail shard) that have not yet taken effect.

NOTE: This API returns a list of any pending updates to the cluster state. These are distinct from the tasks reported by the task management API, which include periodic tasks and tasks initiated by the user, such as node stats, search queries, or create index requests. However, if a user-initiated task such as a create index command causes a cluster state update, the activity of this task might be reported by both the task API and the pending cluster tasks API.

Query parameters

  • local boolean

    If true, the request retrieves information from the local node only. If false, information is retrieved from the master node.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    • tasks array[object] Required
      • executing boolean Required

        Indicates whether the pending task is currently executing.

      • insert_order number Required

        The number that represents when the task was inserted into the task queue.

      • priority string Required

        The priority of the pending task. The valid priorities in descending priority order are: IMMEDIATE > URGENT > HIGH > NORMAL > LOW > LANGUID.

      • source string Required

        A general description of the cluster task that may include a reason and origin.

      • time_in_queue string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • time_in_queue_millis number

        Time unit for milliseconds

GET /_cluster/pending_tasks
curl \
 --request GET 'http://api.example.com/_cluster/pending_tasks' \
 --header "Authorization: $API_KEY"






































































































































































































Update the connector error field Technical preview

PUT /_connector/{connector_id}/_error

Set the error field for the connector. If the error provided in the request body is non-null, the connector’s status is updated to error. Otherwise, if the error is reset to null, the connector status is updated to connected.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • error string | null

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_error
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_error' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n    \"error\": \"Houston, we have a problem!\"\n}"'
Request example
{
    "error": "Houston, we have a problem!"
}
Response examples (200)
{
  "result": "updated"
}
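
To clear the error and flip the connector status back to connected, set the error to null (a sketch; substitute a real connector ID for {connector_id}):
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_error' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"error": null}'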

























































































































Update data stream lifecycles Added in 8.11.0

PUT /_data_stream/{name}/_lifecycle

Update the data stream lifecycle of the specified data streams.

Path parameters

  • name string | array[string] Required

    Comma-separated list of data streams used to limit the request. Supports wildcards (*). To target all data streams use * or _all.

Query parameters

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden. Valid values are: all, hidden, open, closed, none.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

application/json

Body

  • data_retention string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • downsampling object
    • rounds array[object] Required

      The list of downsampling rounds to execute as part of this downsampling configuration

      • after string Required

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • config object Required
        • fixed_interval string Required

          A date histogram interval. Similar to Duration with additional units: w (week), M (month), q (quarter) and y (year)

  • enabled boolean

    If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.
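
For example, a request body that disables the lifecycle while leaving a retention setting in place (a minimal sketch built from the attributes above):
{
  "enabled": false,
  "data_retention": "7d"
}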

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /_data_stream/{name}/_lifecycle
curl \
 --request PUT 'http://api.example.com/_data_stream/{name}/_lifecycle' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"data_retention\": \"7d\"\n}"'
Request examples
This example sets the data retention for the data stream to 7 days.
{
  "data_retention": "7d"
}
This example configures two downsampling rounds.
{
    "downsampling": [
      {
        "after": "1d",
        "fixed_interval": "10m"
      },
      {
        "after": "7d",
        "fixed_interval": "1d"
      }
    ]
}
Response examples (200)
A successful response for configuring a data stream lifecycle.
{
  "acknowledged": true
}









































































Delete documents Added in 5.0.0

POST /{index}/_delete_by_query

Deletes documents that match the specified query.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias:

  • read
  • delete or write

You can specify the query criteria in the request URI or the request body using the same syntax as the search API. When you submit a delete by query request, Elasticsearch gets a snapshot of the data stream or index when it begins processing the request and deletes matching documents using internal versioning. If a document changes between the time that the snapshot is taken and the delete operation is processed, it results in a version conflict and the delete operation fails.

NOTE: Documents with a version equal to 0 cannot be deleted using delete by query because internal versioning does not support 0 as a valid version number.

While processing a delete by query request, Elasticsearch performs multiple search requests sequentially to find all of the matching documents to delete. A bulk delete request is performed for each batch of matching documents. If a search or bulk request is rejected, the requests are retried up to 10 times, with exponential back off. If the maximum retry limit is reached, processing halts and all failed requests are returned in the response. Any delete requests that completed successfully still stick; they are not rolled back.

You can opt to count version conflicts instead of halting and returning by setting conflicts to proceed. Note that if you opt to count version conflicts, the operation could attempt to delete more documents from the source than max_docs until it has successfully deleted max_docs documents, or it has gone through every document in the source query.

Throttling delete requests

To control the rate at which delete by query issues batches of delete operations, you can set requests_per_second to any positive decimal number. This pads each batch with a wait time to throttle the rate. Set requests_per_second to -1 to disable throttling.

Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account. The padding time is the difference between the batch size divided by the requests_per_second and the time spent writing. By default the batch size is 1000, so if requests_per_second is set to 500:

target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds

Since the batch is issued as a single _bulk request, large batch sizes cause Elasticsearch to create many requests and wait before starting the next set. This is "bursty" instead of "smooth".
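
For example, a sketch that throttles a delete by query to 500 sub-requests per second, reusing the placeholder host and index from the examples in this section:
curl \
 --request POST 'http://api.example.com/my-index-000001/_delete_by_query?requests_per_second=500' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"query": {"match_all": {}}}'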

Slicing

Delete by query supports sliced scroll to parallelize the delete process. This can improve efficiency and provide a convenient way to break the request down into smaller parts.

Setting slices to auto lets Elasticsearch choose the number of slices to use. This setting will use one slice per shard, up to a certain limit. If there are multiple source data streams or indices, it will choose the number of slices based on the index or backing index with the smallest number of shards. Adding slices to the delete by query operation creates sub-requests which means it has some quirks:

  • You can see these requests in the tasks APIs. These sub-requests are "child" tasks of the task for the request with slices.
  • Fetching the status of the task for the request with slices only contains the status of completed slices.
  • These sub-requests are individually addressable for things like cancellation and rethrottling.
  • Rethrottling the request with slices will rethrottle the unfinished sub-request proportionally.
  • Canceling the request with slices will cancel each sub-request.
  • Due to the nature of slices each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
  • Parameters like requests_per_second and max_docs on a request with slices are distributed proportionally to each sub-request. Combine that with the earlier point about distribution being uneven and you should conclude that using max_docs with slices might not result in exactly max_docs documents being deleted.
  • Each sub-request gets a slightly different snapshot of the source data stream or index though these are all taken at approximately the same time.

If you're slicing manually or otherwise tuning automatic slicing, keep in mind that:

  • Query performance is most efficient when the number of slices is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number as too many slices hurts performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.
  • Delete performance scales linearly across available resources with the number of slices.

Whether query or delete performance dominates the runtime depends on the documents being deleted and cluster resources.

Cancel a delete by query operation

Any delete by query can be canceled using the task cancel API. For example:

POST _tasks/r1A2WoRbTwKZ516z6NEs5A:36619/_cancel

The task ID can be found by using the get tasks API.

Cancellation should happen quickly but might take a few seconds. The get task status API will continue to list the delete by query task until this task checks that it has been cancelled and terminates itself.
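
For example, you can list running delete by query tasks with the task management API (a sketch; the actions filter assumes the standard task action name):
GET _tasks?detailed=true&actions=*/delete/byquery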

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • analyzer string

    Analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.

  • analyze_wildcard boolean

    If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.

  • conflicts string

    What to do if delete by query hits version conflicts: abort or proceed.

    Supported values include:

    • abort: Stop reindexing if there are conflicts.
    • proceed: Continue reindexing even if there are conflicts.

    Values are abort or proceed.

  • default_operator string

    The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.

    Values are and, AND, or, or OR.

  • df string

    The field to use as default where no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • from number

    Skips the specified number of documents.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.

  • max_docs number

    The maximum number of documents to process. Defaults to all documents. When set to a value less than or equal to scroll_size, a scroll will not be used to retrieve the results for the operation.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • refresh boolean

    If true, Elasticsearch refreshes all shards involved in the delete by query after the request completes. This is different than the delete API's refresh parameter, which causes just the shard that received the delete request to be refreshed. Unlike the delete API, it does not support wait_for.

  • request_cache boolean

    If true, the request cache is used for this request. Defaults to the index-level setting.

  • requests_per_second number

    The throttle for this request in sub-requests per second.

  • routing string

    A custom value used to route operations to a specific shard.

  • q string

    A query in the Lucene query string syntax.

  • scroll string

    The period to retain the search context for scrolling.

    Values are -1 or 0.

  • scroll_size number

    The size of the scroll request that powers the operation.

  • search_timeout string

    The explicit timeout for each search request. It defaults to no timeout.

    Values are -1 or 0.

  • search_type string

    The type of the search operation. Available options include query_then_fetch and dfs_query_then_fetch.

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • slices number | string

    The number of slices this task should be divided into.

    Value is auto.

  • sort array[string]

    A comma-separated list of <field>:<direction> pairs.

  • stats array[string]

    The specific tag of the request for logging and statistical purposes.

  • terminate_after number

    The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.

    Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.

  • timeout string

    The period each deletion request waits for active shards.

    Values are -1 or 0.

  • version boolean

    If true, returns the document version as part of a hit.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The timeout value controls how long each write request waits for unavailable shards to become available.

    Values are all or index-setting.

  • wait_for_completion boolean

    If true, the request blocks until the operation is complete. If false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at .tasks/task/${taskId}. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.

application/json

Body Required

  • max_docs number

    The maximum number of documents to delete.

  • query object

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    External documentation
  • slice object
    • field string

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • id string Required
    • max number Required

Responses

  • 200 application/json
    • batches number

      The number of scroll responses pulled back by the delete by query.

    • deleted number

      The number of documents that were successfully deleted.

    • failures array[object]

      An array of failures if there were any unrecoverable errors during the process. If this array is not empty, the request ended abnormally because of those failures. Delete by query is implemented using batches and any failures cause the entire process to end but all failures in the current batch are collected into the array. You can use the conflicts option to prevent reindex from ending on version conflicts.

    • noops number

      This field is always equal to zero for delete by query. It exists only so that delete by query, update by query, and reindex APIs return responses with the same structure.

    • requests_per_second number

      The number of requests per second effectively run during the delete by query.

    • retries object
      • bulk number Required

        The number of bulk actions retried.

      • search number Required

        The number of search actions retried.

    • slice_id number
    • throttled string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • throttled_millis number

      Time unit for milliseconds

    • throttled_until string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • throttled_until_millis number

      Time unit for milliseconds

    • timed_out boolean

      If true, some requests run during the delete by query operation timed out.

    • took number

      Time unit for milliseconds

    • total number

      The number of documents that were successfully processed.

    • version_conflicts number

      The number of version conflicts that the delete by query hit.

POST /{index}/_delete_by_query
curl \
 --request POST 'http://api.example.com/{index}/_delete_by_query' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"query\": {\n    \"match_all\": {}\n  }\n}"'
Run `POST /my-index-000001,my-index-000002/_delete_by_query` to delete all documents from multiple data streams or indices.
{
  "query": {
    "match_all": {}
  }
}
Run `POST my-index-000001/_delete_by_query` to delete a document by using a unique attribute.
{
  "query": {
    "term": {
      "user.id": "kimchy"
    }
  },
  "max_docs": 1
}
Run `POST my-index-000001/_delete_by_query` to slice a delete by query manually. Provide a slice ID and total number of slices.
{
  "slice": {
    "id": 0,
    "max": 2
  },
  "query": {
    "range": {
      "http.response.bytes": {
        "lt": 2000000
      }
    }
  }
}
Run `POST my-index-000001/_delete_by_query?refresh&slices=5` to let delete by query automatically parallelize using sliced scroll to slice on `_id`. The `slices` query parameter value specifies the number of slices to use.
{
  "query": {
    "range": {
      "http.response.bytes": {
        "lt": 2000000
      }
    }
  }
}
Response examples (200)
A successful response from `POST /my-index-000001/_delete_by_query`.
{
  "took" : 147,
  "timed_out": false,
  "total": 119,
  "deleted": 119,
  "batches": 1,
  "version_conflicts": 0,
  "noops": 0,
  "retries": {
    "bulk": 0,
    "search": 0
  },
  "throttled_millis": 0,
  "requests_per_second": -1.0,
  "throttled_until_millis": 0,
  "failures" : [ ]
}
















































Reindex documents Added in 2.3.0

POST /_reindex

Copy documents from a source to a destination. You can copy all documents to the destination index or reindex a subset of the documents. The source can be any existing index, alias, or data stream. The destination must differ from the source. For example, you cannot reindex a data stream into itself.

IMPORTANT: Reindex requires _source to be enabled for all documents in the source. The destination should be configured as wanted before calling the reindex API. Reindex does not copy the settings from the source or its associated template. Mappings, shard counts, and replicas, for example, must be configured ahead of time.

If the Elasticsearch security features are enabled, you must have the following security privileges:

  • The read index privilege for the source data stream, index, or alias.
  • The write index privilege for the destination data stream, index, or index alias.
  • To automatically create a data stream or index with a reindex API request, you must have the auto_configure, create_index, or manage index privilege for the destination data stream, index, or alias.
  • If reindexing from a remote cluster, the source.remote.user must have the monitor cluster privilege and the read index privilege for the source data stream, index, or alias.

If reindexing from a remote cluster, you must explicitly allow the remote host in the reindex.remote.whitelist setting. Automatic data stream creation requires a matching index template with data stream enabled.

The dest element can be configured like the index API to control optimistic concurrency control. Omitting version_type or setting it to internal causes Elasticsearch to blindly dump documents into the destination, overwriting any that happen to have the same ID.

Setting version_type to external causes Elasticsearch to preserve the version from the source, create any documents that are missing, and update any documents that have an older version in the destination than they do in the source.

Setting op_type to create causes the reindex API to create only missing documents in the destination. All existing documents will cause a version conflict.

IMPORTANT: Because data streams are append-only, any reindex request to a destination data stream must have an op_type of create. A reindex can only add new documents to a destination data stream. It cannot update existing documents in a destination data stream.

By default, version conflicts abort the reindex process. To continue reindexing if there are conflicts, set the conflicts request body property to proceed. In this case, the response includes a count of the version conflicts that were encountered. Note that the handling of other error types is unaffected by the conflicts property. Additionally, if you opt to count version conflicts, the operation could attempt to reindex more documents from the source than max_docs until it has successfully indexed max_docs documents into the target or it has gone through every document in the source query.

NOTE: The reindex API makes no effort to handle ID collisions. The last document written will "win" but the order isn't usually predictable so it is not a good idea to rely on this behavior. Instead, make sure that IDs are unique by using a script.

Running reindex asynchronously

If the request contains wait_for_completion=false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at _tasks/<task_id>.

Reindex from multiple sources

If you have many sources to reindex it is generally better to reindex them one at a time rather than using a glob pattern to pick up multiple sources. That way you can resume the process if there are any errors by removing the partially completed source and starting over. It also makes parallelizing the process fairly simple: split the list of sources to reindex and run each list in parallel.

For example, you can use a bash script like this:

for index in i1 i2 i3 i4 i5; do
  curl -HContent-Type:application/json -XPOST localhost:9200/_reindex?pretty -d'{
    "source": {
      "index": "'$index'"
    },
    "dest": {
      "index": "'$index'-reindexed"
    }
  }'
done

Throttling

Set requests_per_second to any positive decimal number (1.4, 6, 1000, for example) to throttle the rate at which reindex issues batches of index operations. Requests are throttled by padding each batch with a wait time. To turn off throttling, set requests_per_second to -1.

The throttling is done by waiting between batches so that the scroll that reindex uses internally can be given a timeout that takes into account the padding. The padding time is the difference between the batch size divided by the requests_per_second and the time spent writing. By default the batch size is 1000, so if requests_per_second is set to 500:

target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds

Since the batch is issued as a single bulk request, large batch sizes cause Elasticsearch to create many requests and then wait for a while before starting the next set. This is "bursty" instead of "smooth".
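
For example, a sketch that throttles a reindex to 500 sub-requests per second (the index names are placeholders):
POST _reindex?requests_per_second=500
{
  "source": {
    "index": "my-index-000001"
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}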

Slicing

Reindex supports sliced scroll to parallelize the reindexing process. This parallelization can improve efficiency and provide a convenient way to break the request down into smaller parts.

NOTE: Reindexing from remote clusters does not support manual or automatic slicing.

You can slice a reindex request manually by providing a slice ID and total number of slices to each request. You can also let reindex automatically parallelize by using sliced scroll to slice on _id. The slices parameter specifies the number of slices to use.
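
For example, a manual slicing sketch that handles the first of two slices (index names are placeholders); a second request with "id": 1 would cover the rest:
POST _reindex
{
  "source": {
    "index": "my-index-000001",
    "slice": {
      "id": 0,
      "max": 2
    }
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}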

Adding slices to the reindex request just automates the manual process, creating sub-requests which means it has some quirks:

  • You can see these requests in the tasks API. These sub-requests are "child" tasks of the task for the request with slices.
  • Fetching the status of the task for the request with slices only contains the status of completed slices.
  • These sub-requests are individually addressable for things like cancellation and rethrottling.
  • Rethrottling the request with slices will rethrottle the unfinished sub-request proportionally.
  • Canceling the request with slices will cancel each sub-request.
  • Due to the nature of slices, each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
  • Parameters like requests_per_second and max_docs on a request with slices are distributed proportionally to each sub-request. Combine that with the previous point about distribution being uneven and you should conclude that using max_docs with slices might not result in exactly max_docs documents being reindexed.
  • Each sub-request gets a slightly different snapshot of the source, though these are all taken at approximately the same time.

If slicing automatically, setting slices to auto will choose a reasonable number for most indices. If slicing manually or otherwise tuning automatic slicing, use the following guidelines.

Query performance is most efficient when the number of slices is equal to the number of shards in the index. If that number is large (for example, 500), choose a lower number as too many slices will hurt performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.

Indexing performance scales linearly across available resources with the number of slices.

Whether query or indexing performance dominates the runtime depends on the documents being reindexed and cluster resources.

Modify documents during reindexing

Like _update_by_query, reindex operations support a script that modifies the document. Unlike _update_by_query, the script is allowed to modify the document's metadata.

Just as in _update_by_query, you can set ctx.op to change the operation that is run on the destination. For example, set ctx.op to noop if your script decides that the document doesn’t have to be indexed in the destination. This "no operation" will be reported in the noop counter in the response body. Set ctx.op to delete if your script decides that the document must be deleted from the destination. The deletion will be reported in the deleted counter in the response body. Setting ctx.op to anything else will return an error, as will setting any other field in ctx.
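
For example, a sketch that skips documents instead of copying them; the index names and the archived flag are hypothetical:
POST _reindex
{
  "source": {
    "index": "my-index-000001"
  },
  "dest": {
    "index": "my-new-index-000001"
  },
  "script": {
    "lang": "painless",
    "source": "if (ctx._source.archived == true) { ctx.op = 'noop' }"
  }
}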

Think of the possibilities! Just be careful; you are able to change:

  • _id
  • _index
  • _version
  • _routing

Setting _version to null or clearing it from the ctx map is just like not sending the version in an indexing request. It will cause the document to be overwritten in the destination regardless of the version on the target or the version type you use in the reindex API.

Reindex from remote

Reindex supports reindexing from a remote Elasticsearch cluster. The host parameter must contain a scheme, host, port, and optional path. The username and password parameters are optional and when they are present the reindex operation will connect to the remote Elasticsearch node using basic authentication. Be sure to use HTTPS when using basic authentication or the password will be sent in plain text. There are a range of settings available to configure the behavior of the HTTPS connection.

When using Elastic Cloud, it is also possible to authenticate against the remote cluster through the use of a valid API key. Remote hosts must be explicitly allowed with the reindex.remote.whitelist setting. It can be set to a comma delimited list of allowed remote host and port combinations. Scheme is ignored; only the host and port are used. For example:

reindex.remote.whitelist: [otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*]

The list of allowed hosts must be configured on any nodes that will coordinate the reindex. This feature should work with remote clusters of any version of Elasticsearch. This should enable you to upgrade from any version of Elasticsearch to the current version by reindexing from a cluster of the old version.

WARNING: Elasticsearch does not support forward compatibility across major versions. For example, you cannot reindex from a 7.x cluster into a 6.x cluster.

To enable queries sent to older versions of Elasticsearch, the query parameter is sent directly to the remote host without validation or modification.

NOTE: Reindexing from remote clusters does not support manual or automatic slicing.

Reindexing from a remote server uses an on-heap buffer that defaults to a maximum size of 100mb. If the remote index includes very large documents you'll need to use a smaller batch size. It is also possible to set the socket read timeout on the remote connection with the socket_timeout field and the connection timeout with the connect_timeout field. Both default to 30 seconds.
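
For example, a sketch of a remote reindex with basic authentication, a smaller batch size, and explicit timeouts (host, credentials, and index names are placeholders):
POST _reindex
{
  "source": {
    "remote": {
      "host": "https://otherhost:9200",
      "username": "user",
      "password": "pass",
      "socket_timeout": "1m",
      "connect_timeout": "10s"
    },
    "index": "my-index-000001",
    "size": 100
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}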

Configuring SSL parameters

Reindex from remote supports configurable SSL settings. These must be specified in the elasticsearch.yml file, with the exception of the secure settings, which you add in the Elasticsearch keystore. It is not possible to configure SSL in the body of the reindex request.

Query parameters

  • refresh boolean

    If true, the request refreshes affected shards to make this operation visible to search.

  • requests_per_second number

    The throttle for this request in sub-requests per second. By default, there is no throttle.

  • scroll string

    The period of time that a consistent view of the index should be maintained for scrolled search.

    Values are -1 or 0.

  • slices number | string

    The number of slices this task should be divided into. It defaults to one slice, which means the task isn't sliced into subtasks.

    Reindex supports sliced scroll to parallelize the reindexing process. This parallelization can improve efficiency and provide a convenient way to break the request down into smaller parts.

    NOTE: Reindexing from remote clusters does not support manual or automatic slicing.

    If set to auto, Elasticsearch chooses the number of slices to use. This setting will use one slice per shard, up to a certain limit. If there are multiple sources, it will choose the number of slices based on the index or backing index with the smallest number of shards.

    Value is auto.

  • timeout string

    The period each indexing operation waits for automatic index creation, dynamic mapping updates, and waiting for active shards. By default, Elasticsearch waits for at least one minute before failing. The actual wait time could be longer, particularly when multiple waits occur.

    Values are -1 or 0.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set it to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value is one, which means it waits for each primary shard to be active.

    Values are all or index-setting.

  • wait_for_completion boolean

    If true, the request blocks until the operation is complete.

  • require_alias boolean

    If true, the destination must be an index alias.

application/json

Body Required

  • conflicts string

    Values are abort or proceed.

  • dest object Required
  • max_docs number

    The maximum number of documents to reindex. By default, all documents are reindexed. If it is a value less than or equal to scroll_size, a scroll will not be used to retrieve the results for the operation.

    If conflicts is set to proceed, the reindex operation could attempt to reindex more documents from the source than max_docs until it has successfully indexed max_docs documents into the target or it has gone through every document in the source query.

  • script object
    • source string | object

    • id string
    • params object

      Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

      • * object Additional properties
    • lang string

      Values are painless, expression, mustache, or java.

    • options object
      • * string Additional properties
  • size number
  • source object Required
    • index string | array[string] Required
    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • remote object
      • connect_timeout string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • headers object

        An object containing the headers of the request.

        • * string Additional properties
      • host string Required
      • username string
      • password string
      • socket_timeout string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • size number

      The number of documents to index per batch. Use it when you are indexing from remote to ensure that the batches fit within the on-heap buffer, which defaults to a maximum size of 100 MB.

    • slice object
      Hide slice attributes Show slice attributes object
      • field string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • id string Required
      • max number Required
    • sort string | object | array[string | object]

      One of:

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • _source string | array[string]
    • runtime_mappings object
      Hide runtime_mappings attribute Show runtime_mappings attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
            Hide * attribute Show * attribute object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • fetch_fields array[object]

          For type lookup

          Hide fetch_fields attributes Show fetch_fields attributes object
          • field string Required

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • format string
        • format string

          A custom format for date type runtime fields.

        • input_field string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • target_field string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • script object
          Hide script attributes Show script attributes object
          • source string | object

            One of:
          • id string
          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            Hide params attribute Show params attribute object
            • * object Additional properties
          • lang string

            Any of:

            Values are painless, expression, mustache, or java.

          • options object
            Hide options attribute Show options attribute object
            • * string Additional properties
        • type string Required

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • batches number

      The number of scroll responses that were pulled back by the reindex.

    • created number

      The number of documents that were successfully created.

    • deleted number

      The number of documents that were successfully deleted.

    • failures array[object]

      If there were any unrecoverable errors during the process, this array contains those failures. If the array is not empty, the request ended because of those failures. Reindex is implemented using batches; any failure causes the entire process to end, but all failures in the current batch are collected into the array. You can use the conflicts option to prevent the reindex from ending on version conflicts.

      Hide failures attributes Show failures attributes object
    • noops number

      The number of documents that were ignored because the script used for the reindex returned a noop value for ctx.op.

    • retries object
      Hide retries attributes Show retries attributes object
      • bulk number Required

        The number of bulk actions retried.

    • requests_per_second number

      The number of requests per second effectively run during the reindex.

    • slice_id number
    • throttled_millis number

      Time unit for milliseconds

    • throttled_until_millis number

      Time unit for milliseconds

    • timed_out boolean

      If any of the requests that ran during the reindex timed out, it is true.

    • took number

      Time unit for milliseconds

    • total number

      The number of documents that were successfully processed.

    • updated number

      The number of documents that were successfully updated; that is, documents for which a document with the same ID already existed before the reindex updated it.

    • version_conflicts number

      The number of version conflicts that occurred.

POST /_reindex
curl \
 --request POST 'http://api.example.com/_reindex' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"source":{"index":["my-index-000001","my-index-000002"]},"dest":{"index":"my-new-index-000002"}}'
Run `POST _reindex` to reindex from multiple sources. The `index` attribute in source can be a list, which enables you to copy from lots of sources in one request. This example copies documents from the `my-index-000001` and `my-index-000002` indices.
{
  "source": {
    "index": ["my-index-000001", "my-index-000002"]
  },
  "dest": {
    "index": "my-new-index-000002"
  }
}
You can use Painless to reindex daily indices to apply a new template to the existing documents. The script extracts the date from the index name and creates a new index with `-1` appended. For example, all data from `metricbeat-2016.05.31` will be reindexed into `metricbeat-2016.05.31-1`.
{
  "source": {
    "index": "metricbeat-*"
  },
  "dest": {
    "index": "metricbeat"
  },
  "script": {
    "lang": "painless",
    "source": "ctx._index = 'metricbeat-' + (ctx._index.substring('metricbeat-'.length(), ctx._index.length())) + '-1'"
  }
}
Run `POST _reindex` to extract a random subset of the source for testing. You might need to adjust the `min_score` value depending on the relative amount of data extracted from source.
{
  "max_docs": 10,
  "source": {
    "index": "my-index-000001",
    "query": {
      "function_score" : {
        "random_score" : {},
        "min_score" : 0.9
      }
    }
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
Run `POST _reindex` to modify documents during reindexing. This example bumps the version of the source document.
{
  "source": {
    "index": "my-index-000001"
  },
  "dest": {
    "index": "my-new-index-000001",
    "version_type": "external"
  },
  "script": {
    "source": "if (ctx._source.foo == 'bar') {ctx._version++; ctx._source.remove('foo')}",
    "lang": "painless"
  }
}
Run `POST _reindex` to reindex from a remote cluster, authenticating with a username and password as shown here. When using Elastic Cloud, you can instead authenticate against the remote cluster with a valid API key.
{
  "source": {
    "remote": {
      "host": "http://otherhost:9200",
      "username": "user",
      "password": "pass"
    },
    "index": "my-index-000001",
    "query": {
      "match": {
        "test": "data"
      }
    }
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
Run `POST _reindex` to slice a reindex request manually. Provide a slice ID and total number of slices to each request.
{
  "source": {
    "index": "my-index-000001",
    "slice": {
      "id": 0,
      "max": 2
    }
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
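For completeness, a sketch of the companion request for the second slice; together the two requests cover the whole index (same illustrative index names as above).
{
  "source": {
    "index": "my-index-000001",
    "slice": {
      "id": 1,
      "max": 2
    }
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}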
Run `POST _reindex?slices=5&refresh` to automatically parallelize using sliced scroll to slice on `_id`. The `slices` parameter specifies the number of slices to use.
{
  "source": {
    "index": "my-index-000001"
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
By default, if reindex sees a document with routing, the routing is preserved unless it's changed by the script. You can set `routing` on the `dest` request to change this behavior. In this example, run `POST _reindex` to copy all documents from the `source` index with the company name `cat` into the `dest` index with routing set to `cat`.
{
  "source": {
    "index": "source",
    "query": {
      "match": {
        "company": "cat"
      }
    }
  },
  "dest": {
    "index": "dest",
    "routing": "=cat"
  }
}
Run `POST _reindex` and use the ingest pipelines feature to transform documents as they are copied to the destination.
{
  "source": {
    "index": "source"
  },
  "dest": {
    "index": "dest",
    "pipeline": "some_ingest_pipeline"
  }
}
Run `POST _reindex` and add a query to the `source` to limit the documents to reindex. For example, this request copies documents into `my-new-index-000001` only if they have a `user.id` of `kimchy`.
{
  "source": {
    "index": "my-index-000001",
    "query": {
      "term": {
        "user.id": "kimchy"
      }
    }
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
You can limit the number of processed documents by setting `max_docs`. For example, run `POST _reindex` to copy a single document from `my-index-000001` to `my-new-index-000001`.
{
  "max_docs": 1,
  "source": {
    "index": "my-index-000001"
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
You can use source filtering to reindex a subset of the fields in the original documents. For example, run `POST _reindex` to reindex only the `user.id` and `_doc` fields of each document.
{
  "source": {
    "index": "my-index-000001",
    "_source": ["user.id", "_doc"]
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
A reindex operation can build a copy of an index with renamed fields. If your index has documents with `text` and `flag` fields, you can change the latter field name to `tag` during the reindex.
{
  "source": {
    "index": "my-index-000001"
  },
  "dest": {
    "index": "my-new-index-000001"
  },
  "script": {
    "source": "ctx._source.tag = ctx._source.remove(\"flag\")"
  }
}
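If you run the reindex asynchronously with `wait_for_completion=false`, the response contains a task ID that you can poll with the task management API. A minimal sketch, where `{task_id}` is a placeholder for the returned ID:
GET /_tasks/{task_id}
curl \
 --request GET 'http://api.example.com/_tasks/{task_id}' \
 --header "Authorization: $API_KEY"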
Stop async ES|QL query Added in 8.18.0

POST /_query/async/{id}/stop

This API interrupts the query execution and returns the results so far. If the Elasticsearch security features are enabled, only the user who first submitted the ES|QL query can stop it.

External documentation

Path parameters

  • id string Required

    The unique identifier of the query. A query ID is provided in the ES|QL async query API response for a query that does not complete in the designated time. A query ID is also provided when the request was submitted with the keep_on_completion parameter set to true.

Query parameters

  • drop_null_columns boolean

    Indicates whether columns that are entirely null will be removed from the columns and values portion of the results. If true, the response will include an extra section under the name all_columns which has the name of all the columns.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number

      Time unit for milliseconds

    • is_partial boolean
    • all_columns array[object]
      Hide all_columns attributes Show all_columns attributes object
    • columns array[object] Required
      Hide columns attributes Show columns attributes object
    • values array[array] Required

      A field value.

    • _clusters object
      Hide _clusters attributes Show _clusters attributes object
    • profile object

      Profiling information. Present if profile was true in the request. The contents of this field are currently unstable.

POST /_query/async/{id}/stop
curl \
 --request POST 'http://api.example.com/_query/async/{id}/stop' \
 --header "Authorization: $API_KEY"
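A sketch of the shape of a successful response, with illustrative column and row data; `is_partial` is `true` when the query was stopped before it completed.
{
  "took": 48,
  "is_partial": true,
  "columns": [
    {
      "name": "@timestamp",
      "type": "date"
    }
  ],
  "values": [
    [
      "2025-01-01T00:00:00.000Z"
    ]
  ]
}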
Get the dangling indices Added in 7.9.0

GET /_dangling

If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling. For example, this can happen if you delete more than cluster.indices.tombstones.size indices while an Elasticsearch node is offline.

Use this API to list dangling indices, which you can then import or delete.

Responses

GET /_dangling
curl \
 --request GET 'http://api.example.com/_dangling' \
 --header "Authorization: $API_KEY"
Response examples (200)
{
  "dangling_indices": [
   {
    "index_name": "my-index-000001",
    "index_uuid": "zmM4e0JtBkeUjiHD-MihPQ",
    "creation_date_millis": 1589414451372,
    "node_ids": [
      "pL47UN3dAb2d5RCWP6lQ3e"
    ]
   }
  ]
}
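Once listed, a dangling index can be imported or deleted by its UUID. A sketch using the `index_uuid` from the example above; both operations require you to explicitly accept potential data loss:
POST /_dangling/zmM4e0JtBkeUjiHD-MihPQ?accept_data_loss=true
curl \
 --request POST 'http://api.example.com/_dangling/zmM4e0JtBkeUjiHD-MihPQ?accept_data_loss=true' \
 --header "Authorization: $API_KEY"
To delete the dangling index instead, send a DELETE request to the same path with the same `accept_data_loss=true` parameter.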
Clear the cache

POST /_cache/clear

Clear the cache of one or more indices. For data streams, the API clears the caches of the stream's backing indices.

By default, the clear cache API clears all caches. To clear only specific caches, use the fielddata, query, or request parameters. To clear the cache only of specific fields, use the fields parameter.

Query parameters

  • index string | array[string]

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • fielddata boolean

    If true, clears the fields cache. Use the fields parameter to clear the cache of specific fields only.

  • fields string | array[string]

    Comma-separated list of field names used to limit the fielddata parameter.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • query boolean

    If true, clears the query cache.

  • request boolean

    If true, clears the request cache.

Responses

POST /_cache/clear
curl \
 --request POST 'http://api.example.com/_cache/clear' \
 --header "Authorization: $API_KEY"
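For example, a sketch that clears only the fielddata cache for two fields in a single index (the index and field names are illustrative):
POST /_cache/clear?index=my-index-000001&fielddata=true&fields=foo,bar
curl \
 --request POST 'http://api.example.com/_cache/clear?index=my-index-000001&fielddata=true&fields=foo,bar' \
 --header "Authorization: $API_KEY"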
Get index information

GET /{index}

Get information about one or more indices. For data streams, the API returns information about the stream’s backing indices.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and index aliases used to limit the request. Wildcard expressions (*) are supported.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    Type of index that wildcard expressions can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • flat_settings boolean

    If true, returns settings in flat format.

  • ignore_unavailable boolean

    If false, requests that target a missing index return an error.

  • include_defaults boolean

    If true, return all default settings in the response.

  • local boolean

    If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • features string | array[string]

    Return only information on the specified index features.

    Supported values include: aliases, mappings, settings

    Values are aliases, mappings, or settings.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object
      Hide * attributes Show * attributes object
      • aliases object
        Hide aliases attribute Show aliases attribute object
      • mappings object
        Hide mappings attributes Show mappings attributes object
      • settings object
        Hide settings attributes Show settings attributes object
        • index object
        • mode string
        • soft_deletes object
          Hide soft_deletes attributes Show soft_deletes attributes object
          • enabled boolean

            Indicates whether soft deletes are enabled on the index.

          • retention_lease object
            Hide retention_lease attribute Show retention_lease attribute object
            • period string Required

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • sort object
          Hide sort attributes Show sort attributes object
        • check_on_startup string

          Values are true, false, or checksum.

        • codec string
        • routing_partition_size number | string

          Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

          Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • auto_expand_replicas string | null

          One of:
        • merge object
          Hide merge attribute Show merge attribute object
          • scheduler object
            Hide scheduler attributes Show scheduler attributes object
            • max_thread_count number | string

              Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

              Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

            • max_merge_count number | string

              Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

              Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • refresh_interval string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • blocks object
          Hide blocks attributes Show blocks attributes object
          • read_only boolean | string

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

          • read_only_allow_delete boolean | string

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

          • read boolean | string

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

          • write boolean | string

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

          • metadata boolean | string

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • analyze object
          Hide analyze attribute Show analyze attribute object
          • max_token_count number | string

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • highlight object
          Hide highlight attribute Show highlight attribute object
        • routing object
          Hide routing attributes Show routing attributes object
        • gc_deletes string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • lifecycle object
          Hide lifecycle attributes Show lifecycle attributes object
          • name string
          • indexing_complete boolean | string

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

          • origination_date number

            If specified, this is the timestamp used to calculate the index age for its phase transitions. Use this setting if you create a new index that contains old data and want to use the original creation date to calculate the index age. Specified as a Unix epoch value in milliseconds.

          • parse_origination_date boolean

            Set to true to parse the origination date from the index name. This origination date is used to calculate the index age for its phase transitions. The index name must match the pattern .*-{date_format}-\d+, where the date_format is yyyy.MM.dd and the trailing digits are optional. An index that was rolled over would normally match the full format, for example logs-2016.10.31-000002. If the index name doesn’t match the pattern, index creation fails.

          • step object
            Hide step attribute Show step attribute object
            • wait_time_threshold string

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • rollover_alias string

            The index alias to update when the index rolls over. Specify when using a policy that contains a rollover action. When the index rolls over, the alias is updated to reflect that the index is no longer the write index. For more information about rolling indices, see Rollover.

          • prefer_ilm boolean | string

            Preference for the system that manages a data stream backing index (preferring ILM when both ILM and DLM are applicable for an index).

        • creation_date number | string

          Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

          Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • creation_date_string string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

        • uuid string
        • version object
          Hide version attributes Show version attributes object
        • translog object
          Hide translog attributes Show translog attributes object
        • query_string object
          Hide query_string attribute Show query_string attribute object
          • lenient boolean | string Required

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • analysis object
          Hide analysis attributes Show analysis attributes object
        • settings object
        • time_series object
          Hide time_series attributes Show time_series attributes object
          • end_time string | number

            A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

          • start_time string | number

            A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

        • queries object
          Hide queries attribute Show queries attribute object
          • cache object
            Hide cache attribute Show cache attribute object
        • similarity object

          Configure custom similarity settings to customize how search results are scored.

        • mapping object
          Hide mapping attributes Show mapping attributes object
          • coerce boolean
          • total_fields object
            Hide total_fields attributes Show total_fields attributes object
            • limit number | string

              The maximum number of fields in an index. Field and object mappings, as well as field aliases count towards this limit. The limit is in place to prevent mappings and searches from becoming too large. Higher values can lead to performance degradations and memory issues, especially in clusters with a high load or few resources.

            • ignore_dynamic_beyond_limit boolean | string

              This setting determines what happens when a dynamically mapped field would exceed the total fields limit. When set to false (the default), the index request of the document that tries to add a dynamic field to the mapping will fail with the message Limit of total fields [X] has been exceeded. When set to true, the index request will not fail. Instead, fields that would exceed the limit are not added to the mapping, similar to dynamic: false. The fields that were not added to the mapping will be added to the _ignored field.

          • depth object
            Hide depth attribute Show depth attribute object
            • limit number

              The maximum depth for a field, which is measured as the number of inner objects. For instance, if all fields are defined at the root object level, then the depth is 1. If there is one object mapping, then the depth is 2, etc.

          • nested_fields object
            Hide nested_fields attribute Show nested_fields attribute object
            • limit number

              The maximum number of distinct nested mappings in an index. The nested type should only be used in special cases, when arrays of objects need to be queried independently of each other. To safeguard against poorly designed mappings, this setting limits the number of unique nested types per index.

          • nested_objects object
            Hide nested_objects attribute Show nested_objects attribute object
            • limit number

              The maximum number of nested JSON objects that a single document can contain across all nested types. This limit helps to prevent out of memory errors when a document contains too many nested objects.

          • field_name_length object
            Hide field_name_length attribute Show field_name_length attribute object
            • limit number

              Setting for the maximum length of a field name. This setting isn’t really something that addresses mappings explosion but might still be useful if you want to limit the field length. It usually shouldn’t be necessary to set this setting. The default is okay unless a user starts to add a huge number of fields with really long names. Default is Long.MAX_VALUE (no limit).

          • dimension_fields object
            Hide dimension_fields attribute Show dimension_fields attribute object
            • limit number

              [preview] This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

          • source object
            Hide source attribute Show source attribute object
            • mode string Required

              Values are disabled, stored, or synthetic.

        • indexing.slowlog object
          Hide indexing.slowlog attributes Show indexing.slowlog attributes object
          • level string
          • source number
          • reformat boolean
          • threshold object
            Hide threshold attribute Show threshold attribute object
            • index object
              Hide index attributes Show index attributes object
              • warn string

                A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

              • info string

                A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

              • debug string

                A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

              • trace string

                A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • indexing_pressure object
          Hide indexing_pressure attribute Show indexing_pressure attribute object
          • memory object Required
            Hide memory attribute Show memory attribute object
            • limit number

              Number of outstanding bytes that may be consumed by indexing requests. When this limit is reached or exceeded, the node will reject new coordinating and primary operations. When replica operations consume 1.5x this limit, the node will reject new replica operations. Defaults to 10% of the heap.

        • store object
          Hide store attributes Show store attributes object
          • type string Required

            Any of:

            Values are fs, niofs, mmapfs, or hybridfs.

          • allow_mmap boolean

            You can restrict the use of the mmapfs and the related hybridfs store type via the setting node.store.allow_mmap. This is a boolean setting indicating whether or not memory-mapping is allowed. The default is to allow it. This setting is useful, for example, if you are in an environment where you cannot control the creation of many memory maps and so need to disable the ability to use memory-mapping.

      • defaults object
        Hide defaults attributes Show defaults attributes object
        • index object
        • mode string
        • soft_deletes object
          Hide soft_deletes attributes Show soft_deletes attributes object
          • enabled boolean

            Indicates whether soft deletes are enabled on the index.

          • retention_lease object
            Hide retention_lease attribute Show retention_lease attribute object
            • period string Required

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • sort object
          Hide sort attributes Show sort attributes object
        • check_on_startup string

          Values are true, false, or checksum.

        • codec string
        • routing_partition_size number | string

          Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

          Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • auto_expand_replicas string | null

          One of:
        • merge object
          Hide merge attribute Show merge attribute object
          • scheduler object
            Hide scheduler attributes Show scheduler attributes object
            • max_thread_count number | string

              Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

              Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

            • max_merge_count number | string

              Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

              Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • refresh_interval string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • blocks object
          Hide blocks attributes Show blocks attributes object
          • read_only boolean | string

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

          • read_only_allow_delete boolean | string

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

          • read boolean | string

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

          • write boolean | string

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

          • metadata boolean | string

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • analyze object
          Hide analyze attribute Show analyze attribute object
          • max_token_count number | string

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • highlight object
          Hide highlight attribute Show highlight attribute object
        • routing object
          Hide routing attributes Show routing attributes object
        • gc_deletes string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • lifecycle object
          Hide lifecycle attributes Show lifecycle attributes object
          • name string
          • indexing_complete boolean | string

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

          • origination_date number

            If specified, this is the timestamp used to calculate the index age for its phase transitions. Use this setting if you create a new index that contains old data and want to use the original creation date to calculate the index age. Specified as a Unix epoch value in milliseconds.

          • parse_origination_date boolean

            Set to true to parse the origination date from the index name. This origination date is used to calculate the index age for its phase transitions. The index name must match the pattern .*-{date_format}-\d+, where the date_format is yyyy.MM.dd and the trailing digits are optional. An index that was rolled over would normally match the full format, for example logs-2016.10.31-000002. If the index name doesn’t match the pattern, index creation fails.

          • step object
            Hide step attribute Show step attribute object
            • wait_time_threshold string

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • rollover_alias string

            The index alias to update when the index rolls over. Specify when using a policy that contains a rollover action. When the index rolls over, the alias is updated to reflect that the index is no longer the write index. For more information about rolling indices, see Rollover.

          • prefer_ilm boolean | string

            Preference for the system that manages a data stream backing index (preferring ILM when both ILM and DLM are applicable for an index).

        • creation_date number | string

          Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

          Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • creation_date_string string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

        • uuid string
        • version object
          Hide version attributes Show version attributes object
        • translog object
          Hide translog attributes Show translog attributes object
        • query_string object
          Hide query_string attribute Show query_string attribute object
          • lenient boolean | string Required

            Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

            Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • analysis object
          Hide analysis attributes Show analysis attributes object
        • settings object
        • time_series object
          Hide time_series attributes Show time_series attributes object
          • end_time string | number

            A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

          • start_time string | number

            A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

        • queries object
          Hide queries attribute Show queries attribute object
          • cache object
            Hide cache attribute Show cache attribute object
        • similarity object

          Configure custom similarity settings to customize how search results are scored.

        • mapping object
          Hide mapping attributes Show mapping attributes object
          • coerce boolean
          • total_fields object
            Hide total_fields attributes Show total_fields attributes object
            • limit number | string

              The maximum number of fields in an index. Field and object mappings, as well as field aliases count towards this limit. The limit is in place to prevent mappings and searches from becoming too large. Higher values can lead to performance degradations and memory issues, especially in clusters with a high load or few resources.

            • ignore_dynamic_beyond_limit boolean | string

              This setting determines what happens when a dynamically mapped field would exceed the total fields limit. When set to false (the default), the index request of the document that tries to add a dynamic field to the mapping will fail with the message Limit of total fields [X] has been exceeded. When set to true, the index request will not fail. Instead, fields that would exceed the limit are not added to the mapping, similar to dynamic: false. The fields that were not added to the mapping will be added to the _ignored field.

          • depth object
            Hide depth attribute Show depth attribute object
            • limit number

              The maximum depth for a field, which is measured as the number of inner objects. For instance, if all fields are defined at the root object level, then the depth is 1. If there is one object mapping, then the depth is 2, etc.

          • nested_fields object
            Hide nested_fields attribute Show nested_fields attribute object
            • limit number

              The maximum number of distinct nested mappings in an index. The nested type should only be used in special cases, when arrays of objects need to be queried independently of each other. To safeguard against poorly designed mappings, this setting limits the number of unique nested types per index.

          • nested_objects object
            Hide nested_objects attribute Show nested_objects attribute object
            • limit number

              The maximum number of nested JSON objects that a single document can contain across all nested types. This limit helps to prevent out of memory errors when a document contains too many nested objects.

          • field_name_length object
            Hide field_name_length attribute Show field_name_length attribute object
            • limit number

              Setting for the maximum length of a field name. This setting isn’t really something that addresses mappings explosion but might still be useful if you want to limit the field length. It usually shouldn’t be necessary to set this setting. The default is okay unless a user starts to add a huge number of fields with really long names. Default is Long.MAX_VALUE (no limit).

          • dimension_fields object
            Hide dimension_fields attribute Show dimension_fields attribute object
            • limit number

              [preview] This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

          • source object
            Hide source attribute Show source attribute object
            • mode string Required

              Values are disabled, stored, or synthetic.

        • indexing.slowlog object
          Hide indexing.slowlog attributes Show indexing.slowlog attributes object
          • level string
          • source number
          • reformat boolean
          • threshold object
            Hide threshold attribute Show threshold attribute object
            • index object
              Hide index attributes Show index attributes object
              • warn string

                A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

              • info string

                A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

              • debug string

                A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

              • trace string

                A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • indexing_pressure object
          Hide indexing_pressure attribute Show indexing_pressure attribute object
          • memory object Required
            Hide memory attribute Show memory attribute object
            • limit number

              Number of outstanding bytes that may be consumed by indexing requests. When this limit is reached or exceeded, the node will reject new coordinating and primary operations. When replica operations consume 1.5x this limit, the node will reject new replica operations. Defaults to 10% of the heap.

        • store object
          Hide store attributes Show store attributes object
          • type string Required

            Any of:

            Values are fs, niofs, mmapfs, or hybridfs.

          • allow_mmap boolean

            You can restrict the use of the mmapfs and the related hybridfs store type via the setting node.store.allow_mmap. This is a boolean setting indicating whether or not memory-mapping is allowed. The default is to allow it. This setting is useful, for example, if you are in an environment where you cannot control the creation of many memory maps and so need to disable the ability to use memory-mapping.

      • lifecycle object
        Hide lifecycle attributes Show lifecycle attributes object
        • data_retention string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • downsampling object
          Hide downsampling attribute Show downsampling attribute object
          • rounds array[object] Required

            The list of downsampling rounds to execute as part of this downsampling configuration

            Hide rounds attributes Show rounds attributes object
            • after string Required

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

            • config object Required
        • enabled boolean

          If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

GET /{index}
curl \
 --request GET 'http://api.example.com/{index}' \
 --header "Authorization: $API_KEY"
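For example, a sketch that returns only the settings and mappings of one index, with settings in flat format (the index name is illustrative):
GET /my-index-000001?features=settings,mappings&flat_settings=true
curl \
 --request GET 'http://api.example.com/my-index-000001?features=settings,mappings&flat_settings=true' \
 --header "Authorization: $API_KEY"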
Resolve indices Added in 7.9.0

GET /_resolve/index/{name}

Resolve the names and/or index patterns for indices, aliases, and data streams. Multiple patterns and remote clusters are supported.

Path parameters

  • name string | array[string] Required

    Comma-separated name(s) or index pattern(s) of the indices, aliases, and data streams to resolve. Resources on remote clusters can be specified using the <cluster>:<name> syntax.

Query parameters

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
GET /_resolve/index/{name}
curl \
 --request GET 'http://api.example.com/_resolve/index/{name}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_resolve/index/f*,remoteCluster1:bar*?expand_wildcards=all`.
{
  "indices": [
    {
      "name": "foo_closed",
      "attributes": [
        "closed"
      ]
    },
    {
      "name": "freeze-index",
      "aliases": [
        "f-alias"
      ],
      "attributes": [
        "open"
      ]
    },
    {
      "name": "remoteCluster1:bar-01",
      "attributes": [
        "open"
      ]
    }
  ],
  "aliases": [
    {
      "name": "f-alias",
      "indices": [
        "freeze-index",
        "my-index-000001"
      ]
    }
  ],
  "data_streams": [
    {
      "name": "foo",
      "backing_indices": [
        ".ds-foo-2099.03.07-000001"
      ],
      "timestamp_field": "@timestamp"
    }
  ]
}
Get index shard stores

GET /{index}/_shard_stores

Get store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream's backing indices.

The index shard stores API returns the following information:

  • The node on which each replica shard exists.
  • The allocation ID for each replica shard.
  • A unique ID for each replica shard.
  • Any errors encountered while opening the shard index or from an earlier failure.

By default, the API returns store information only for primary shards that are unassigned or have one or more unassigned replica shards.

Path parameters

  • index string | array[string] Required

    List of data streams, indices, and aliases used to limit the request.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If true, missing or closed indices are not included in the response.

  • status string | array[string]

    List of shard health statuses used to limit the request.

    Supported values include:

    • green: The primary shard and all replica shards are assigned.
    • yellow: One or more replica shards are unassigned.
    • red: The primary shard is unassigned.
    • all: Return all shards, regardless of health status.

    Values are green, yellow, red, or all.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • indices object Required
      Hide indices attribute Show indices attribute object
      • * object Additional properties
        Hide * attribute Show * attribute object
        • shards object Required
          Hide shards attribute Show shards attribute object
          • * object Additional properties
            Hide * attribute Show * attribute object
GET /{index}/_shard_stores
curl \
 --request GET 'http://api.example.com/{index}/_shard_stores' \
 --header "Authorization: $API_KEY"
Response examples (200)
An abbreviated response from `GET /_shard_stores?status=green`.
{
  "indices": {
    "my-index-000001": {
      "shards": {
        "0": {
          "stores": [
            {
              "sPa3OgxLSYGvQ4oPs-Tajw": {
                "name": "node_t0",
                "ephemeral_id": "9NlXRFGCT1m8tkvYCMK-8A",
                "transport_address": "local[1]",
                "external_id": "node_t0",
                "attributes": {},
                "roles": [],
                "version": "8.10.0",
                "min_index_version": 7000099,
                "max_index_version": 8100099
              },
              "allocation_id": "2iNySv_OQVePRX-yaRH_lQ",
              "allocation": "primary",
              "store_exception": {}
            }
          ]
        }
      }
    }
  }
}
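To broaden the report beyond the default, set the `status` query parameter. A sketch that requests store information for shards with unassigned replicas or unassigned primaries (the index name is illustrative):
GET /my-index-000001/_shard_stores?status=yellow,red
curl \
 --request GET 'http://api.example.com/my-index-000001/_shard_stores?status=yellow,red' \
 --header "Authorization: $API_KEY"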
Get an inference endpoint Added in 8.11.0

GET /_inference/{task_type}/{inference_id}

Path parameters

  • task_type string Required

    The task type

    Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

  • inference_id string Required

    The inference ID

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • endpoints array[object] Required
      Hide endpoints attributes Show endpoints attributes object
      • chunking_settings object
        Hide chunking_settings attributes Show chunking_settings attributes object
        • max_chunk_size number

          The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        • overlap number

          The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        • sentence_overlap number

          The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        • strategy string

          The chunking strategy: sentence or word.

      • service string Required

        The service type

      • service_settings object Required
      • inference_id string Required

        The inference ID

      • task_type string Required

        Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

GET /_inference/{task_type}/{inference_id}
curl \
 --request GET 'http://api.example.com/_inference/{task_type}/{inference_id}' \
 --header "Authorization: $API_KEY"
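For example, a sketch that retrieves a single text embedding endpoint, where `my-embedding-model` is an illustrative inference ID:
GET /_inference/text_embedding/my-embedding-model
curl \
 --request GET 'http://api.example.com/_inference/text_embedding/my-embedding-model' \
 --header "Authorization: $API_KEY"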