Get index information Generally available

GET /_cat/indices/{index}

All methods and paths for this operation:

GET /_cat/indices

GET /_cat/indices/{index}

Get high-level information about indices in a cluster, including backing indices for data streams.

Use this request to get the following information for each index in a cluster:

  • shard count
  • document count
  • deleted document count
  • primary store size
  • total store size of all shards, including shard replicas

These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs.
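
As an illustration, here is a minimal Python sketch (assuming an elasticsearch-py client against a local cluster; the index name is hypothetical, not from this page) that contrasts the Lucene-level docs.count with the count API figure. The two can differ when hidden nested documents are present.

Python (sketch):
# Minimal sketch, assuming an elasticsearch-py client and a local cluster.
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumption: local cluster
index_name = "my-index-000001"                   # hypothetical index

# docs.count is read from Lucene and includes hidden nested documents.
cat_row = client.cat.indices(index=index_name, format="json")[0]
lucene_docs = int(cat_row["docs.count"])

# The count API reports top-level Elasticsearch documents only.
es_docs = client.count(index=index_name)["count"]

print(f"Lucene docs (incl. nested): {lucene_docs}, ES docs: {es_docs}")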

CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.

Required authorization

  • Index privileges: monitor
  • Cluster privileges: monitor

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • health string

    The health status used to limit returned indices. By default, the response includes indices of any health status.

    Supported values include:

    • green (or GREEN): All shards are assigned.
    • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
    • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
    • unknown
    • unavailable

    Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

  • include_unloaded_segments boolean

    If true, the response includes information from segments that are not loaded into memory.

  • pri boolean

    If true, the response only includes information from primary shards.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • master_timeout string

    Period to wait for a connection to the master node.

    Accepts time units (for example, 30s). The special value -1 indicates that the request should never time out.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
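
To show how these parameters combine, here is a hedged sketch using the elasticsearch-py client from the examples below; the index pattern is hypothetical. It restricts the output to primary-shard stats for yellow indices, displays sizes in megabytes, selects columns with h, and sorts descending with s.

Python (sketch):
# Sketch only: primary-shard stats for yellow indices matching a
# hypothetical pattern, in megabytes, sorted by store size descending.
resp = client.cat.indices(
    index="logs-*",                           # hypothetical pattern
    bytes="mb",                               # display byte values in MB
    health="yellow",                          # only yellow indices
    pri=True,                                 # primary shards only
    h="index,pri,rep,docs.count,store.size",  # columns to return
    s="store.size:desc",                      # sort descending by store size
    format="json",
)
for row in resp:
    print(row["index"], row["store.size"])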

Responses

  • 200 application/json
    Response attributes (object):
    • health string

      current health status

    • status string

      open/close status

    • index string

      index name

    • uuid string

      index uuid

    • pri string

      number of primary shards

    • rep string

      number of replica shards

    • docs.count string | null

      available docs

    • docs.deleted string | null

      deleted docs

    • creation.date string

      index creation date (millisecond value)

    • creation.date.string string

      index creation date (as string)

    • store.size string | null

      store size of primaries & replicas

    • pri.store.size string | null

      store size of primaries

    • dataset.size string | null

      total size of dataset (including the cache for partially mounted indices)

    • completion.size string

      size of completion

    • pri.completion.size string

      size of completion

    • fielddata.memory_size string

      used fielddata cache

    • pri.fielddata.memory_size string

      used fielddata cache

    • fielddata.evictions string

      fielddata evictions

    • pri.fielddata.evictions string

      fielddata evictions

    • query_cache.memory_size string

      used query cache

    • pri.query_cache.memory_size string

      used query cache

    • query_cache.evictions string

      query cache evictions

    • pri.query_cache.evictions string

      query cache evictions

    • request_cache.memory_size string

      used request cache

    • pri.request_cache.memory_size string

      used request cache

    • request_cache.evictions string

      request cache evictions

    • pri.request_cache.evictions string

      request cache evictions

    • request_cache.hit_count string

      request cache hit count

    • pri.request_cache.hit_count string

      request cache hit count

    • request_cache.miss_count string

      request cache miss count

    • pri.request_cache.miss_count string

      request cache miss count

    • flush.total string

      number of flushes

    • pri.flush.total string

      number of flushes

    • flush.total_time string

      time spent in flush

    • pri.flush.total_time string

      time spent in flush

    • get.current string

      number of current get ops

    • pri.get.current string

      number of current get ops

    • get.time string

      time spent in get

    • pri.get.time string

      time spent in get

    • get.total string

      number of get ops

    • pri.get.total string

      number of get ops

    • get.exists_time string

      time spent in successful gets

    • pri.get.exists_time string

      time spent in successful gets

    • get.exists_total string

      number of successful gets

    • pri.get.exists_total string

      number of successful gets

    • get.missing_time string

      time spent in failed gets

    • pri.get.missing_time string

      time spent in failed gets

    • get.missing_total string

      number of failed gets

    • pri.get.missing_total string

      number of failed gets

    • indexing.delete_current string

      number of current deletions

    • pri.indexing.delete_current string

      number of current deletions

    • indexing.delete_time string

      time spent in deletions

    • pri.indexing.delete_time string

      time spent in deletions

    • indexing.delete_total string

      number of delete ops

    • pri.indexing.delete_total string

      number of delete ops

    • indexing.index_current string

      number of current indexing ops

    • pri.indexing.index_current string

      number of current indexing ops

    • indexing.index_time string

      time spent in indexing

    • pri.indexing.index_time string

      time spent in indexing

    • indexing.index_total string

      number of indexing ops

    • pri.indexing.index_total string

      number of indexing ops

    • indexing.index_failed string

      number of failed indexing ops

    • pri.indexing.index_failed string

      number of failed indexing ops

    • merges.current string

      number of current merges

    • pri.merges.current string

      number of current merges

    • merges.current_docs string

      number of current merging docs

    • pri.merges.current_docs string

      number of current merging docs

    • merges.current_size string

      size of current merges

    • pri.merges.current_size string

      size of current merges

    • merges.total string

      number of completed merge ops

    • pri.merges.total string

      number of completed merge ops

    • merges.total_docs string

      docs merged

    • pri.merges.total_docs string

      docs merged

    • merges.total_size string

      size merged

    • pri.merges.total_size string

      size merged

    • merges.total_time string

      time spent in merges

    • pri.merges.total_time string

      time spent in merges

    • refresh.total string

      total refreshes

    • pri.refresh.total string

      total refreshes

    • refresh.time string

      time spent in refreshes

    • pri.refresh.time string

      time spent in refreshes

    • refresh.external_total string

      total external refreshes

    • pri.refresh.external_total string

      total external refreshes

    • refresh.external_time string

      time spent in external refreshes

    • pri.refresh.external_time string

      time spent in external refreshes

    • refresh.listeners string

      number of pending refresh listeners

    • pri.refresh.listeners string

      number of pending refresh listeners

    • search.fetch_current string

      current fetch phase ops

    • pri.search.fetch_current string

      current fetch phase ops

    • search.fetch_time string

      time spent in fetch phase

    • pri.search.fetch_time string

      time spent in fetch phase

    • search.fetch_total string

      total fetch ops

    • pri.search.fetch_total string

      total fetch ops

    • search.open_contexts string

      open search contexts

    • pri.search.open_contexts string

      open search contexts

    • search.query_current string

      current query phase ops

    • pri.search.query_current string

      current query phase ops

    • search.query_time string

      time spent in query phase

    • pri.search.query_time string

      time spent in query phase

    • search.query_total string

      total query phase ops

    • pri.search.query_total string

      total query phase ops

    • search.scroll_current string

      open scroll contexts

    • pri.search.scroll_current string

      open scroll contexts

    • search.scroll_time string

      time scroll contexts held open

    • pri.search.scroll_time string

      time scroll contexts held open

    • search.scroll_total string

      completed scroll contexts

    • pri.search.scroll_total string

      completed scroll contexts

    • segments.count string

      number of segments

    • pri.segments.count string

      number of segments

    • segments.memory string

      memory used by segments

    • pri.segments.memory string

      memory used by segments

    • segments.index_writer_memory string

      memory used by index writer

    • pri.segments.index_writer_memory string

      memory used by index writer

    • segments.version_map_memory string

      memory used by version map

    • pri.segments.version_map_memory string

      memory used by version map

    • segments.fixed_bitset_memory string

      memory used by fixed bit sets for nested object field types and type filters for types referred to in _parent fields

    • pri.segments.fixed_bitset_memory string

      memory used by fixed bit sets for nested object field types and type filters for types referred to in _parent fields

    • warmer.current string

      current warmer ops

    • pri.warmer.current string

      current warmer ops

    • warmer.total string

      total warmer ops

    • pri.warmer.total string

      total warmer ops

    • warmer.total_time string

      time spent in warmers

    • pri.warmer.total_time string

      time spent in warmers

    • suggest.current string

      number of current suggest ops

    • pri.suggest.current string

      number of current suggest ops

    • suggest.time string

      time spent in suggest

    • pri.suggest.time string

      time spent in suggest

    • suggest.total string

      number of suggest ops

    • pri.suggest.total string

      number of suggest ops

    • memory.total string

      total used memory

    • pri.memory.total string

      total used memory

    • search.throttled string

      indicates if the index is search throttled

    • bulk.total_operations string

      number of bulk shard ops

    • pri.bulk.total_operations string

      number of bulk shard ops

    • bulk.total_time string

      time spent in shard bulk

    • pri.bulk.total_time string

      time spent in shard bulk

    • bulk.total_size_in_bytes string

      total size in bytes of shard bulk

    • pri.bulk.total_size_in_bytes string

      total size in bytes of shard bulk

    • bulk.avg_time string

      average time spent in shard bulk

    • pri.bulk.avg_time string

      average time spent in shard bulk

    • bulk.avg_size_in_bytes string

      average size in bytes of shard bulk

    • pri.bulk.avg_size_in_bytes string

      average size in bytes of shard bulk

Console:
GET /_cat/indices/my-index-*?v=true&s=index&format=json

Python:
resp = client.cat.indices(
    index="my-index-*",
    v=True,
    s="index",
    format="json",
)

JavaScript:
const response = await client.cat.indices({
  index: "my-index-*",
  v: "true",
  s: "index",
  format: "json",
});

Ruby:
response = client.cat.indices(
  index: "my-index-*",
  v: "true",
  s: "index",
  format: "json"
)

PHP:
$resp = $client->cat()->indices([
    "index" => "my-index-*",
    "v" => "true",
    "s" => "index",
    "format" => "json",
]);

curl:
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/indices/my-index-*?v=true&s=index&format=json"

Java:
client.cat().indices();
Response examples (200)
A successful response from `GET /_cat/indices/my-index-*?v=true&s=index&format=json`.
[
  {
    "health": "yellow",
    "status": "open",
    "index": "my-index-000001",
    "uuid": "u8FNjxh8Rfy_awN11oDKYQ",
    "pri": "1",
    "rep": "1",
    "docs.count": "1200",
    "docs.deleted": "0",
    "store.size": "88.1kb",
    "pri.store.size": "88.1kb",
    "dataset.size": "88.1kb"
  },
  {
    "health": "green",
    "status": "open",
    "index": "my-index-000002",
    "uuid": "nYFWZEO7TUiOjLQXBaYJpA ",
    "pri": "1",
    "rep": "0",
    "docs.count": "0",
    "docs.deleted": "0",
    "store.size": "260b",
    "pri.store.size": "260b",
    "dataset.size": "260b"
  }
]

Get datafeeds Generally available

GET /_cat/ml/datafeeds/{datafeed_id}

All methods and paths for this operation:

GET /_cat/ml/datafeeds

GET /_cat/ml/datafeeds/{datafeed_id}

Get configuration and usage information about datafeeds. This API returns a maximum of 10,000 datafeeds. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • datafeed_id string Required

    A numerical character string that uniquely identifies the datafeed.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no datafeeds that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.
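
For instance, here is a sketch of how allow_no_match changes error behavior (the wildcard pattern is hypothetical; `client` is an elasticsearch-py client as in the examples below):

Python (sketch):
from elasticsearch import NotFoundError

# allow_no_match=True (the default): an unmatched wildcard yields an
# empty result set instead of an error.
rows = client.cat.ml_datafeeds(
    datafeed_id="datafeed-does-not-exist-*",  # hypothetical pattern
    allow_no_match=True,
    format="json",
)
assert len(rows) == 0

# allow_no_match=False: the same request raises a 404 instead.
try:
    client.cat.ml_datafeeds(
        datafeed_id="datafeed-does-not-exist-*",
        allow_no_match=False,
    )
except NotFoundError:
    print("no matching datafeeds")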

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.
  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    Response attributes (object):
    • id string

      The datafeed identifier.

    • state string

      The status of the datafeed.

      Values are started, stopped, starting, or stopping.

    • assignment_explanation string

      For started datafeeds only, contains messages relating to the selection of a node.

    • buckets.count string

      The number of buckets processed.

    • search.count string

      The number of searches run by the datafeed.

    • search.time string

      The total time the datafeed spent searching, in milliseconds.

    • search.bucket_avg string

      The average search time per bucket, in milliseconds.

    • search.exp_avg_hour string

      The exponential average search time per hour, in milliseconds.

    • node.id string

      The unique identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.name string

      The name of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.ephemeral_id string

      The ephemeral identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.address string

      The network address of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

Console:
GET _cat/ml/datafeeds?v=true&format=json

Python:
resp = client.cat.ml_datafeeds(
    v=True,
    format="json",
)

JavaScript:
const response = await client.cat.mlDatafeeds({
  v: "true",
  format: "json",
});

Ruby:
response = client.cat.ml_datafeeds(
  v: "true",
  format: "json"
)

PHP:
$resp = $client->cat()->mlDatafeeds([
    "v" => "true",
    "format" => "json",
]);

curl:
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/datafeeds?v=true&format=json"

Java:
client.cat().mlDatafeeds();
Response examples (200)
A successful response from `GET _cat/ml/datafeeds?v=true&format=json`.
[
  {
    "id": "datafeed-high_sum_total_sales",
    "state": "stopped",
    "buckets.count": "743",
    "search.count": "7"
  },
  {
    "id": "datafeed-low_request_rate",
    "state": "stopped",
    "buckets.count": "1457",
    "search.count": "3"
  },
  {
    "id": "datafeed-response_code_rates",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  },
  {
    "id": "datafeed-url_scanning",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  }
]

Get anomaly detection jobs Generally available

GET /_cat/ml/anomaly_detectors/{job_id}

All methods and paths for this operation:

GET /_cat/ml/anomaly_detectors

GET /_cat/ml/anomaly_detectors/{job_id}

Get configuration and usage information for anomaly detection jobs. This API returns a maximum of 10,000 jobs. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no jobs that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job had irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job had irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.
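
Putting the h, s, and bytes parameters together, here is a hedged sketch (column names as listed above; `client` is an elasticsearch-py client as in the examples below):

Python (sketch):
# Sketch only: list anomaly detection jobs with selected columns,
# model size shown in megabytes, sorted by model size descending.
resp = client.cat.ml_jobs(
    h="id,state,model.bytes,data.processed_records",  # columns to return
    s="model.bytes:desc",  # sort by model memory, descending
    bytes="mb",            # display byte values in MB
    format="json",
)
for job in resp:
    print(job["id"], job["state"], job["model.bytes"])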

Responses

  • 200 application/json
    Response attributes (object):
    • id string

      The anomaly detection job identifier.

    • state string

      The status of the anomaly detection job.

      Supported values include:

      • closing: The job close action is in progress and has not yet completed. A closing job cannot accept further data.
      • closed: The job finished successfully with its model state persisted. The job must be opened before it can accept further data.
      • opened: The job is available to receive and process data.
      • failed: The job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job had irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened.
      • opening: The job open action is in progress and has not yet completed.

      Values are closing, closed, opened, failed, or opening.

    • opened_time string

      For open jobs only, the amount of time the job has been opened.

    • assignment_explanation string

      For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.

    • data.processed_records string

      The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed_record_count is the number of aggregation results processed, not the number of Elasticsearch documents.

    • data.processed_fields string

      The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.

    • data.input_bytes number | string

      The number of bytes of input data posted to the anomaly detection job.

    • data.input_records string

      The number of input documents posted to the anomaly detection job.

    • data.input_fields string

      The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.

    • data.invalid_dates string

      The number of input documents with either a missing date field or a date that could not be parsed.

    • data.missing_fields string

      The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing. If you are using datafeeds or posting data to the job in JSON format, a high missing_field_count is often not an indication of data issues. It is not necessarily a cause for concern.

    • data.out_of_order_timestamps string

      The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.

    • data.empty_buckets string

      The number of buckets which did not contain any data. If your data contains many empty buckets, consider increasing your bucket_span or using functions that are tolerant to gaps in data such as mean, non_null_sum or non_zero_count.

    • data.sparse_buckets string

      The number of buckets that contained few data points compared to the expected number of data points. If your data contains many sparse buckets, consider using a longer bucket_span.

    • data.buckets string

      The total number of buckets processed.

    • data.earliest_record string

      The timestamp of the earliest chronologically input document.

    • data.latest_record string

      The timestamp of the latest chronologically input document.

    • data.last string

      The timestamp at which data was last analyzed, according to server time.

    • data.last_empty_bucket string

      The timestamp of the last bucket that did not contain any data.

    • data.last_sparse_bucket string

      The timestamp of the last bucket that was considered sparse.

    • model.bytes number | string

      The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.

    • model.memory_status string

      The status of the mathematical models.

      Values are ok, soft_limit, or hard_limit.

    • model.bytes_exceeded number | string

      The number of bytes over the high limit for memory usage at the last allocation failure.

    • model.memory_limit string

      The upper limit for model memory usage, checked on increasing values.

    • model.by_fields string

      The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.over_fields string

      The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.partition_fields string

      The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.bucket_allocation_failures string

      The number of buckets for which new entities in incoming data were not processed due to insufficient model memory. This situation is also signified by a memory_status value of hard_limit.

    • model.categorization_status string

      The status of categorization for the job.

      Values are ok or warn.

    • model.categorized_doc_count string

      The number of documents that have had a field categorized.

    • model.total_category_count string

      The number of categories created by categorization.

    • model.frequent_category_count string

      The number of categories that match more than 1% of categorized documents.

    • model.rare_category_count string

      The number of categories that match just one categorized document.

    • model.dead_category_count string

      The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.

    • model.failed_category_count string

      The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model_memory_limit. This count does not track which specific categories failed to be created. Therefore you cannot use this value to determine the number of unique categories that were missed.

    • model.log_time string

      The timestamp when the model stats were gathered, according to server time.

    • model.timestamp string

      The timestamp of the last record when the model stats were gathered.

    • forecasts.total string

      The number of individual forecasts currently available for the job. A value of one or more indicates that forecasts exist.

    • forecasts.memory.min string

      The minimum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.max string

      The maximum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.avg string

      The average memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.total string

      The total memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.records.min string

      The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.max string

      The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.avg string

      The average number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.total string

      The total number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.time.min string

      The minimum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.max string

      The maximum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.avg string

      The average runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.total string

      The total runtime in milliseconds for forecasts related to the anomaly detection job.

    • node.id string

      The unique identifier of the assigned node.

    • node.name string

      The name of the assigned node.

    • node.ephemeral_id string

      The ephemeral identifier of the assigned node.

    • node.address string

      The network address of the assigned node.

    • buckets.count string

      The number of bucket results produced by the job.

    • buckets.time.total string

      The sum of all bucket processing times, in milliseconds.

    • buckets.time.min string

      The minimum of all bucket processing times, in milliseconds.

    • buckets.time.max string

      The maximum of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg string

      The exponential moving average of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg_hour string

      The exponential moving average of bucket processing times calculated in a one-hour time window, in milliseconds.

GET /_cat/ml/anomaly_detectors/{job_id}
GET _cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json
resp = client.cat.ml_jobs(
    h="id,s,dpr,mb",
    v=True,
    format="json",
)
const response = await client.cat.mlJobs({
  h: "id,s,dpr,mb",
  v: "true",
  format: "json",
});
response = client.cat.ml_jobs(
  h: "id,s,dpr,mb",
  v: "true",
  format: "json"
)
$resp = $client->cat()->mlJobs([
    "h" => "id,s,dpr,mb",
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json"
client.cat().mlJobs();
Response examples (200)
A successful response from `GET _cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json`.
[
  {
    "id": "high_sum_total_sales",
    "s": "closed",
    "dpr": "14022",
    "mb": "1.5mb"
  },
  {
    "id": "low_request_rate",
    "s": "closed",
    "dpr": "1216",
    "mb": "40.5kb"
  },
  {
    "id": "response_code_rates",
    "s": "closed",
    "dpr": "28146",
    "mb": "132.7kb"
  },
  {
    "id": "url_scanning",
    "s": "closed",
    "dpr": "28146",
    "mb": "501.6kb"
  }
]

Cluster

The cluster APIs enable you to retrieve information about your infrastructure at the cluster, node, or shard level. You can manage cluster settings and voting configuration exceptions, collect node statistics, and retrieve node information.

Connector

The connector and sync jobs APIs provide a convenient way to create and manage Elastic connectors and sync jobs in an internal index. Connectors are Elasticsearch integrations for syncing content from third-party data sources, which can be deployed on Elastic Cloud or hosted on your own infrastructure. This API provides an alternative to relying solely on the Kibana UI for connector and sync job management. The API comes with a set of validations and assertions to ensure that the state representation in the internal index remains valid. This API requires the manage_connector privilege or, for read-only endpoints, the monitor_connector privilege.

Check out the connector API tutorial

Create a connector Beta

POST /_connector

Connectors are Elasticsearch integrations that bring content from third-party data sources, which can be deployed on Elastic Cloud or hosted on your own infrastructure. Elastic managed connectors (Native connectors) are a managed service on Elastic Cloud. Self-managed connectors (Connector clients) are self-managed on your infrastructure.

application/json

Body

  • description string
  • index_name string
  • is_native boolean
  • language string
  • name string
  • service_type string

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

    • id string Required
POST /_connector
curl \
 --request POST 'http://api.example.com/_connector' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"description":"string","index_name":"string","is_native":true,"language":"string","name":"string","service_type":"string"}'
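
For parity with the Python snippets shown for other connector endpoints, here is a minimal Python sketch. It assumes an existing Elasticsearch client instance named `client` and that your client version exposes the generated `connector.post` helper; all field values are illustrative.

# Sketch: create a self-managed connector with illustrative values.
# `client` is assumed to be an already-configured Elasticsearch client instance.
resp = client.connector.post(
    index_name="my-connector-index",  # illustrative index name
    name="My connector",  # illustrative display name
    description="Syncs content from a third-party source",
    service_type="google_drive",  # illustrative service type
    is_native=False,  # self-managed (connector client)
    language="en",
)
print(resp["id"])  # the identifier of the newly created connector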

Cancel a connector sync job Beta

PUT /_connector/_sync_job/{connector_sync_job_id}/_cancel

Cancel a connector sync job, which sets the status to cancelling and updates cancellation_requested_at to the current time. The connector service is then responsible for setting the status of connector sync jobs to cancelled.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/_sync_job/{connector_sync_job_id}/_cancel
PUT _connector/_sync_job/my-connector-sync-job-id/_cancel
resp = client.connector.sync_job_cancel(
    connector_sync_job_id="my-connector-sync-job-id",
)
const response = await client.connector.syncJobCancel({
  connector_sync_job_id: "my-connector-sync-job-id",
});
response = client.connector.sync_job_cancel(
  connector_sync_job_id: "my-connector-sync-job-id"
)
$resp = $client->connector()->syncJobCancel([
    "connector_sync_job_id" => "my-connector-sync-job-id",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job-id/_cancel"
client.connector().syncJobCancel(s -> s
    .connectorSyncJobId("my-connector-sync-job-id")
);

Update the connector API key ID Beta

PUT /_connector/{connector_id}/_api_key_id

Update the api_key_id and api_key_secret_id fields of a connector. You can specify the ID of the API key used for authorization and the ID of the connector secret where the API key is stored. The connector secret ID is required only for Elastic managed (native) connectors. Self-managed connectors (connector clients) do not use this field.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • api_key_id string
  • api_key_secret_id string

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_api_key_id
PUT _connector/my-connector/_api_key_id
{
    "api_key_id": "my-api-key-id",
    "api_key_secret_id": "my-connector-secret-id"
}
resp = client.connector.update_api_key_id(
    connector_id="my-connector",
    api_key_id="my-api-key-id",
    api_key_secret_id="my-connector-secret-id",
)
const response = await client.connector.updateApiKeyId({
  connector_id: "my-connector",
  api_key_id: "my-api-key-id",
  api_key_secret_id: "my-connector-secret-id",
});
response = client.connector.update_api_key_id(
  connector_id: "my-connector",
  body: {
    "api_key_id": "my-api-key-id",
    "api_key_secret_id": "my-connector-secret-id"
  }
)
$resp = $client->connector()->updateApiKeyId([
    "connector_id" => "my-connector",
    "body" => [
        "api_key_id" => "my-api-key-id",
        "api_key_secret_id" => "my-connector-secret-id",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"api_key_id":"my-api-key-id","api_key_secret_id":"my-connector-secret-id"}' "$ELASTICSEARCH_URL/_connector/my-connector/_api_key_id"
client.connector().updateApiKeyId(u -> u
    .apiKeyId("my-api-key-id")
    .apiKeySecretId("my-connector-secret-id")
    .connectorId("my-connector")
);
Request example
{
    "api_key_id": "my-api-key-id",
    "api_key_secret_id": "my-connector-secret-id"
}
Response examples (200)
{
  "result": "updated"
}

Update the connector name and description Beta

PUT /_connector/{connector_id}/_name

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • name string
  • description string

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_name
PUT _connector/my-connector/_name
{
    "name": "Custom connector",
    "description": "This is my customized connector"
}
resp = client.connector.update_name(
    connector_id="my-connector",
    name="Custom connector",
    description="This is my customized connector",
)
const response = await client.connector.updateName({
  connector_id: "my-connector",
  name: "Custom connector",
  description: "This is my customized connector",
});
response = client.connector.update_name(
  connector_id: "my-connector",
  body: {
    "name": "Custom connector",
    "description": "This is my customized connector"
  }
)
$resp = $client->connector()->updateName([
    "connector_id" => "my-connector",
    "body" => [
        "name" => "Custom connector",
        "description" => "This is my customized connector",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"name":"Custom connector","description":"This is my customized connector"}' "$ELASTICSEARCH_URL/_connector/my-connector/_name"
client.connector().updateName(u -> u
    .connectorId("my-connector")
    .description("This is my customized connector")
    .name("Custom connector")
);
Request example
{
    "name": "Custom connector",
    "description": "This is my customized connector"
}
Response examples (200)
{
  "result": "updated"
}




Update the connector pipeline Beta

PUT /_connector/{connector_id}/_pipeline

When you create a new connector, the configuration of an ingest pipeline is populated with default settings.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • pipeline object Required
    Hide pipeline attributes Show pipeline attributes object
    • extract_binary_content boolean Required
    • name string Required
    • reduce_whitespace boolean Required
    • run_ml_inference boolean Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_pipeline
PUT _connector/my-connector/_pipeline
{
    "pipeline": {
        "extract_binary_content": true,
        "name": "my-connector-pipeline",
        "reduce_whitespace": true,
        "run_ml_inference": true
    }
}
resp = client.connector.update_pipeline(
    connector_id="my-connector",
    pipeline={
        "extract_binary_content": True,
        "name": "my-connector-pipeline",
        "reduce_whitespace": True,
        "run_ml_inference": True
    },
)
const response = await client.connector.updatePipeline({
  connector_id: "my-connector",
  pipeline: {
    extract_binary_content: true,
    name: "my-connector-pipeline",
    reduce_whitespace: true,
    run_ml_inference: true,
  },
});
response = client.connector.update_pipeline(
  connector_id: "my-connector",
  body: {
    "pipeline": {
      "extract_binary_content": true,
      "name": "my-connector-pipeline",
      "reduce_whitespace": true,
      "run_ml_inference": true
    }
  }
)
$resp = $client->connector()->updatePipeline([
    "connector_id" => "my-connector",
    "body" => [
        "pipeline" => [
            "extract_binary_content" => true,
            "name" => "my-connector-pipeline",
            "reduce_whitespace" => true,
            "run_ml_inference" => true,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"pipeline":{"extract_binary_content":true,"name":"my-connector-pipeline","reduce_whitespace":true,"run_ml_inference":true}}' "$ELASTICSEARCH_URL/_connector/my-connector/_pipeline"
client.connector().updatePipeline(u -> u
    .connectorId("my-connector")
    .pipeline(p -> p
        .extractBinaryContent(true)
        .name("my-connector-pipeline")
        .reduceWhitespace(true)
        .runMlInference(true)
    )
);
Request example
{
    "pipeline": {
        "extract_binary_content": true,
        "name": "my-connector-pipeline",
        "reduce_whitespace": true,
        "run_ml_inference": true
    }
}
Response examples (200)
{
  "result": "updated"
}

Update the connector status Technical preview

PUT /_connector/{connector_id}/_status

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • status string Required

    Values are created, needs_configuration, configured, connected, or error.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_status
PUT _connector/my-connector/_status
{
    "status": "needs_configuration"
}
resp = client.connector.update_status(
    connector_id="my-connector",
    status="needs_configuration",
)
const response = await client.connector.updateStatus({
  connector_id: "my-connector",
  status: "needs_configuration",
});
response = client.connector.update_status(
  connector_id: "my-connector",
  body: {
    "status": "needs_configuration"
  }
)
$resp = $client->connector()->updateStatus([
    "connector_id" => "my-connector",
    "body" => [
        "status" => "needs_configuration",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"status":"needs_configuration"}' "$ELASTICSEARCH_URL/_connector/my-connector/_status"
client.connector().updateStatus(u -> u
    .connectorId("my-connector")
    .status(ConnectorStatus.NeedsConfiguration)
);
Request example
{
    "status": "needs_configuration"
}
Response examples (200)
{
  "result": "updated"
}

Update data stream mappings Generally available

PUT /_data_stream/{name}/_mappings

This API can be used to override mappings on specific data streams. These overrides will take precedence over what is specified in the template that the data stream matches. The mapping change is only applied to new write indices that are created during rollover after this API is called. No indices are changed by this API.

Required authorization

  • Index privileges: manage

Path parameters

  • name string | array[string] Required

    A comma-separated list of data streams or data stream patterns.

Query parameters

  • dry_run boolean

    If true, the request does not actually change the mappings on any data streams. Instead, it simulates changing the mappings and reports back what would have happened had those mappings actually been applied (see the sketch after these query parameters).

  • master_timeout string

    The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.
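
As referenced in the dry_run description above, here is a sketch of a dry run using the Python client method shown later in this section; the dry_run flag mirrors the query parameter, and the mapping values are illustrative.

resp = client.indices.put_data_stream_mappings(
    name="my-data-stream",
    dry_run=True,  # simulate only: report what would have happened, change nothing
    mappings={
        "properties": {
            "field1": {"type": "ip"}
        }
    },
)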

application/json

Body Required

  • all_field object
    Hide all_field attributes Show all_field attributes object
    • analyzer string Required
    • enabled boolean Required
    • omit_norms boolean Required
    • search_analyzer string Required
    • similarity string Required
    • store boolean Required
    • store_term_vector_offsets boolean Required
    • store_term_vector_payloads boolean Required
    • store_term_vector_positions boolean Required
    • store_term_vectors boolean Required
  • date_detection boolean
  • dynamic string

    Values are strict, runtime, true, or false.

  • dynamic_date_formats array[string]
  • dynamic_templates array[object]
  • _field_names object
    Hide _field_names attribute Show _field_names attribute object
    • enabled boolean Required
  • index_field object
    Hide index_field attribute Show index_field attribute object
    • enabled boolean Required
  • _meta object
    Hide _meta attribute Show _meta attribute object
    • * object Additional properties
  • numeric_detection boolean
  • properties object
  • _routing object
    Hide _routing attribute Show _routing attribute object
    • required boolean Required
  • _size object
    Hide _size attribute Show _size attribute object
    • enabled boolean Required
  • _source object
    Hide _source attributes Show _source attributes object
    • compress boolean
    • compress_threshold string
    • enabled boolean
    • excludes array[string]
    • includes array[string]
    • mode string

      Supported values include:

      • disabled
      • stored
      • synthetic: Instead of storing source documents on disk exactly as you send them, Elasticsearch can reconstruct source content on the fly upon retrieval.

      Values are disabled, stored, or synthetic.

  • runtime object
    Hide runtime attribute Show runtime attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • fields object

        For type composite

        Hide fields attribute Show fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        Hide fetch_fields attributes Show fetch_fields attributes object
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • input_field string

        For type lookup

      • target_field string

        For type lookup

      • target_index string

        For type lookup

      • script object

        Painless script executed at query time.

        Hide script attributes Show script attributes object
        • source string | object

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang string

          Specifies the language the script is written in.

          Supported values include:

          • painless: Painless scripting language, purpose-built for Elasticsearch.
          • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
          • mustache: Mustache templating language, used for templates.
          • java: Expert Java API

          Values are painless, expression, mustache, or java.

        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • type string Required

        Field type, which can be: boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

  • enabled boolean
  • subobjects string

    Values are true or false.

  • _data_stream_timestamp object
    Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
    • enabled boolean Required
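
To illustrate the runtime section of the body above, here is a sketch of a request that defines a runtime keyword field computed by a Painless script at query time; the field name and script are illustrative.

resp = client.indices.put_data_stream_mappings(
    name="my-data-stream",
    mappings={
        "runtime": {
            "day_of_week": {  # illustrative runtime field
                "type": "keyword",
                "script": {
                    "source": "emit(doc['@timestamp'].value.dayOfWeekEnum.getDisplayName(TextStyle.FULL, Locale.ENGLISH))"
                }
            }
        }
    },
)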

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • data_streams array[object] Required
      Hide data_streams attributes Show data_streams attributes object
      • name string Required

        The data stream name.

      • applied_to_data_stream boolean Required

        If the mappings were successfully applied to the data stream (or would have been, if running in dry_run mode), it is true. If an error occurred, it is false.

      • error string

        A message explaining why the mappings could not be applied to the data stream.

      • mappings object

        The mappings that are specific to this data stream and that will override any mappings from the matching index template.

        Hide mappings attributes Show mappings attributes object
        • all_field object
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
        • index_field object
        • _meta object
        • numeric_detection boolean
        • properties object
        • _routing object
        • _size object
        • _source object
        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
      • effective_mappings object

        The mappings that are effective on this data stream, taking into account the mappings from the matching index template and the mappings specific to this data stream.

        Hide effective_mappings attributes Show effective_mappings attributes object
        • all_field object
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
        • index_field object
        • _meta object
        • numeric_detection boolean
        • properties object
        • _routing object
        • _size object
        • _source object
        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
PUT /_data_stream/{name}/_mappings
PUT /_data_stream/my-data-stream/_mappings
{
   "properties":{
      "field1":{
         "type":"ip"
      },
      "field3":{
         "type":"text"
      }
   }
}
resp = client.indices.put_data_stream_mappings(
    name="my-data-stream",
    mappings={
        "properties": {
            "field1": {
                "type": "ip"
            },
            "field3": {
                "type": "text"
            }
        }
    },
)
const response = await client.indices.putDataStreamMappings({
  name: "my-data-stream",
  mappings: {
    properties: {
      field1: {
        type: "ip",
      },
      field3: {
        type: "text",
      },
    },
  },
});
response = client.indices.put_data_stream_mappings(
  name: "my-data-stream",
  body: {
    "properties": {
      "field1": {
        "type": "ip"
      },
      "field3": {
        "type": "text"
      }
    }
  }
)
$resp = $client->indices()->putDataStreamMappings([
    "name" => "my-data-stream",
    "body" => [
        "properties" => [
            "field1" => [
                "type" => "ip",
            ],
            "field3" => [
                "type" => "text",
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"properties":{"field1":{"type":"ip"},"field3":{"type":"text"}}}' "$ELASTICSEARCH_URL/_data_stream/my-data-stream/_mappings"
Request example
This is a request to add or modify two fields in a mapping on a data stream.
{
   "properties":{
      "field1":{
         "type":"ip"
      },
      "field3":{
         "type":"text"
      }
   }
}
This shows a response to `PUT /_data_stream/my-data-stream/_mappings` when two field mappings are successfully applied to the data stream. The overrides appear in both `mappings` and `effective_mappings` and take effect on the new write index created at the next rollover.
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "applied_to_data_stream": true,
      "mappings": {
        "properties": {
          "field1": {
            "type": "ip"
          },
          "field3": {
            "type": "text"
          }
        }
      },
      "effective_mappings": {
        "properties": {
          "field1": {
            "type": "ip"
          },
          "field3": {
            "type": "text"
          }
        }
      }
    }
  ]
}
This shows a response to `PUT /_data_stream/my-data-stream/_mappings` when a user attempts to apply a mapping that fails to parse (here, the nonexistent mapper type `txt` on `field1`). As a result, no change was applied to the data stream.
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "applied_to_data_stream": false,
      "error": "Failed to parse mapping: The mapper type [txt] declared on field [field1] does not exist. It might have been created within a future version or requires a plugin to be installed. Check the documentation.",
      "mappings": {
        "_doc": {}
      },
      "effective_mappings": {
        "_doc": {}
      }
    }
  ]
}

Get data stream options Generally available

GET /_data_stream/{name}/_options

Get the data stream options configuration of one or more data streams.

Path parameters

  • name string | array[string] Required

    Comma-separated list of data streams to limit the request. Supports wildcards (*). To target all data streams, omit this parameter or use * or _all.

Query parameters

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • data_streams array[object] Required
      Hide data_streams attributes Show data_streams attributes object
      • name string Required
      • options object

        Data stream options contain the configuration of data stream level features for a given data stream, for example, the failure store configuration.

        Hide options attribute Show options attribute object
        • failure_store object

          If defined, it specifies configuration for the failure store of this data stream.

GET /_data_stream/{name}/_options
curl \
 --request GET 'http://api.example.com/_data_stream/{name}/_options' \
 --header "Authorization: $API_KEY"
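
A concrete Python sketch for this endpoint, assuming the generated indices.get_data_stream_options helper is available in your client version and an existing client instance named `client`; the data stream name is illustrative.

resp = client.indices.get_data_stream_options(
    name="my-data-stream",  # illustrative data stream name
)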




Get data stream settings Generally available

GET /_data_stream/{name}/_settings

Get setting information for one or more data streams.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • name string | array[string] Required

    A comma-separated list of data streams or data stream patterns. Supports wildcards (*).

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • data_streams array[object] Required
      Hide data_streams attributes Show data_streams attributes object
      • name string Required

        The name of the data stream.

      • settings object Required Additional properties

        The settings specific to this data stream

        Index settings
      • effective_settings object Required Additional properties

        The settings specific to this data stream merged with the settings from its template. These effective_settings are the settings that will be used when a new index is created for this data stream.

        Index settings
GET /_data_stream/{name}/_settings
GET /_data_stream/my-data-stream/_settings
resp = client.indices.get_data_stream_settings(
    name="my-data-stream",
)
const response = await client.indices.getDataStreamSettings({
  name: "my-data-stream",
});
response = client.indices.get_data_stream_settings(
  name: "my-data-stream"
)
$resp = $client->indices()->getDataStreamSettings([
    "name" => "my-data-stream",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/my-data-stream/_settings"
Response examples (200)
This is a response to `GET /_data_stream/my-data-stream/_settings` for `my-data-stream`, which has two settings set explicitly. The `effective_settings` field shows additional settings that are pulled from its template.
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "settings": {
        "index": {
          "lifecycle": {
            "name": "new-test-policy"
          },
          "number_of_shards": "11"
        }
      },
      "effective_settings": {
        "index": {
          "lifecycle": {
            "name": "new-test-policy"
          },
          "mode": "standard",
          "number_of_shards": "11",
          "number_of_replicas": "0"
        }
      }
    }
  ]
}

Bulk index or delete documents Generally available

PUT /{index}/_bulk

All methods and paths for this operation:

POST /_bulk

PUT /_bulk
POST /{index}/_bulk
PUT /{index}/_bulk

Perform multiple index, create, delete, and update actions in a single request. This reduces overhead and can greatly increase indexing speed.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:

  • To use the create action, you must have the create_doc, create, index, or write index privilege. Data streams support only the create action.
  • To use the index action, you must have the create, index, or write index privilege.
  • To use the delete action, you must have the delete or write index privilege.
  • To use the update action, you must have the index or write index privilege.
  • To automatically create a data stream or index with a bulk API request, you must have the auto_configure, create_index, or manage index privilege.
  • To make the result of a bulk operation visible to search using the refresh parameter, you must have the maintenance or manage index privilege.

Automatic data stream creation requires a matching index template with data stream enabled.

The actions are specified in the request body using a newline delimited JSON (NDJSON) structure:

action_and_meta_data\n
optional_source\n
action_and_meta_data\n
optional_source\n
....
action_and_meta_data\n
optional_source\n

The index and create actions expect a source on the next line and have the same semantics as the op_type parameter in the standard index API. A create action fails if a document with the same ID already exists in the target. An index action adds or replaces a document as necessary.

NOTE: Data streams support only the create action. To update or delete a document in a data stream, you must target the backing index containing the document.

An update action expects the partial document, upsert, or script and its options to be specified on the next line.

A delete action does not expect a source on the next line and has the same semantics as the standard delete API.

NOTE: The final line of data must end with a newline character (\n). Each newline character may be preceded by a carriage return (\r). When sending NDJSON data to the _bulk endpoint, use a Content-Type header of application/json or application/x-ndjson. Because this format uses literal newline characters (\n) as delimiters, make sure that the JSON actions and sources are not pretty printed.

If you provide a target in the request path, it is used for any actions that don't explicitly specify an _index argument.

A note on the format: the idea here is to make processing as fast as possible. As some of the actions are redirected to other shards on other nodes, only the action_and_meta_data line is parsed on the receiving node side.

Client libraries using this protocol should strive to do something similar on the client side and reduce buffering as much as possible.

There is no "correct" number of actions to perform in a single bulk request. Experiment with different settings to find the optimal size for your particular workload. Note that Elasticsearch limits the maximum size of an HTTP request to 100mb by default, so clients must ensure that no request exceeds this size. It is not possible to index a single document that exceeds the size limit, so you must pre-process any such documents into smaller pieces before sending them to Elasticsearch. For instance, split documents into pages or chapters before indexing them, or store raw binary data in a system outside Elasticsearch and replace the raw data with a link to the external system in the documents that you send to Elasticsearch.

Client support for bulk requests

Some of the officially supported clients provide helpers to assist with bulk requests and reindexing:

  • Go: Check out esutil.BulkIndexer
  • Perl: Check out Search::Elasticsearch::Client::5_0::Bulk and Search::Elasticsearch::Client::5_0::Scroll
  • Python: Check out elasticsearch.helpers.* (see the sketch after this list)
  • JavaScript: Check out client.helpers.*
  • .NET: Check out BulkAllObservable
  • PHP: Check out bulk indexing.
  • Ruby: Check out Elasticsearch::Helpers::BulkHelper
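
As referenced in the Python item above, here is a short sketch of the elasticsearch.helpers.bulk helper, which chunks actions and streams them to the _bulk endpoint; the connection details and documents are illustrative.

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

client = Elasticsearch("http://localhost:9200")  # illustrative connection details

# Each action is a document dict; metadata uses the underscore-prefixed keys.
actions = (
    {"_index": "test", "_id": str(i), "field1": f"value{i}"}
    for i in range(1000)
)
success_count, errors = bulk(client, actions)  # (number indexed, list of errors)
print(success_count, errors)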

Submitting bulk requests with cURL

If you're providing text file input to curl, you must use the --data-binary flag instead of plain -d. The latter doesn't preserve newlines. For example:

$ cat requests
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
$ curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --data-binary "@requests"; echo
{"took":7, "errors": false, "items":[{"index":{"_index":"test","_id":"1","_version":1,"result":"created","forced_refresh":false}}]}

Optimistic concurrency control

Each index and delete action within a bulk API call may include the if_seq_no and if_primary_term parameters in their respective action and meta data lines. The if_seq_no and if_primary_term parameters control how operations are run, based on the last modification to existing documents. See Optimistic concurrency control for more details.
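
For illustration, here is a Python sketch of a bulk index action guarded by if_seq_no and if_primary_term; the values are illustrative and must match the document's last-known sequence number and primary term.

resp = client.bulk(
    operations=[
        # The action line carries the concurrency-control parameters.
        {"index": {"_index": "test", "_id": "1", "if_seq_no": 3, "if_primary_term": 1}},
        {"field1": "value1"},
    ],
)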

Versioning

Each bulk item can include the version value using the version field. It automatically follows the behavior of the index or delete operation based on the _version mapping. It also supports the version_type parameter (see the sketch below).
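
For example, a sketch of a bulk index action that carries an external version; the values are illustrative.

resp = client.bulk(
    operations=[
        # version and version_type behave as in the standard index and delete APIs.
        {"index": {"_index": "test", "_id": "1", "version": 10, "version_type": "external"}},
        {"field1": "value1"},
    ],
)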

Routing

Each bulk item can include the routing value using the routing field. It automatically follows the behavior of the index or delete operation based on the _routing mapping.

NOTE: Data streams do not support custom routing unless they were created with the allow_custom_routing setting enabled in the template.
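
A sketch of a bulk action that routes the document to a specific shard via the routing field; the routing value is illustrative.

resp = client.bulk(
    operations=[
        {"index": {"_index": "test", "_id": "1", "routing": "user-1"}},
        {"field1": "value1"},
    ],
)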

Wait for active shards

When making bulk calls, you can set the wait_for_active_shards parameter to require a minimum number of shard copies to be active before starting to process the bulk request.
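
For example, a sketch that requires at least two active copies of each shard before the bulk request is processed; the shard count is illustrative.

resp = client.bulk(
    wait_for_active_shards=2,  # require two active copies per shard before proceeding
    operations=[
        {"index": {"_index": "test", "_id": "1"}},
        {"field1": "value1"},
    ],
)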

Refresh

Control when the changes made by this request are visible to search.

NOTE: Only the shards that receive the bulk request will be affected by refresh. Imagine a _bulk?refresh=wait_for request with three documents in it that happen to be routed to different shards in an index with five shards. The request will only wait for those three shards to refresh. The other two shards that make up the index do not participate in the _bulk request at all.

You might want to disable the refresh interval temporarily to improve indexing throughput for large bulk requests. Refer to the linked documentation for step-by-step instructions using the index settings API.
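
A sketch of temporarily disabling the refresh interval with the index settings API before a large bulk load and restoring it afterwards; the index name and interval are illustrative.

# Disable automatic refresh for the target index during bulk indexing.
client.indices.put_settings(
    index="my-index-000001",
    settings={"index": {"refresh_interval": "-1"}},
)

# ... run the large bulk requests here ...

# Restore a refresh interval afterwards.
client.indices.put_settings(
    index="my-index-000001",
    settings={"index": {"refresh_interval": "1s"}},
)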

External documentation

Path parameters

  • index string Required

    The name of the data stream, index, or index alias to perform bulk actions on.

Query parameters

  • include_source_on_error boolean

    If true, the document source is included in the error message if a parsing error occurs.

  • list_executed_pipelines boolean

    If true, the response will include the ingest pipelines that were run for each index or create.

  • pipeline string

    The pipeline identifier to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.

  • refresh string

    If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, wait for a refresh to make this operation visible to search. If false, do nothing with refreshes. Valid values: true, false, wait_for.

    Values are true, false, or wait_for.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or contains a list of fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • timeout string

    The period each action waits for the following operations: automatic index creation, dynamic mapping updates, and waiting for active shards. The default is 1m (one minute), which guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur.

    Values are -1 or 0.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default is 1, which waits for each primary shard to be active.

    Values are all or index-setting.

  • require_alias boolean

    If true, the request's actions must target an index alias.

  • require_data_stream boolean

    If true, the request's actions must target a data stream (existing or to be created).

application/json

Body object Required

One of:
  • index object

    Index the specified document. If the document exists, it replaces the document and increments the version. The following line must contain the source data to be indexed.

    Hide index attributes Show index attributes object
    • if_primary_term number
    • dynamic_templates object

      A map from the full name of fields to the name of dynamic templates. It defaults to an empty map. If a name matches a dynamic template, that template will be applied regardless of other match predicates defined in the template. If a field is already defined in the mapping, then this parameter won't be used.

    • pipeline string

      The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.

    • require_alias boolean

      If true, the request's actions must target an index alias.

      Default value is false.

  • create object

    Index the specified document if it does not already exist. The following line must contain the source data to be indexed.

    Hide create attributes Show create attributes object
    • if_primary_term number
    • dynamic_templates object

      A map from the full name of fields to the name of dynamic templates. It defaults to an empty map. If a name matches a dynamic template, that template will be applied regardless of other match predicates defined in the template. If a field is already defined in the mapping, then this parameter won't be used.

    • pipeline string

      The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.

    • require_alias boolean

      If true, the request's actions must target an index alias.

      Default value is false.

  • update object

    Perform a partial document update. The following line must contain the partial document and update options.

    Hide update attributes Show update attributes object
    • _id string

      The document ID.

    • _index string

      The name of the index or index alias to perform the action on.

    • routing string

      A custom value used to route operations to a specific shard.

    • if_primary_term number
    • if_seq_no number
    • version number
    • version_type string

      Supported values include:

      • internal: Use internal versioning that starts at 1 and increments with each update or delete.
      • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
      • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
      • force: This option is deprecated because it can cause primary and replica shards to diverge.

      Values are internal, external, external_gte, or force.

    • require_alias boolean

      If true, the request's actions must target an index alias.

      Default value is false.

    • retry_on_conflict number

      The number of times an update should be retried in the case of a version conflict.

  • delete object

    Remove the specified document from the index.

    Hide delete attributes Show delete attributes object
    • _id string

      The document ID.

    • _index string

      The name of the index or index alias to perform the action on.

    • routing string

      A custom value used to route operations to a specific shard.

    • if_primary_term number
    • if_seq_no number
    • version number
    • version_type string

      Supported values include:

      • internal: Use internal versioning that starts at 1 and increments with each update or delete.
      • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
      • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
      • force: This option is deprecated because it can cause primary and replica shards to diverge.

      Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • errors boolean Required

      If true, one or more of the operations in the bulk request did not complete successfully.

    • items array[object] Required

      The result of each operation in the bulk request, in the order they were submitted.

      Hide items attribute Show items attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • _id string | null

          The document ID associated with the operation.

        • _index string Required

          The name of the index associated with the operation. If the operation targeted a data stream, this is the backing index into which the document was written.

        • status number Required

          The HTTP status code returned for the operation.

        • failure_store string

          Values are not_applicable_or_unknown, used, not_enabled, or failed.

        • error object

          Additional information about the failed operation. The property is returned only for failed operations.

          Hide error attributes Show error attributes object
          • type string Required

            The type of error

          • reason
          • stack_trace string

            The server stack trace. Present only if the error_trace=true parameter was sent with the request.

          • root_cause array[object]
          • suppressed array[object]
        • _primary_term number

          The primary term assigned to the document for the operation. This property is returned only for successful operations.

        • result string

          The result of the operation. Successful values are created, deleted, and updated.

        • _seq_no number

          The sequence number assigned to the document for the operation. Sequence numbers are used to ensure an older version of a document doesn't overwrite a newer version.

        • _shards object

          Shard information for the operation.

          Hide _shards attribute Show _shards attribute object
          • failures array[object]
        • _version number

          The document version associated with the operation. The document version is incremented each time the document is updated. This property is returned only for successful actions.

        • forced_refresh boolean
        • get object
          Hide get attributes Show get attributes object
          • fields object
          • found boolean Required
          • _primary_term number
          • _source object
    • took number Required

      The length of time, in milliseconds, it took to process the bulk request.

    • ingest_took number
POST _bulk
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_id" : "2" } }
{ "create" : { "_index" : "test", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_index" : "test"} }
{ "doc" : {"field2" : "value2"} }
resp = client.bulk(
    operations=[
        {
            "index": {
                "_index": "test",
                "_id": "1"
            }
        },
        {
            "field1": "value1"
        },
        {
            "delete": {
                "_index": "test",
                "_id": "2"
            }
        },
        {
            "create": {
                "_index": "test",
                "_id": "3"
            }
        },
        {
            "field1": "value3"
        },
        {
            "update": {
                "_id": "1",
                "_index": "test"
            }
        },
        {
            "doc": {
                "field2": "value2"
            }
        }
    ],
)
const response = await client.bulk({
  operations: [
    {
      index: {
        _index: "test",
        _id: "1",
      },
    },
    {
      field1: "value1",
    },
    {
      delete: {
        _index: "test",
        _id: "2",
      },
    },
    {
      create: {
        _index: "test",
        _id: "3",
      },
    },
    {
      field1: "value3",
    },
    {
      update: {
        _id: "1",
        _index: "test",
      },
    },
    {
      doc: {
        field2: "value2",
      },
    },
  ],
});
response = client.bulk(
  body: [
    {
      "index": {
        "_index": "test",
        "_id": "1"
      }
    },
    {
      "field1": "value1"
    },
    {
      "delete": {
        "_index": "test",
        "_id": "2"
      }
    },
    {
      "create": {
        "_index": "test",
        "_id": "3"
      }
    },
    {
      "field1": "value3"
    },
    {
      "update": {
        "_id": "1",
        "_index": "test"
      }
    },
    {
      "doc": {
        "field2": "value2"
      }
    }
  ]
)
$resp = $client->bulk([
    "body" => array(
        [
            "index" => [
                "_index" => "test",
                "_id" => "1",
            ],
        ],
        [
            "field1" => "value1",
        ],
        [
            "delete" => [
                "_index" => "test",
                "_id" => "2",
            ],
        ],
        [
            "create" => [
                "_index" => "test",
                "_id" => "3",
            ],
        ],
        [
            "field1" => "value3",
        ],
        [
            "update" => [
                "_id" => "1",
                "_index" => "test",
            ],
        ],
        [
            "doc" => [
                "field2" => "value2",
            ],
        ],
    ),
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '[{"index":{"_index":"test","_id":"1"}},{"field1":"value1"},{"delete":{"_index":"test","_id":"2"}},{"create":{"_index":"test","_id":"3"}},{"field1":"value3"},{"update":{"_id":"1","_index":"test"}},{"doc":{"field2":"value2"}}]' "$ELASTICSEARCH_URL/_bulk"
Run `POST _bulk` to perform multiple operations.
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_id" : "2" } }
{ "create" : { "_index" : "test", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_index" : "test"} }
{ "doc" : {"field2" : "value2"} }
When you run `POST _bulk` and use the `update` action, you can use `retry_on_conflict` as a field in the action itself (not in the extra payload line) to specify how many times an update should be retried in the case of a version conflict.
{ "update" : {"_id" : "1", "_index" : "index1", "retry_on_conflict" : 3} }
{ "doc" : {"field" : "value"} }
{ "update" : { "_id" : "0", "_index" : "index1", "retry_on_conflict" : 3} }
{ "script" : { "source": "ctx._source.counter += params.param1", "lang" : "painless", "params" : {"param1" : 1}}, "upsert" : {"counter" : 1}}
{ "update" : {"_id" : "2", "_index" : "index1", "retry_on_conflict" : 3} }
{ "doc" : {"field" : "value"}, "doc_as_upsert" : true }
{ "update" : {"_id" : "3", "_index" : "index1", "_source" : true} }
{ "doc" : {"field" : "value"} }
{ "update" : {"_id" : "4", "_index" : "index1"} }
{ "doc" : {"field" : "value"}, "_source": true}
To return only information about failed operations, run `POST /_bulk?filter_path=items.*.error`.
{ "update": {"_id": "5", "_index": "index1"} }
{ "doc": {"my_field": "foo"} }
{ "update": {"_id": "6", "_index": "index1"} }
{ "doc": {"my_field": "foo"} }
{ "create": {"_id": "7", "_index": "index1"} }
{ "my_field": "foo" }
Run `POST /_bulk` to perform a bulk request that consists of index and create actions with the `dynamic_templates` parameter. The bulk request creates two new fields `work_location` and `home_location` with type `geo_point` according to the `dynamic_templates` parameter. However, the `raw_location` field is created using default dynamic mapping rules, as a text field in that case since it is supplied as a string in the JSON document.
{ "index" : { "_index" : "my_index", "_id" : "1", "dynamic_templates": {"work_location": "geo_point"}} }
{ "field" : "value1", "work_location": "41.12,-71.34", "raw_location": "41.12,-71.34"}
{ "create" : { "_index" : "my_index", "_id" : "2", "dynamic_templates": {"home_location": "geo_point"}} }
{ "field" : "value2", "home_location": "41.12,-71.34"}
Response examples (200)
{
   "took": 30,
   "errors": false,
   "items": [
      {
         "index": {
            "_index": "test",
            "_id": "1",
            "_version": 1,
            "result": "created",
            "_shards": {
               "total": 2,
               "successful": 1,
               "failed": 0
            },
            "status": 201,
            "_seq_no" : 0,
            "_primary_term": 1
         }
      },
      {
         "delete": {
            "_index": "test",
            "_id": "2",
            "_version": 1,
            "result": "not_found",
            "_shards": {
               "total": 2,
               "successful": 1,
               "failed": 0
            },
            "status": 404,
            "_seq_no" : 1,
            "_primary_term" : 2
         }
      },
      {
         "create": {
            "_index": "test",
            "_id": "3",
            "_version": 1,
            "result": "created",
            "_shards": {
               "total": 2,
               "successful": 1,
               "failed": 0
            },
            "status": 201,
            "_seq_no" : 2,
            "_primary_term" : 3
         }
      },
      {
         "update": {
            "_index": "test",
            "_id": "1",
            "_version": 2,
            "result": "updated",
            "_shards": {
                "total": 2,
                "successful": 1,
                "failed": 0
            },
            "status": 200,
            "_seq_no" : 3,
            "_primary_term" : 4
         }
      }
   ]
}
If you run `POST /_bulk` with operations that update non-existent documents, the operations cannot complete successfully. The API returns a response with an `errors` property value `true`. The response also includes an error object for any failed operations. The error object contains additional information about the failure, such as the error type and reason.
{
  "took": 486,
  "errors": true,
  "items": [
    {
      "update": {
        "_index": "index1",
        "_id": "5",
        "status": 404,
        "error": {
          "type": "document_missing_exception",
          "reason": "[5]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    },
    {
      "update": {
        "_index": "index1",
        "_id": "6",
        "status": 404,
        "error": {
          "type": "document_missing_exception",
          "reason": "[6]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    },
    {
      "create": {
        "_index": "index1",
        "_id": "7",
        "_version": 1,
        "result": "created",
        "_shards": {
          "total": 2,
          "successful": 1,
          "failed": 0
        },
        "_seq_no": 0,
        "_primary_term": 1,
        "status": 201
      }
    }
  ]
}
An example response from `POST /_bulk?filter_path=items.*.error`, which returns only information about failed operations.
{
  "items": [
    {
      "update": {
        "error": {
          "type": "document_missing_exception",
          "reason": "[5]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    },
    {
      "update": {
        "error": {
          "type": "document_missing_exception",
          "reason": "[6]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    }
  ]
}

Check for a document source Generally available

HEAD /{index}/_source/{id}

Check whether a document source exists in an index. For example:

HEAD my-index-000001/_source/1

A document's source is not available if it is disabled in the mapping.

Required authorization

  • Index privileges: read
External documentation

Path parameters

  • index string Required

    A comma-separated list of data streams, indices, and aliases. It supports wildcards (*).

  • id string Required

    A unique identifier for the document.

Query parameters

  • preference string

    The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or lists the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude in the response.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response.

  • version number

    The version number for concurrency control. It must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal to or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
HEAD my-index-000001/_source/1
resp = client.exists_source(
    index="my-index-000001",
    id="1",
)
const response = await client.existsSource({
  index: "my-index-000001",
  id: 1,
});
response = client.exists_source(
  index: "my-index-000001",
  id: "1"
)
$resp = $client->existsSource([
    "index" => "my-index-000001",
    "id" => "1",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_source/1"
client.existsSource(e -> e
    .id("1")
    .index("my-index-000001")
);
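Because this is a HEAD request, the status code alone carries the answer: 200 if the source is available, 404 if not. A small sketch with the Python client, whose HEAD responses behave like booleans (the endpoint is an assumption):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed endpoint

# Truthy on 200 (source available), falsy on 404.
if client.exists_source(index="my-index-000001", id="1"):
    print("document source is available")
else:
    print("missing document, or _source is disabled in the mapping")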

Reindex documents Generally available

POST /_reindex

Copy documents from a source to a destination. You can copy all documents to the destination index or reindex a subset of the documents. The source can be any existing index, alias, or data stream. The destination must differ from the source. For example, you cannot reindex a data stream into itself.

IMPORTANT: Reindex requires _source to be enabled for all documents in the source. The destination should be configured as wanted before calling the reindex API. Reindex does not copy the settings from the source or its associated template. Mappings, shard counts, and replicas, for example, must be configured ahead of time.

If the Elasticsearch security features are enabled, you must have the following security privileges:

  • The read index privilege for the source data stream, index, or alias.
  • The write index privilege for the destination data stream, index, or index alias.
  • To automatically create a data stream or index with a reindex API request, you must have the auto_configure, create_index, or manage index privilege for the destination data stream, index, or alias.
  • If reindexing from a remote cluster, the source.remote.user must have the monitor cluster privilege and the read index privilege for the source data stream, index, or alias.

If reindexing from a remote cluster, you must explicitly allow the remote host in the reindex.remote.whitelist setting. Automatic data stream creation requires a matching index template with data stream enabled.

The dest element can be configured like the index API to control optimistic concurrency control. Omitting version_type or setting it to internal causes Elasticsearch to blindly dump documents into the destination, overwriting any that happen to have the same ID.

Setting version_type to external causes Elasticsearch to preserve the version from the source, create any documents that are missing, and update any documents that have an older version in the destination than they do in the source.

Setting op_type to create causes the reindex API to create only missing documents in the destination. All existing documents will cause a version conflict.

IMPORTANT: Because data streams are append-only, any reindex request to a destination data stream must have an op_type of create. A reindex can only add new documents to a destination data stream. It cannot update existing documents in a destination data stream.

By default, version conflicts abort the reindex process. To continue reindexing if there are conflicts, set the conflicts request body property to proceed. In this case, the response includes a count of the version conflicts that were encountered. Note that the handling of other error types is unaffected by the conflicts property. Additionally, if you opt to count version conflicts, the operation could attempt to reindex more documents from the source than max_docs until it has successfully indexed max_docs documents into the target or it has gone through every document in the source query.
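For example, a minimal sketch with the Python client that tolerates version conflicts and reports the conflict count afterwards (the endpoint and index names are assumptions):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed endpoint

resp = client.reindex(
    source={"index": "my-index-000001"},
    dest={"index": "my-new-index-000001", "op_type": "create"},
    conflicts="proceed",  # count version conflicts instead of aborting
)
print(resp["version_conflicts"], "conflicts,", resp["created"], "created")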

It's recommended to reindex only indices with a green health status. Reindexing can fail when a node shuts down or crashes.

  • When requested with wait_for_completion=true (default), the request fails if the node shuts down.
  • When requested with wait_for_completion=false, a task id is returned, for use with the task management APIs. The task may disappear or fail if the node shuts down. When retrying a failed reindex operation, it might be necessary to set conflicts=proceed or to first delete the partial destination index. Additionally, dry runs, checking disk space, and fetching index recovery information can help address the root cause.

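A sketch of the asynchronous pattern with the Python client: start the reindex as a background task, then poll it through the task management API (the endpoint and index names are assumptions):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed endpoint

# Returns immediately with a task ID instead of blocking.
resp = client.reindex(
    source={"index": "my-index-000001"},
    dest={"index": "my-new-index-000001"},
    wait_for_completion=False,
)
task_id = resp["task"]

# Poll until the task reports completion.
status = client.tasks.get(task_id=task_id)
print(status["completed"], status["task"]["status"])
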
Refer to the linked documentation for examples of how to reindex documents.

Required authorization

  • Index privileges: read,write
External documentation

Query parameters

  • refresh boolean

    If true, the request refreshes affected shards to make this operation visible to search.

  • requests_per_second number

    The throttle for this request in sub-requests per second. By default, there is no throttle.

  • scroll string

    The period of time that a consistent view of the index should be maintained for scrolled search.

    Values are -1 or 0.

  • slices number | string

    The number of slices this task should be divided into. It defaults to one slice, which means the task isn't sliced into subtasks.

    Reindex supports sliced scroll to parallelize the reindexing process. This parallelization can improve efficiency and provide a convenient way to break the request down into smaller parts.

    NOTE: Reindexing from remote clusters does not support manual or automatic slicing.

    If set to auto, Elasticsearch chooses the number of slices to use. This setting will use one slice per shard, up to a certain limit. If there are multiple sources, it will choose the number of slices based on the index or backing index with the smallest number of shards.

    Value is auto.

  • max_docs number

    The maximum number of documents to reindex. By default, all documents are reindexed. If it is a value less than or equal to scroll_size, a scroll will not be used to retrieve the results for the operation.

    If conflicts is set to proceed, the reindex operation could attempt to reindex more documents from the source than max_docs until it has successfully indexed max_docs documents into the target or it has gone through every document in the source query.

  • timeout string

    The period each indexing operation waits for automatic index creation, dynamic mapping updates, and active shards. By default, Elasticsearch waits for at least one minute before failing. The actual wait time could be longer, particularly when multiple waits occur.

    Values are -1 or 0.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set it to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value is one, which means it waits for each primary shard to be active.

    Values are all or index-setting.

  • wait_for_completion boolean

    If true, the request blocks until the operation is complete.

  • require_alias boolean

    If true, the destination must be an index alias.

application/json

Body Required

  • conflicts string

    Indicates whether to continue reindexing even when there are conflicts.

    Supported values include:

    • abort: Stop reindexing if there are conflicts.
    • proceed: Continue reindexing even if there are conflicts.

    Values are abort or proceed.

  • dest object Required

    The destination you are copying to.

    Hide dest attributes Show dest attributes object
    • index string Required

      The name of the data stream, index, or index alias you are copying to.

    • op_type string

      If it is create, the operation will only index documents that do not already exist (also known as "put if absent").

      IMPORTANT: To reindex to a data stream destination, this argument must be create.

      Supported values include:

      • index: Overwrite any documents that already exist.
      • create: Only index documents that do not already exist.

      Values are index or create.

    • pipeline string

      The name of the pipeline to use.

    • routing string

      By default, a document's routing is preserved unless it's changed by the script. If it is keep, the routing on the bulk request sent for each match is set to the routing on the match. If it is discard, the routing on the bulk request sent for each match is set to null. If it is =value, the routing on the bulk request sent for each match is set to the value specified after the equals sign (=).

    • version_type string

      The versioning to use for the indexing operation.

      Supported values include:

      • internal: Use internal versioning that starts at 1 and increments with each update or delete.
      • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
      • external_gte: Only index the document if the specified version is equal to or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
      • force: This option is deprecated because it can cause primary and replica shards to diverge.

      Values are internal, external, external_gte, or force.

  • max_docs number

    The maximum number of documents to reindex. By default, all documents are reindexed. If it is a value less than or equal to scroll_size, a scroll will not be used to retrieve the results for the operation.

    If conflicts is set to proceed, the reindex operation could attempt to reindex more documents from the source than max_docs until it has successfully indexed max_docs documents into the target or it has gone through every document in the source query.

  • script object

    The script to run to update the document source or metadata when reindexing.

    Hide script attributes Show script attributes object
    • source string | object

      The script source.

      One of:

      The script source.

    • id string

      The id for a stored script.

    • params object

      Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

      Hide params attribute Show params attribute object
      • * object Additional properties
    • lang string

      Specifies the language the script is written in.

      Supported values include:

      • painless: Painless scripting language, purpose-built for Elasticsearch.
      • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
      • mustache: Mustache templating language, used for templates.
      • java: Expert Java API
      Any of:

      Specifies the language the script is written in.

      Supported values include:

      • painless: Painless scripting language, purpose-built for Elasticsearch.
      • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
      • mustache: Mustache templating language, used for templates.
      • java: Expert Java API

      Values are painless, expression, mustache, or java.

    • options object
      Hide options attribute Show options attribute object
      • * string Additional properties
  • size number
  • source object Required

    The source you are copying from.

    Hide source attributes Show source attributes object
    • index string | array[string] Required

      The name of the data stream, index, or alias you are copying from. It accepts a comma-separated list to reindex from multiple sources.

    • query object

      The documents to reindex, which is defined with Query DSL.

      External documentation
    • remote object

      A remote instance of Elasticsearch that you want to index from.

      Hide remote attributes Show remote attributes object
      • connect_timeout string

        The remote connection timeout.

      • headers object

        An object containing the headers of the request.

        Hide headers attribute Show headers attribute object
        • * string Additional properties
      • host string Required

        The URL for the remote instance of Elasticsearch that you want to index from. This information is required when you're indexing from remote.

      • username string

        The username to use for authentication with the remote host.

      • password string

        The password to use for authentication with the remote host.

      • socket_timeout string

        The remote socket read timeout.

    • size number

      The number of documents to index per batch. Use it when you are indexing from remote to ensure that the batches fit within the on-heap buffer, which defaults to a maximum size of 100 MB.

      Default value is 1000.

    • slice object

      Slice the reindex request manually using the provided slice ID and total number of slices.

      Hide slice attributes Show slice attributes object
      • field string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • id string Required
      • max number Required
    • sort string | object | array[string | object]

      A comma-separated list of <field>:<direction> pairs to sort by before indexing. Use it in conjunction with max_docs to control what documents are reindexed.

      WARNING: Sort in reindex is deprecated. Sorting in reindex was never guaranteed to index documents in order and prevents further development of reindex such as resilience and performance improvements. If used in combination with max_docs, consider using a query filter instead.

      One of:

      A comma-separated list of <field>:<direction> pairs to sort by before indexing. Use it in conjunction with max_docs to control what documents are reindexed.

      WARNING: Sort in reindex is deprecated. Sorting in reindex was never guaranteed to index documents in order and prevents further development of reindex such as resilience and performance improvements. If used in combination with max_docs, consider using a query filter instead.

    • _source string | array[string]

      If true, reindex all source fields. Set it to a list to reindex select fields.

    • runtime_mappings object
      Hide runtime_mappings attribute Show runtime_mappings attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
        • fetch_fields array[object]

          For type lookup

        • format string

          A custom format for date type runtime fields.

        • input_field string

          For type lookup

        • target_field string

          For type lookup

        • target_index string

          For type lookup

        • script object

          Painless script executed at query time.

        • type string Required

          Field type, which can be: boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • batches number

      The number of scroll responses that were pulled back by the reindex.

    • created number

      The number of documents that were successfully created.

    • deleted number

      The number of documents that were successfully deleted.

    • failures array[object]

      If there were any unrecoverable errors during the process, it is an array of those failures. If this array is not empty, the request ended because of those failures. Reindex is implemented using batches and any failure causes the entire process to end but all failures in the current batch are collected into the array. You can use the conflicts option to prevent the reindex from ending on version conflicts.

      Hide failures attributes Show failures attributes object
      • cause object Required

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        Hide cause attributes Show cause attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

      • id string Required
      • index string Required
      • status number Required
    • noops number

      The number of documents that were ignored because the script used for the reindex returned a noop value for ctx.op.

    • retries object

      The number of retries attempted by reindex.

      Hide retries attributes Show retries attributes object
      • bulk number Required

        The number of bulk actions retried.

    • requests_per_second number

      The number of requests per second effectively run during the reindex.

    • slice_id number
    • task string
    • throttled_millis number

      Time unit for milliseconds

    • throttled_until_millis number

      Time unit for milliseconds

    • timed_out boolean

      If any of the requests that ran during the reindex timed out, it is true.

    • took number

      Time unit for milliseconds

    • total number

      The number of documents that were successfully processed.

    • updated number

      The number of documents that were successfully updated. That is to say, a document with the same ID already existed before the reindex updated it.

    • version_conflicts number

      The number of version conflicts that occurred.

POST _reindex
{
  "source": {
    "index": ["my-index-000001", "my-index-000002"]
  },
  "dest": {
    "index": "my-new-index-000002"
  }
}
resp = client.reindex(
    source={
        "index": [
            "my-index-000001",
            "my-index-000002"
        ]
    },
    dest={
        "index": "my-new-index-000002"
    },
)
const response = await client.reindex({
  source: {
    index: ["my-index-000001", "my-index-000002"],
  },
  dest: {
    index: "my-new-index-000002",
  },
});
response = client.reindex(
  body: {
    "source": {
      "index": [
        "my-index-000001",
        "my-index-000002"
      ]
    },
    "dest": {
      "index": "my-new-index-000002"
    }
  }
)
$resp = $client->reindex([
    "body" => [
        "source" => [
            "index" => array(
                "my-index-000001",
                "my-index-000002",
            ),
        ],
        "dest" => [
            "index" => "my-new-index-000002",
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"source":{"index":["my-index-000001","my-index-000002"]},"dest":{"index":"my-new-index-000002"}}' "$ELASTICSEARCH_URL/_reindex"
client.reindex(r -> r
  .dest(d -> d
    .index("my-new-index-000002")
  )
  .source(s -> s
    .index(List.of("my-index-000001","my-index-000002"))
  )
);
Run `POST _reindex` to reindex from multiple sources. The `index` attribute in source can be a list, which enables you to copy from lots of sources in one request. This example copies documents from the `my-index-000001` and `my-index-000002` indices.
{
  "source": {
    "index": ["my-index-000001", "my-index-000002"]
  },
  "dest": {
    "index": "my-new-index-000002"
  }
}
You can use Painless to reindex daily indices to apply a new template to the existing documents. The script extracts the date from the index name and creates a new index with `-1` appended. For example, all data from `metricbeat-2016.05.31` will be reindexed into `metricbeat-2016.05.31-1`.
{
  "source": {
    "index": "metricbeat-*"
  },
  "dest": {
    "index": "metricbeat"
  },
  "script": {
    "lang": "painless",
    "source": "ctx._index = 'metricbeat-' + (ctx._index.substring('metricbeat-'.length(), ctx._index.length())) + '-1'"
  }
}
Run `POST _reindex` to extract a random subset of the source for testing. You might need to adjust the `min_score` value depending on the relative amount of data extracted from source.
{
  "max_docs": 10,
  "source": {
    "index": "my-index-000001",
    "query": {
      "function_score" : {
        "random_score" : {},
        "min_score" : 0.9
      }
    }
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
Run `POST _reindex` to modify documents during reindexing. This example bumps the version of the source document.
{
  "source": {
    "index": "my-index-000001"
  },
  "dest": {
    "index": "my-new-index-000001",
    "version_type": "external"
  },
  "script": {
    "source": "if (ctx._source.foo == 'bar') {ctx._version++; ctx._source.remove('foo')}",
    "lang": "painless"
  }
}
Run `POST _reindex` to reindex from a remote cluster, authenticating with a username and password. When using Elastic Cloud, you can instead authenticate against the remote cluster with an API key passed in the `headers` property.
{
  "source": {
    "remote": {
      "host": "http://otherhost:9200",
      "username": "user",
      "password": "pass"
    },
    "index": "my-index-000001",
    "query": {
      "match": {
        "test": "data"
      }
    }
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
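Because the remote object also accepts a headers map, an API key can be sent instead of a username and password. A sketch with the Python client (the hosts and the key value are placeholders):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local endpoint

resp = client.reindex(
    source={
        "remote": {
            "host": "http://otherhost:9200",
            # Placeholder API key, sent via the headers property.
            "headers": {"Authorization": "ApiKey <base64-encoded-key>"},
        },
        "index": "my-index-000001",
    },
    dest={"index": "my-new-index-000001"},
)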
Run `POST _reindex` to slice a reindex request manually. Provide a slice ID and total number of slices to each request.
{
  "source": {
    "index": "my-index-000001",
    "slice": {
      "id": 0,
      "max": 2
    }
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
Run `POST _reindex?slices=5&refresh` to automatically parallelize using sliced scroll to slice on `_id`. The `slices` parameter specifies the number of slices to use.
{
  "source": {
    "index": "my-index-000001"
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
By default, if reindex sees a document with routing, the routing is preserved unless it's changed by the script. You can set `routing` on the `dest` request to change this behavior. In this example, run `POST _reindex` to copy all documents from the `source` index with the company name `cat` into the `dest` index with routing set to `cat`.
{
  "source": {
    "index": "source",
    "query": {
      "match": {
        "company": "cat"
      }
    }
  },
  "dest": {
    "index": "dest",
    "routing": "=cat"
  }
}
Run `POST _reindex` and use the ingest pipelines feature.
{
  "source": {
    "index": "source"
  },
  "dest": {
    "index": "dest",
    "pipeline": "some_ingest_pipeline"
  }
}
Run `POST _reindex` and add a query to the `source` to limit the documents to reindex. For example, this request copies documents into `my-new-index-000001` only if they have a `user.id` of `kimchy`.
{
  "source": {
    "index": "my-index-000001",
    "query": {
      "term": {
        "user.id": "kimchy"
      }
    }
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
You can limit the number of processed documents by setting `max_docs`. For example, run `POST _reindex` to copy a single document from `my-index-000001` to `my-new-index-000001`.
{
  "max_docs": 1,
  "source": {
    "index": "my-index-000001"
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
You can use source filtering to reindex a subset of the fields in the original documents. For example, run `POST _reindex` to reindex only the `user.id` and `_doc` fields of each document.
{
  "source": {
    "index": "my-index-000001",
    "_source": ["user.id", "_doc"]
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
A reindex operation can build a copy of an index with renamed fields. If your index has documents with `text` and `flag` fields, you can change the latter field name to `tag` during the reindex.
{
  "source": {
    "index": "my-index-000001"
  },
  "dest": {
    "index": "my-new-index-000001"
  },
  "script": {
    "source": "ctx._source.tag = ctx._source.remove(\"flag\")"
  }
}

Get term vector information Generally available

POST /{index}/_termvectors/{id}

All methods and paths for this operation:

GET /{index}/_termvectors

POST /{index}/_termvectors
GET /{index}/_termvectors/{id}
POST /{index}/_termvectors/{id}

Get information and statistics about terms in the fields of a particular document.

You can retrieve term vectors for documents stored in the index or for artificial documents passed in the body of the request. You can specify the fields you are interested in through the fields parameter or by adding the fields to the request body. For example:

GET /my-index-000001/_termvectors/1?fields=message

Fields can be specified using wildcards, similar to the multi match query.

Term vectors are real-time by default, not near real-time. This can be changed by setting the realtime parameter to false.

You can request three types of values: term information, term statistics, and field statistics. By default, all term information and field statistics are returned for all fields but term statistics are excluded.

Term information

  • term frequency in the field (always returned)
  • term positions (positions: true)
  • start and end offsets (offsets: true)
  • term payloads (payloads: true), as base64 encoded bytes

If the requested information wasn't stored in the index, it will be computed on the fly if possible. Term vectors can also be computed for documents that don't exist in the index at all and are instead provided by the user.

Start and end offsets assume UTF-16 encoding is being used. If you want to use these offsets in order to get the original text that produced this token, you should make sure that the string you are taking a sub-string of is also encoded using UTF-16.
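For example, a small Python sketch: native Python strings index by code point rather than by UTF-16 code unit, so the text is encoded to UTF-16 before slicing (the text and offsets here are made-up values):

# Recover a token from UTF-16 based start/end offsets.
text = "Hello 😀 world"
start_offset, end_offset = 9, 14  # offsets in UTF-16 code units

utf16 = text.encode("utf-16-le")  # 2 bytes per UTF-16 code unit
token = utf16[2 * start_offset : 2 * end_offset].decode("utf-16-le")
print(token)  # "world"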

Behaviour

The term and field statistics are not accurate. Deleted documents are not taken into account. The information is only retrieved for the shard the requested document resides in. The term and field statistics are therefore only useful as relative measures whereas the absolute numbers have no meaning in this context. By default, when requesting term vectors of artificial documents, a shard to get the statistics from is randomly selected. Use routing only to hit a particular shard. Refer to the linked documentation for detailed examples of how to use this API.

Required authorization

  • Index privileges: read
External documentation

Path parameters

  • index string Required

    The name of the index that contains the document.

  • id string Required

    A unique identifier for the document.

Query parameters

  • fields string | array[string]

    A comma-separated list or wildcard expressions of fields to include in the statistics. It is used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.

  • field_statistics boolean

    If true, the response includes:

    • The document count (how many documents contain this field).
    • The sum of document frequencies (the sum of document frequencies for all terms in this field).
    • The sum of total term frequencies (the sum of total term frequencies of each term in this field).
  • offsets boolean

    If true, the response includes term offsets.

  • payloads boolean

    If true, the response includes term payloads.

  • positions boolean

    If true, the response includes term positions.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • term_statistics boolean

    If true, the response includes:

    • The total term frequency (how often a term occurs in all documents).
    • The document frequency (the number of documents containing the current term).

    By default these values are not returned since term statistics can have a serious performance impact.

  • version number

    If true, returns the document version as part of a hit.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal to or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

application/json

Body

  • doc object

    An artificial document (a document not present in the index) for which you want to retrieve term vectors.

  • filter object

    Filter terms based on their tf-idf scores. This could be useful in order to find out a good characteristic vector of a document. This feature works in a similar manner to the second phase of the More Like This Query.

    Hide filter attributes Show filter attributes object
    • max_doc_freq number

      Ignore words which occur in more than this many docs. Defaults to unbounded.

    • max_num_terms number

      The maximum number of terms to return per field.

      Default value is 25.

    • max_term_freq number

      Ignore words with more than this frequency in the source doc. It defaults to unbounded.

    • max_word_length number

      The maximum word length above which words will be ignored. Defaults to unbounded.

      Default value is 0.

    • min_doc_freq number

      Ignore terms which do not occur in at least this many docs.

      Default value is 1.

    • min_term_freq number

      Ignore words with less than this frequency in the source doc.

      Default value is 1.

    • min_word_length number

      The minimum word length below which words will be ignored.

      Default value is 0.

  • per_field_analyzer object

    Override the default per-field analyzer. This is useful in order to generate term vectors in any fashion, especially when using artificial documents. When providing an analyzer for a field that already stores term vectors, the term vectors will be regenerated.

    Hide per_field_analyzer attribute Show per_field_analyzer attribute object
    • * string Additional properties
  • fields array[string]

    A list of fields to include in the statistics. It is used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.

  • field_statistics boolean

    If true, the response includes:

    • The document count (how many documents contain this field).
    • The sum of document frequencies (the sum of document frequencies for all terms in this field).
    • The sum of total term frequencies (the sum of total term frequencies of each term in this field).

    Default value is true.

  • offsets boolean

    If true, the response includes term offsets.

    Default value is true.

  • payloads boolean

    If true, the response includes term payloads.

    Default value is true.

  • positions boolean

    If true, the response includes term positions.

    Default value is true.

  • term_statistics boolean

    If true, the response includes:

    • The total term frequency (how often a term occurs in all documents).
    • The document frequency (the number of documents containing the current term).

    By default these values are not returned since term statistics can have a serious performance impact.

    Default value is false.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • version number

    If true, returns the document version as part of a hit.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal to or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • found boolean Required
    • _id string
    • _index string Required
    • term_vectors object
      Hide term_vectors attribute Show term_vectors attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • field_statistics object
          Hide field_statistics attributes Show field_statistics attributes object
          • doc_count number Required
          • sum_doc_freq number Required
          • sum_ttf number Required
        • terms object Required
          Hide terms attribute Show terms attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • doc_freq number
            • score number
            • term_freq number Required
            • tokens array[object]
            • ttf number
    • took number Required
    • _version number Required
GET /my-index-000001/_termvectors/1
{
  "fields" : ["text"],
  "offsets" : true,
  "payloads" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
resp = client.termvectors(
    index="my-index-000001",
    id="1",
    fields=[
        "text"
    ],
    offsets=True,
    payloads=True,
    positions=True,
    term_statistics=True,
    field_statistics=True,
)
const response = await client.termvectors({
  index: "my-index-000001",
  id: 1,
  fields: ["text"],
  offsets: true,
  payloads: true,
  positions: true,
  term_statistics: true,
  field_statistics: true,
});
response = client.termvectors(
  index: "my-index-000001",
  id: "1",
  body: {
    "fields": [
      "text"
    ],
    "offsets": true,
    "payloads": true,
    "positions": true,
    "term_statistics": true,
    "field_statistics": true
  }
)
$resp = $client->termvectors([
    "index" => "my-index-000001",
    "id" => "1",
    "body" => [
        "fields" => array(
            "text",
        ),
        "offsets" => true,
        "payloads" => true,
        "positions" => true,
        "term_statistics" => true,
        "field_statistics" => true,
    ],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"fields":["text"],"offsets":true,"payloads":true,"positions":true,"term_statistics":true,"field_statistics":true}' "$ELASTICSEARCH_URL/my-index-000001/_termvectors/1"
client.termvectors(t -> t
    .fieldStatistics(true)
    .fields("text")
    .id("1")
    .index("my-index-000001")
    .offsets(true)
    .payloads(true)
    .positions(true)
    .termStatistics(true)
);
Run `GET /my-index-000001/_termvectors/1` to return all information and statistics for field `text` in document 1.
{
  "fields" : ["text"],
  "offsets" : true,
  "payloads" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
Run `GET /my-index-000001/_termvectors/1` to set per-field analyzers. A different analyzer than the one configured for the field may be provided by using the `per_field_analyzer` parameter.
{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "test test test"
  },
  "fields": ["fullname"],
  "per_field_analyzer" : {
    "fullname": "keyword"
  }
}
Run `GET /imdb/_termvectors` to filter the terms returned based on their tf-idf scores. It returns the three most "interesting" keywords from the artificial document having the given "plot" field value. Notice that the keyword "Tony" or any stop words are not part of the response, as their tf-idf must be too low.
{
  "doc": {
    "plot": "When wealthy industrialist Tony Stark is forced to build an armored suit after a life-threatening incident, he ultimately decides to use its technology to fight against evil."
  },
  "term_statistics": true,
  "field_statistics": true,
  "positions": false,
  "offsets": false,
  "filter": {
    "max_num_terms": 3,
    "min_term_freq": 1,
    "min_doc_freq": 1
  }
}
Run `GET /my-index-000001/_termvectors/1`. Term vectors which are not explicitly stored in the index are automatically computed on the fly. This request returns all information and statistics for the fields in document 1, even though the terms haven't been explicitly stored in the index. Note that for the field text, the terms are not regenerated.
{
  "fields" : ["text", "some_field_without_term_vectors"],
  "offsets" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
Run `GET /my-index-000001/_termvectors`. Term vectors can be generated for artificial documents, that is, for documents not present in the index. If dynamic mapping is turned on (the default), document fields not in the original mapping will be dynamically created.
{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "test test test"
  }
}
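A sketch of the same artificial-document request with the Python client; the body fields map directly to keyword arguments (the endpoint is an assumption):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed endpoint

# Term vectors for a document that is not stored in the index.
resp = client.termvectors(
    index="my-index-000001",
    doc={"fullname": "John Doe", "text": "test test test"},
)
print(list(resp["term_vectors"]))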
Response examples (200)
A successful response from `GET /my-index-000001/_termvectors/1`.
{
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 1,
  "found": true,
  "took": 6,
  "term_vectors": {
    "text": {
      "field_statistics": {
        "sum_doc_freq": 4,
        "doc_count": 2,
        "sum_ttf": 6
      },
      "terms": {
        "test": {
          "doc_freq": 2,
          "ttf": 4,
          "term_freq": 3,
          "tokens": [
            {
              "position": 0,
              "start_offset": 0,
              "end_offset": 4,
              "payload": "d29yZA=="
            },
            {
              "position": 1,
              "start_offset": 5,
              "end_offset": 9,
              "payload": "d29yZA=="
            },
            {
              "position": 2,
              "start_offset": 10,
              "end_offset": 14,
              "payload": "d29yZA=="
            }
          ]
        }
      }
    }
  }
}
A successful response from `GET /my-index-000001/_termvectors` with `per_field_analyzer` in the request body.
{
  "_index": "my-index-000001",
  "_version": 0,
  "found": true,
  "took": 6,
  "term_vectors": {
    "fullname": {
      "field_statistics": {
          "sum_doc_freq": 2,
          "doc_count": 4,
          "sum_ttf": 4
      },
      "terms": {
          "John Doe": {
            "term_freq": 1,
            "tokens": [
                {
                  "position": 0,
                  "start_offset": 0,
                  "end_offset": 8
                }
            ]
          }
      }
    }
  }
}
A successful response from `GET /my-index-000001/_termvectors` with a `filter` in the request body.
{
  "_index": "imdb",
  "_version": 0,
  "found": true,
  "term_vectors": {
      "plot": {
        "field_statistics": {
            "sum_doc_freq": 3384269,
            "doc_count": 176214,
            "sum_ttf": 3753460
        },
        "terms": {
            "armored": {
              "doc_freq": 27,
              "ttf": 27,
              "term_freq": 1,
              "score": 9.74725
            },
            "industrialist": {
              "doc_freq": 88,
              "ttf": 88,
              "term_freq": 1,
              "score": 8.590818
            },
            "stark": {
              "doc_freq": 44,
              "ttf": 47,
              "term_freq": 1,
              "score": 9.272792
            }
        }
      }
  }
}

Update a document Generally available

POST /{index}/_update/{id}

Update a document by running a script or passing a partial document.

If the Elasticsearch security features are enabled, you must have the index or write index privilege for the target index or index alias.

The script can update, delete, or skip modifying the document. The API also supports passing a partial document, which is merged into the existing document. To fully replace an existing document, use the index API. This operation:

  • Gets the document (collocated with the shard) from the index.
  • Runs the specified script.
  • Indexes the result.

The document must still be reindexed, but using this API removes some network roundtrips and reduces chances of version conflicts between the GET and the index operation.

The _source field must be enabled to use this API. In addition to _source, you can access the following variables through the ctx map: _index, _type, _id, _version, _routing, and _now (the current timestamp). For usage examples such as partial updates, upserts, and scripted updates, see the External documentation.
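For instance, a minimal sketch with the Python client of a scripted update that also retries the get/script/index cycle on version conflicts (the endpoint, index, and field names are assumptions):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed endpoint

resp = client.update(
    index="my-index-000001",
    id="1",
    script={
        "source": "ctx._source.counter += params.count",
        "lang": "painless",
        "params": {"count": 1},
    },
    retry_on_conflict=3,  # re-run the cycle if the version changed in between
)
print(resp["result"])  # e.g. "updated"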

Required authorization

  • Index privileges: write
External documentation

Path parameters

  • index string Required

    The name of the target index. By default, the index is created automatically if it doesn't exist.

  • id string Required

    A unique identifier for the document to be updated.

Query parameters

  • if_primary_term number

    Only perform the operation if the document has this primary term.

  • if_seq_no number

    Only perform the operation if the document has this sequence number.

  • include_source_on_error boolean

    If true, the document source is included in the error message in the case of parsing errors.

  • lang string

    The script language.

  • refresh string

    If 'true', Elasticsearch refreshes the affected shards to make this operation visible to search. If 'wait_for', it waits for a refresh to make this operation visible to search. If 'false', it does nothing with refreshes.

    Values are true, false, or wait_for.

  • require_alias boolean

    If true, the destination must be an index alias.

  • retry_on_conflict number

    The number of times the operation should be retried when a conflict occurs.

  • routing string

    A custom value used to route operations to a specific shard.

  • timeout string

    The period to wait for the following operations: dynamic mapping updates and waiting for active shards. Elasticsearch waits for at least the timeout period before failing. The actual wait time could be longer, particularly when multiple waits occur.

    Values are -1 or 0.

  • wait_for_active_shards number | string

    The number of copies of each shard that must be active before proceeding with the operation. Set to 'all' or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active.

    Values are all or index-setting.

  • _source boolean | string | array[string]

    If false, source retrieval is turned off. You can also specify a comma-separated list of the fields you want to retrieve.

  • _source_excludes string | array[string]

    The source fields you want to exclude.

  • _source_includes string | array[string]

    The source fields you want to retrieve.

application/json

Body Required

  • detect_noop boolean

    If true, the result in the response is set to noop (no operation) when there are no changes to the document.

    Default value is true.

  • doc object

    A partial update to an existing document. If both doc and script are specified, doc is ignored.

  • doc_as_upsert boolean

    If true, use the contents of 'doc' as the value of 'upsert'. NOTE: Using ingest pipelines with doc_as_upsert is not supported.

    Default value is false.

  • script object

    The script to run to update the document.

    Hide script attributes Show script attributes object
    • source string | object

      The script source.

      One of:

      The script source.

    • id string

      The id for a stored script.

    • params object

      Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

      Hide params attribute Show params attribute object
      • * object Additional properties
    • lang string

      Specifies the language the script is written in.

      Supported values include:

      • painless: Painless scripting language, purpose-built for Elasticsearch.
      • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
      • mustache: Mustache templating language, used for templates.
      • java: Expert Java API
      Any of:

      Specifies the language the script is written in.

      Supported values include:

      • painless: Painless scripting language, purpose-built for Elasticsearch.
      • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
      • mustache: Mustache templating language, used for templates.
      • java: Expert Java API

      Values are painless, expression, mustache, or java.

    • options object
      Hide options attribute Show options attribute object
      • * string Additional properties
  • scripted_upsert boolean

    If true, run the script whether or not the document exists.

    Default value is false.

  • _source boolean | object

    If false, turn off source retrieval. You can also specify a comma-separated list of the fields you want to retrieve.

    One of:

    If false, turn off source retrieval. You can also specify a comma-separated list of the fields you want to retrieve.

  • upsert object

    If the document does not already exist, the contents of 'upsert' are inserted as a new document. If the document exists, the 'script' is run.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _id string Required

      The unique identifier for the added document.

    • _index string Required

      The name of the index the document was added to.

    • _primary_term number

      The primary term assigned to the document for the indexing operation.

    • result string Required

      The result of the indexing operation: created or updated.

      Values are created, updated, deleted, not_found, or noop.

    • _seq_no number

      The sequence number assigned to the document for the indexing operation. Sequence numbers are used to ensure an older version of a document doesn't overwrite a newer version.

    • _shards object Required

      Information about the replication process of the operation.

      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index
        • node string
        • reason
        • shard number
        • status string
        • primary boolean
      • skipped number
    • _version number Required

      The document version, which is incremented each time the document is updated.

    • forced_refresh boolean
    • get object
      Hide get attributes Show get attributes object
      • fields object
        Hide fields attribute Show fields attribute object
        • * object Additional properties
      • found boolean Required
      • _seq_no number
      • _primary_term number
      • _routing string
      • _source object
POST test/_update/1
{
  "script" : {
    "source": "ctx._source.counter += params.count",
    "lang": "painless",
    "params" : {
      "count" : 4
    }
  }
}
resp = client.update(
    index="test",
    id="1",
    script={
        "source": "ctx._source.counter += params.count",
        "lang": "painless",
        "params": {
            "count": 4
        }
    },
)
const response = await client.update({
  index: "test",
  id: 1,
  script: {
    source: "ctx._source.counter += params.count",
    lang: "painless",
    params: {
      count: 4,
    },
  },
});
response = client.update(
  index: "test",
  id: "1",
  body: {
    "script": {
      "source": "ctx._source.counter += params.count",
      "lang": "painless",
      "params": {
        "count": 4
      }
    }
  }
)
$resp = $client->update([
    "index" => "test",
    "id" => "1",
    "body" => [
        "script" => [
            "source" => "ctx._source.counter += params.count",
            "lang" => "painless",
            "params" => [
                "count" => 4,
            ],
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"script":{"source":"ctx._source.counter += params.count","lang":"painless","params":{"count":4}}}' "$ELASTICSEARCH_URL/test/_update/1"
client.update(u -> u
    .id("1")
    .index("test")
    .script(s -> s
        .source(so -> so
            .scriptString("ctx._source.counter += params.count")
        )
        .params("count", JsonData.fromJson("4"))
        .lang("painless")
    )
,Void.class);
Run `POST test/_update/1` to increment a counter by using a script.
{
  "script" : {
    "source": "ctx._source.counter += params.count",
    "lang": "painless",
    "params" : {
      "count" : 4
    }
  }
}
Run `POST test/_update/1` to perform a scripted upsert. When `scripted_upsert` is `true`, the script runs whether or not the document exists.
{
  "scripted_upsert": true,
  "script": {
    "source": """
      if ( ctx.op == 'create' ) {
        ctx._source.counter = params.count
      } else {
        ctx._source.counter += params.count
      }
    """,
    "params": {
      "count": 4
    }
  },
  "upsert": {}
}
Run `POST test/_update/1` to perform a doc as upsert. Instead of sending a partial `doc` plus an `upsert` doc, you can set `doc_as_upsert` to `true` to use the contents of `doc` as the `upsert` value.
{
  "doc": {
    "name": "new_name"
  },
  "doc_as_upsert": true
}
Run `POST test/_update/1` to use a script to add a tag to a list of tags. In this example, it is just a list, so the tag is added even if it already exists.
{
  "script": {
    "source": "ctx._source.tags.add(params.tag)",
    "lang": "painless",
    "params": {
      "tag": "blue"
    }
  }
}
Run `POST test/_update/1` to use a script to remove a tag from a list of tags. The Painless function to remove a tag takes the array index of the element you want to remove. To avoid a possible runtime error, you first need to make sure the tag exists. If the list contains duplicates of the tag, this script just removes one occurrence.
{
  "script": {
    "source": "if (ctx._source.tags.contains(params.tag)) { ctx._source.tags.remove(ctx._source.tags.indexOf(params.tag)) }",
    "lang": "painless",
    "params": {
      "tag": "blue"
    }
  }
}
Run `POST test/_update/1` to use a script to add a field `new_field` to the document.
{
  "script" : "ctx._source.new_field = 'value_of_new_field'"
}
Run `POST test/_update/1` to use a script to remove a field `new_field` from the document.
{
  "script" : "ctx._source.remove('new_field')"
}
Run `POST test/_update/1` to use a script to remove a subfield from an object field.
{
  "script": "ctx._source['my-object'].remove('my-subfield')"
}
Run `POST test/_update/1` to change the operation that runs from within the script. For example, this request deletes the document if the `tags` field contains `green`, otherwise it does nothing (`noop`).
{
  "script": {
    "source": "if (ctx._source.tags.contains(params.tag)) { ctx.op = 'delete' } else { ctx.op = 'noop' }",
    "lang": "painless",
    "params": {
      "tag": "green"
    }
  }
}
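Since the script chooses the operation, a caller can branch on the `result` field of the response. A sketch with the Python client, reusing the request body above (`client` as in the earlier examples):

resp = client.update(
    index="test",
    id="1",
    script={
        "source": "if (ctx._source.tags.contains(params.tag)) "
        "{ ctx.op = 'delete' } else { ctx.op = 'noop' }",
        "lang": "painless",
        "params": {"tag": "green"},
    },
)
if resp["result"] == "deleted":
    print("document removed")
elif resp["result"] == "noop":
    print("document left untouched")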
Run `POST test/_update/1` to do a partial update that adds a new field to the existing document.
{
  "doc": {
    "name": "new_name"
  }
}
Run `POST test/_update/1` to perform an upsert. If the document does not already exist, the contents of the upsert element are inserted as a new document. If the document exists, the script is run.
{
  "script": {
    "source": "ctx._source.counter += params.count",
    "lang": "painless",
    "params": {
      "count": 4
    }
  },
  "upsert": {
    "counter": 1
  }
}
Response examples (200)
By default, updates that don't change anything are detected as no-ops and return `"result": "noop"`.
{
   "_shards": {
        "total": 0,
        "successful": 0,
        "failed": 0
   },
   "_index": "test",
   "_id": "1",
   "_version": 2,
   "_primary_term": 1,
   "_seq_no": 1,
   "result": "noop"
}

Create an enrich policy Generally available

PUT /_enrich/policy/{name}

Creates an enrich policy.

Path parameters

  • name string Required

    Name of the enrich policy to create or update.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

application/json

Body Required

  • geo_match object Additional properties

    Matches enrich data to incoming documents based on a geo_shape query.

    Hide geo_match attributes Show geo_match attributes object
    • enrich_fields string | array[string] Required
    • indices string | array[string] Required
    • match_field string Required

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • name string
    • elasticsearch_version string
  • match object Additional properties

    Matches enrich data to incoming documents based on a term query.

    Hide match attributes Show match attributes object
    • enrich_fields string | array[string] Required
    • indices string | array[string] Required
    • match_field string Required

Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • name string
    • elasticsearch_version string
  • range object Additional properties

    Matches a number, date, or IP address in incoming documents to a range in the enrich index based on a term query.

    Hide range attributes Show range attributes object
    • enrich_fields string | array[string] Required
    • indices string | array[string] Required
    • match_field string Required

Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • name string
    • elasticsearch_version string

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /_enrich/policy/postal_policy
{
  "geo_match": {
    "indices": "postal_codes",
    "match_field": "location",
    "enrich_fields": [ "location", "postal_code" ]
  }
}
resp = client.enrich.put_policy(
    name="postal_policy",
    geo_match={
        "indices": "postal_codes",
        "match_field": "location",
        "enrich_fields": [
            "location",
            "postal_code"
        ]
    },
)
const response = await client.enrich.putPolicy({
  name: "postal_policy",
  geo_match: {
    indices: "postal_codes",
    match_field: "location",
    enrich_fields: ["location", "postal_code"],
  },
});
response = client.enrich.put_policy(
  name: "postal_policy",
  body: {
    "geo_match": {
      "indices": "postal_codes",
      "match_field": "location",
      "enrich_fields": [
        "location",
        "postal_code"
      ]
    }
  }
)
$resp = $client->enrich()->putPolicy([
    "name" => "postal_policy",
    "body" => [
        "geo_match" => [
            "indices" => "postal_codes",
            "match_field" => "location",
            "enrich_fields" => array(
                "location",
                "postal_code",
            ),
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"geo_match":{"indices":"postal_codes","match_field":"location","enrich_fields":["location","postal_code"]}}' "$ELASTICSEARCH_URL/_enrich/policy/postal_policy"
client.enrich().putPolicy(p -> p
    .geoMatch(g -> g
        .enrichFields(List.of("location","postal_code"))
        .indices("postal_codes")
        .matchField("location")
    )
    .name("postal_policy")
);
Request example
An example body for a `PUT /_enrich/policy/postal_policy` request.
{
  "geo_match": {
    "indices": "postal_codes",
    "match_field": "location",
    "enrich_fields": [ "location", "postal_code" ]
  }
}
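For comparison, a minimal sketch of a `match` policy body, which pairs documents on exact term values rather than geo shapes; the `users` index and its fields here are illustrative, not part of the example above:
{
  "match": {
    "indices": "users",
    "match_field": "email",
    "enrich_fields": [ "first_name", "last_name" ]
  }
}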













Delete an async EQL search Generally available

DELETE /_eql/search/{id}

Delete an async EQL search or a stored synchronous EQL search. The API also deletes results for the search.

Path parameters

  • id string Required

    Identifier for the search to delete. A search ID is provided in the EQL search API's response for an async search. A search ID is also provided if the request’s keep_on_completion parameter is true.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_eql/search/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=
resp = client.eql.delete(
    id="FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
)
const response = await client.eql.delete({
  id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
});
response = client.eql.delete(
  id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE="
)
$resp = $client->eql()->delete([
    "id" => "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_eql/search/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE="
client.eql().delete(d -> d
    .id("FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=")
);








ES|QL

The Elasticsearch Query Language (ES|QL) provides a powerful way to filter, transform, and analyze data stored in Elasticsearch, and in the future in other runtimes.

Learn more about ES|QL




Get running ES|QL queries information Technical preview

GET /_query/queries

Returns an object containing IDs and other information about the running ES|QL queries.

Required authorization

  • Cluster privileges: monitor_esql

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • queries object Required
      Hide queries attribute Show queries attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • id number Required
        • node string Required
        • start_time_millis number Required
        • running_time_nanos number Required
        • query string Required
GET /_query/queries
curl \
 --request GET 'http://api.example.com/_query/queries' \
 --header "Authorization: $API_KEY"
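Based on the response schema above, a 200 response might look like the following sketch; the query ID, node name, timings, and query text are all illustrative:
{
  "queries": {
    "abc123def456": {
      "id": 4217,
      "node": "node-1",
      "start_time_millis": 1713000000000,
      "running_time_nanos": 13991852,
      "query": "FROM library | STATS COUNT(*)"
    }
  }
}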

Run an ES|QL query Generally available

POST /_query

Get search results for an ES|QL (Elasticsearch query language) query.

External documentation

Query parameters

  • format string

A short version of the Accept header, for example json or yaml (see the example after this list).

    csv, tsv, and txt formats will return results in a tabular format, excluding other metadata fields from the response.

    Values are csv, json, tsv, txt, yaml, cbor, smile, or arrow.

  • delimiter string

    The character to use between values within a CSV row. Only valid for the CSV format.

  • drop_null_columns boolean

If true, columns that are entirely null are removed from the columns and values portions of the results, and the response includes an extra all_columns section that lists the names of all columns. Defaults to false.

  • allow_partial_results boolean

    If true, partial results will be returned if there are shard failures, but the query can continue to execute on other clusters and shards. If false, the query will fail if there are any failures.

    To override the default behavior, you can set the esql.query.allow_partial_results cluster setting to false.
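For example, a sketch of a request that uses the format parameter to get comma-separated output; the query body borrows the `library` index from the request example below and is illustrative:

POST /_query?format=csv
{
  "query": "FROM library | KEEP author, page_count | SORT page_count DESC | LIMIT 5"
}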

application/json

Body Required

  • columnar boolean

    By default, ES|QL returns results as rows. For example, FROM returns each individual document as one row. For the JSON, YAML, CBOR and smile formats, ES|QL can return the results in a columnar fashion where one row represents all the values of a certain column in the results.

  • filter object

    Specify a Query DSL query in the filter parameter to filter the set of documents that an ES|QL query runs on.

    External documentation
  • locale string
  • params array[number | string | boolean | null]

To guard against injection attacks, pass values in a separate list of parameters rather than embedding them in the query string. Use question mark placeholders (?) in the query string for each parameter (see the example after this list).

  • profile boolean

    If provided and true the response will include an extra profile object with information on how the query was executed. This information is for human debugging and its format can change at any time but it can give some insight into the performance of each part of the query.

  • query string Required

    The ES|QL query API accepts an ES|QL query string in the query parameter, runs it, and returns the results.

  • tables object

    Tables to use with the LOOKUP operation. The top level key is the table name and the next level key is the column name.

    Hide tables attribute Show tables attribute object
  • include_ccs_metadata boolean

When set to true and performing a cross-cluster query, the response will include an extra _clusters object with information about the clusters that participated in the search, along with details such as shard counts.

    Default value is false.
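For example, a sketch of a parameterized query using question mark placeholders; the `library` index matches the request example below, while the `page_count` and `author` fields are illustrative:

POST /_query
{
  "query": "FROM library | WHERE page_count > ? AND author == ? | LIMIT 10",
  "params": [300, "Frank Herbert"]
}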

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number

Time, in milliseconds, that the query took to run.

    • is_partial boolean
    • all_columns array[object]
      Hide all_columns attributes Show all_columns attributes object
      • name string Required
      • type string Required
    • columns array[object] Required
      Hide columns attributes Show columns attributes object
      • name string Required
      • type string Required
    • values array[array] Required

A field value.

    • _clusters object

      Cross-cluster search information. Present if include_ccs_metadata was true in the request and a cross-cluster search was performed.

      Hide _clusters attributes Show _clusters attributes object
      • total number Required
      • successful number Required
      • running number Required
      • skipped number Required
      • partial number Required
      • failed number Required
      • details object Required
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • status string Required

            Values are running, successful, partial, skipped, or failed.

          • indices string Required
          • _shards object
          • failures array[object]
    • profile object

      Profiling information. Present if profile was true in the request. The contents of this field are currently unstable.

POST /_query
{
  "query": """
    FROM library,remote-*:library
    | EVAL year = DATE_TRUNC(1 YEARS, release_date)
    | STATS MAX(page_count) BY year
    | SORT year
    | LIMIT 5
  """,
  "include_ccs_metadata": true
}
resp = client.esql.query(
    query="\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
    include_ccs_metadata=True,
)
const response = await client.esql.query({
  query:
    "\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
  include_ccs_metadata: true,
});
response = client.esql.query(
  body: {
    "query": "\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
    "include_ccs_metadata": true
  }
)
$resp = $client->esql()->query([
    "body" => [
        "query" => "\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
        "include_ccs_metadata" => true,
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":"\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ","include_ccs_metadata":true}' "$ELASTICSEARCH_URL/_query"
client.esql().query(q -> q
    .includeCcsMetadata(true)
    .query(" FROM library,remote-*:library | EVAL year = DATE_TRUNC(1 YEARS, release_date) | STATS MAX(page_count) BY year | SORT year | LIMIT 5 ")
);
Request example
Run `POST /_query` to get results for an ES|QL query.
{
  "query": """
    FROM library,remote-*:library
    | EVAL year = DATE_TRUNC(1 YEARS, release_date)
    | STATS MAX(page_count) BY year
    | SORT year
    | LIMIT 5
  """,
  "include_ccs_metadata": true
}

Explore graph analytics Generally available

POST /{index}/_graph/explore

All methods and paths for this operation:

GET /{index}/_graph/explore

POST /{index}/_graph/explore

Extract and summarize information about the documents and terms in an Elasticsearch data stream or index. The easiest way to understand the behavior of this API is to use the Graph UI to explore connections. An initial request to the _explore API contains a seed query that identifies the documents of interest and specifies the fields that define the vertices and connections you want to include in the graph. Subsequent requests enable you to spider out from one or more vertices of interest. You can exclude vertices that have already been returned.

External documentation

Path parameters

  • index string | array[string] Required

    Name of the index.

Query parameters

  • routing string

    Custom value used to route operations to a specific shard.

  • timeout string

    Specifies the period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.

    Values are -1 or 0.

application/json

Body

  • connections object

Specifies one or more fields from which you want to extract terms that are associated with the specified vertices.

    Hide connections attributes Show connections attributes object
    • connections object

      Specifies one or more fields from which you want to extract terms that are associated with the specified vertices.

    • query object

      An optional guiding query that constrains the Graph API as it explores connected terms.

      External documentation
    • vertices array[object] Required

      Contains the fields you are interested in.

      Hide vertices attributes Show vertices attributes object
      • exclude array[string]

        Prevents the specified terms from being included in the results.

      • field string Required

        Identifies a field in the documents of interest.

      • include array[object]

        Identifies the terms of interest that form the starting points from which you want to spider out.

        Hide include attributes Show include attributes object
        • boost number
        • term string Required
      • min_doc_count number

        Specifies how many documents must contain a pair of terms before it is considered to be a useful connection. This setting acts as a certainty threshold.

        Default value is 3.

      • shard_min_doc_count number

        Controls how many documents on a particular shard have to contain a pair of terms before the connection is returned for global consideration.

        Default value is 2.

      • size number

        Specifies the maximum number of vertex terms returned for each field.

        Default value is 5.

  • controls object

Directs how the Graph API builds the graph (see the sketch after this list).

    Hide controls attributes Show controls attributes object
    • sample_diversity object

      To avoid the top-matching documents sample being dominated by a single source of results, it is sometimes necessary to request diversity in the sample. You can do this by selecting a single-value field and setting a maximum number of documents per value for that field.

      Hide sample_diversity attributes Show sample_diversity attributes object
      • field string Required

Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • max_docs_per_value number Required
    • sample_size number

      Each hop considers a sample of the best-matching documents on each shard. Using samples improves the speed of execution and keeps exploration focused on meaningfully-connected terms. Very small values (less than 50) might not provide sufficient weight-of-evidence to identify significant connections between terms. Very large sample sizes can dilute the quality of the results and increase execution times.

      Default value is 100.

    • timeout string

      The length of time in milliseconds after which exploration will be halted and the results gathered so far are returned. This timeout is honored on a best-effort basis. Execution might overrun this timeout if, for example, a long pause is encountered while FieldData is loaded for a field.

    • use_significance boolean Required

      Filters associated terms so only those that are significantly associated with your query are included.

  • query object

    A seed query that identifies the documents of interest. Can be any valid Elasticsearch query.

    External documentation
  • vertices array[object]

    Specifies one or more fields that contain the terms you want to include in the graph as vertices.

    Hide vertices attributes Show vertices attributes object
    • exclude array[string]

      Prevents the specified terms from being included in the results.

    • field string Required

      Identifies a field in the documents of interest.

    • include array[object]

      Identifies the terms of interest that form the starting points from which you want to spider out.

      Hide include attributes Show include attributes object
      • boost number
      • term string Required
    • min_doc_count number

      Specifies how many documents must contain a pair of terms before it is considered to be a useful connection. This setting acts as a certainty threshold.

      Default value is 3.

    • shard_min_doc_count number

      Controls how many documents on a particular shard have to contain a pair of terms before the connection is returned for global consideration.

      Default value is 2.

    • size number

      Specifies the maximum number of vertex terms returned for each field.

      Default value is 5.
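For example, a sketch of an explore request that tunes the controls; the `clicklogs` index and `product` field follow the request example below, while the `category.raw` diversity field and the sample size are illustrative:

POST clicklogs/_graph/explore
{
  "query": {
    "match": {
      "query.raw": "midi"
    }
  },
  "controls": {
    "use_significance": true,
    "sample_size": 2000,
    "sample_diversity": {
      "field": "category.raw",
      "max_docs_per_value": 500
    }
  },
  "vertices": [
    {
      "field": "product"
    }
  ]
}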

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • connections array[object] Required
      Hide connections attributes Show connections attributes object
      • doc_count number Required
      • source number Required
      • target number Required
      • weight number Required
    • failures array[object] Required
      Hide failures attributes Show failures attributes object
      • index string
      • node string
      • reason object Required

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide reason attributes Show reason attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • shard number
      • status string
      • primary boolean
    • timed_out boolean Required
    • took number Required
    • vertices array[object] Required
      Hide vertices attributes Show vertices attributes object
      • depth number Required
      • field string Required

Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • term string Required
      • weight number Required
POST clicklogs/_graph/explore
{
  "query": {
    "match": {
      "query.raw": "midi"
    }
  },
  "vertices": [
    {
      "field": "product"
    }
  ],
  "connections": {
    "vertices": [
      {
        "field": "query.raw"
      }
    ]
  }
}
resp = client.graph.explore(
    index="clicklogs",
    query={
        "match": {
            "query.raw": "midi"
        }
    },
    vertices=[
        {
            "field": "product"
        }
    ],
    connections={
        "vertices": [
            {
                "field": "query.raw"
            }
        ]
    },
)
const response = await client.graph.explore({
  index: "clicklogs",
  query: {
    match: {
      "query.raw": "midi",
    },
  },
  vertices: [
    {
      field: "product",
    },
  ],
  connections: {
    vertices: [
      {
        field: "query.raw",
      },
    ],
  },
});
response = client.graph.explore(
  index: "clicklogs",
  body: {
    "query": {
      "match": {
        "query.raw": "midi"
      }
    },
    "vertices": [
      {
        "field": "product"
      }
    ],
    "connections": {
      "vertices": [
        {
          "field": "query.raw"
        }
      ]
    }
  }
)
$resp = $client->graph()->explore([
    "index" => "clicklogs",
    "body" => [
        "query" => [
            "match" => [
                "query.raw" => "midi",
            ],
        ],
        "vertices" => array(
            [
                "field" => "product",
            ],
        ),
        "connections" => [
            "vertices" => array(
                [
                    "field" => "query.raw",
                ],
            ),
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":{"match":{"query.raw":"midi"}},"vertices":[{"field":"product"}],"connections":{"vertices":[{"field":"query.raw"}]}}' "$ELASTICSEARCH_URL/clicklogs/_graph/explore"
client.graph().explore(e -> e
    .connections(c -> c
        .vertices(v -> v
            .field("query.raw")
        )
    )
    .index("clicklogs")
    .query(q -> q
        .match(m -> m
            .field("query.raw")
            .query(FieldValue.of("midi"))
        )
    )
    .vertices(v -> v
        .field("product")
    )
);
Request example
Run `POST clicklogs/_graph/explore` for a basic exploration. An initial graph explore query typically begins with a query to identify strongly related terms. Seed the exploration with a query; this example searches `clicklogs` for people who searched for the term `midi`. Identify the vertices to include in the graph; this example looks for product codes that are significantly associated with searches for `midi`. Finally, find the connections; this example looks for other search terms that led people to click on the products associated with searches for `midi`.
{
  "query": {
    "match": {
      "query.raw": "midi"
    }
  },
  "vertices": [
    {
      "field": "product"
    }
  ],
  "connections": {
    "vertices": [
      {
        "field": "query.raw"
      }
    ]
  }
}





Create or update a component template Generally available

POST /_component_template/{name}

All methods and paths for this operation:

PUT /_component_template/{name}

POST /_component_template/{name}

Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.

An index template can be composed of multiple component templates. To use a component template, specify it in an index template’s composed_of list. Component templates are only applied to new data streams and indices as part of a matching index template.

Settings and mappings specified directly in the index template or the create index request override any settings or mappings specified in a component template.

Component templates are only used during index creation. For data streams, this includes data stream creation and the creation of a stream’s backing indices. Changes to component templates do not affect existing indices, including a stream’s backing indices.

You can use C-style /* */ block comments in component templates. You can include comments anywhere in the request body except before the opening curly bracket.
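For example, a sketch with a block comment inside the request body; the template name is illustrative:

PUT _component_template/template_with_comment
{
  /* Mappings below are applied when this template is composed into an index template. */
  "template": {
    "mappings": {
      "properties": {
        "host_name": {
          "type": "keyword"
        }
      }
    }
  }
}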

Applying component templates

You cannot directly apply a component template to a data stream or index. To be applied, a component template must be included in an index template's composed_of list.
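For example, a sketch of an index template that applies a component template; `template_1` matches the request example below, while the index template name and index pattern are illustrative:

PUT _index_template/my-index-template
{
  "index_patterns": [ "my-data-*" ],
  "composed_of": [ "template_1" ]
}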

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

Name of the component template to create. Elasticsearch includes the following built-in component templates: logs-mappings, logs-settings, metrics-mappings, metrics-settings, synthetics-mappings, and synthetics-settings. Elastic Agent uses these templates to configure backing indices for its data streams. If you use Elastic Agent and want to overwrite one of these templates, set the version for your replacement template higher than the current version. If you don’t use Elastic Agent and want to disable all built-in component and index templates, set stack.templates.enabled to false using the cluster update settings API.

Query parameters

  • create boolean

    If true, this request cannot replace or update existing component templates.

  • cause string

User-defined reason for creating the component template.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

application/json

Body Required

  • template object Required

    The template to be applied which includes mappings, settings, or aliases configuration.

    Hide template attributes Show template attributes object
    • aliases object
      Hide aliases attribute Show aliases attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • filter object

          Query used to limit documents the alias can access.

          External documentation
        • index_routing string

          Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

        • is_hidden boolean

          If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

          Default value is false.

        • is_write_index boolean

          If true, the index is the write index for the alias.

          Default value is false.

        • routing string

          Value used to route indexing and search operations to a specific shard.

        • search_routing string

          Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

    • mappings object
      Hide mappings attributes Show mappings attributes object
      • all_field object
        Hide all_field attributes Show all_field attributes object
        • analyzer string Required
        • enabled boolean Required
        • omit_norms boolean Required
        • search_analyzer string Required
        • similarity string Required
        • store boolean Required
        • store_term_vector_offsets boolean Required
        • store_term_vector_payloads boolean Required
        • store_term_vector_positions boolean Required
        • store_term_vectors boolean Required
      • date_detection boolean
      • dynamic string

        Values are strict, runtime, true, or false.

      • dynamic_date_formats array[string]
      • dynamic_templates array[object]
      • _field_names object
        Hide _field_names attribute Show _field_names attribute object
        • enabled boolean Required
      • index_field object
        Hide index_field attribute Show index_field attribute object
        • enabled boolean Required
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • numeric_detection boolean
      • properties object
      • _routing object
        Hide _routing attribute Show _routing attribute object
        • required boolean Required
      • _size object
        Hide _size attribute Show _size attribute object
        • enabled boolean Required
      • _source object
        Hide _source attributes Show _source attributes object
        • compress boolean
        • compress_threshold string
        • enabled boolean
        • excludes array[string]
        • includes array[string]
      • runtime object
        Hide runtime attribute Show runtime attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • fields object

            For type composite

          • fetch_fields array[object]

            For type lookup

          • format string

            A custom format for date type runtime fields.

      • enabled boolean
      • subobjects string

        Values are true or false.

      • _data_stream_timestamp object
        Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
        • enabled boolean Required
    • settings object Additional properties
      Index settings
    • defaults object Additional properties

      Default settings, included when the request's include_default is true.

      Index settings
    • data_stream string
    • lifecycle object

      Data stream lifecycle applicable if this is a data stream.

      Hide lifecycle attributes Show lifecycle attributes object
      • data_retention string

        If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely.

      • downsampling object

        The downsampling configuration to execute for the managed backing index after rollover.

        Hide downsampling attribute Show downsampling attribute object
        • rounds array[object] Required

          The list of downsampling rounds to execute as part of this downsampling configuration

      • enabled boolean

        If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

        Default value is true.

  • version number

    Version number used to manage component templates externally. This number isn't automatically generated or incremented by Elasticsearch. To unset a version, replace the template without specifying a version.

  • _meta object

Optional user metadata about the component template. It may have any contents. This map is not automatically generated by Elasticsearch. This information is stored in the cluster state, so keeping it short is preferable. To unset _meta, replace the template without specifying this information (see the sketch after this list).

    Hide _meta attribute Show _meta attribute object
    • * object Additional properties
  • deprecated boolean

Marks this component template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning.
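For example, a sketch of a request body that sets an external version and metadata; the `_meta` contents are illustrative:

{
  "template": {
    "settings": {
      "number_of_shards": 1
    }
  },
  "version": 3,
  "_meta": {
    "description": "settings and mappings for my app",
    "managed-by": "platform-team"
  }
}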

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT _component_template/template_1
{
  "template": {
    "settings": {
      "number_of_shards": 1
    },
    "mappings": {
      "_source": {
        "enabled": false
      },
      "properties": {
        "host_name": {
          "type": "keyword"
        },
        "created_at": {
          "type": "date",
          "format": "EEE MMM dd HH:mm:ss Z yyyy"
        }
      }
    }
  }
}
resp = client.cluster.put_component_template(
    name="template_1",
    template={
        "settings": {
            "number_of_shards": 1
        },
        "mappings": {
            "_source": {
                "enabled": False
            },
            "properties": {
                "host_name": {
                    "type": "keyword"
                },
                "created_at": {
                    "type": "date",
                    "format": "EEE MMM dd HH:mm:ss Z yyyy"
                }
            }
        }
    },
)
const response = await client.cluster.putComponentTemplate({
  name: "template_1",
  template: {
    settings: {
      number_of_shards: 1,
    },
    mappings: {
      _source: {
        enabled: false,
      },
      properties: {
        host_name: {
          type: "keyword",
        },
        created_at: {
          type: "date",
          format: "EEE MMM dd HH:mm:ss Z yyyy",
        },
      },
    },
  },
});
response = client.cluster.put_component_template(
  name: "template_1",
  body: {
    "template": {
      "settings": {
        "number_of_shards": 1
      },
      "mappings": {
        "_source": {
          "enabled": false
        },
        "properties": {
          "host_name": {
            "type": "keyword"
          },
          "created_at": {
            "type": "date",
            "format": "EEE MMM dd HH:mm:ss Z yyyy"
          }
        }
      }
    }
  }
)
$resp = $client->cluster()->putComponentTemplate([
    "name" => "template_1",
    "body" => [
        "template" => [
            "settings" => [
                "number_of_shards" => 1,
            ],
            "mappings" => [
                "_source" => [
                    "enabled" => false,
                ],
                "properties" => [
                    "host_name" => [
                        "type" => "keyword",
                    ],
                    "created_at" => [
                        "type" => "date",
                        "format" => "EEE MMM dd HH:mm:ss Z yyyy",
                    ],
                ],
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"template":{"settings":{"number_of_shards":1},"mappings":{"_source":{"enabled":false},"properties":{"host_name":{"type":"keyword"},"created_at":{"type":"date","format":"EEE MMM dd HH:mm:ss Z yyyy"}}}}}' "$ELASTICSEARCH_URL/_component_template/template_1"
Request examples
{
  "template": {
    "settings": {
      "number_of_shards": 1
    },
    "mappings": {
      "_source": {
        "enabled": false
      },
      "properties": {
        "host_name": {
          "type": "keyword"
        },
        "created_at": {
          "type": "date",
          "format": "EEE MMM dd HH:mm:ss Z yyyy"
        }
      }
    }
  }
}
You can include index aliases in a component template. During index creation, the `{index}` placeholder in the alias name will be replaced with the actual index name that the template gets applied to.
{
  "template": {
    "settings": {
      "number_of_shards": 1
    },
    "aliases": {
      "alias1": {},
      "alias2": {
        "filter": {
          "term": {
            "user.id": "kimchy"
          }
        },
        "routing": "shard-1"
      },
      "{index}-alias": {}
    }
  }
}