Get field data cache information

GET /_cat/fielddata

Get the amount of heap memory currently used by the field data cache on every data node in the cluster.

IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes stats API.
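
For example, an application that needs the same per-node field data heap usage could call the nodes stats API instead. A minimal sketch, assuming a field named `body`:

GET /_nodes/stats/indices/fielddata?fields=body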

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • fields string | array[string]

    Comma-separated list of fields used to limit returned information.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
GET /_cat/fielddata
curl \
 --request GET 'http://api.example.com/_cat/fielddata' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/fielddata?v=true&fields=body&format=json`. You can specify an individual field in the request body or URL path. This example retrieves heap memory size information for the `body` field.
[
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "body",
    "size": "544b"
  }
]
A successful response from `GET /_cat/fielddata/body,soul?v=true&format=json`. You can specify a comma-separated list of fields in the request body or URL path. This example retrieves heap memory size information for the `body` and `soul` fields. To get information for all fields, run `GET /_cat/fielddata?v=true`.
[
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "1127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "body",
    "size": "544b"
  },
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "soul",
    "size": "480b"
  }
]
















Get index information

GET /_cat/indices/{index}

Get high-level information about indices in a cluster, including backing indices for data streams.

Use this request to get the following information for each index in a cluster:

  • shard count
  • document count
  • deleted document count
  • primary store size
  • total store size of all shards, including shard replicas

These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs.
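
For example, a sketch of such a request, assuming an index named `my-index-000001`:

GET /_cat/count/my-index-000001?v=true&format=json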

CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • health string

    The health status used to limit returned indices. By default, the response includes indices of any health status.

    Supported values include:

    • green (or GREEN): All shards are assigned.
    • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
    • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.

    Values are green, GREEN, yellow, YELLOW, red, or RED.

  • include_unloaded_segments boolean

    If true, the response includes information from segments that are not loaded into memory.

  • pri boolean

    If true, the response only includes information from primary shards.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • master_timeout string

    Period to wait for a connection to the master node.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
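
For example, the following sketch returns only the index name and store size columns, sorted by store size in descending order (the column choice is only an illustration):

GET /_cat/indices?v=true&h=index,store.size&s=store.size:desc&format=json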

Responses

GET /_cat/indices/{index}
curl \
 --request GET 'http://api.example.com/_cat/indices/{index}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/indices/my-index-*?v=true&s=index&format=json`.
[
  {
    "health": "yellow",
    "status": "open",
    "index": "my-index-000001",
    "uuid": "u8FNjxh8Rfy_awN11oDKYQ",
    "pri": "1",
    "rep": "1",
    "docs.count": "1200",
    "docs.deleted": "0",
    "store.size": "88.1kb",
    "pri.store.size": "88.1kb",
    "dataset.size": "88.1kb"
  },
  {
    "health": "green",
    "status": "open",
    "index": "my-index-000002",
    "uuid": "nYFWZEO7TUiOjLQXBaYJpA ",
    "pri": "1",
    "rep": "0",
    "docs.count": "0",
    "docs.deleted": "0",
    "store.size": "260b",
    "pri.store.size": "260b",
    "dataset.size": "260b"
  }
]

















































































































































Get the cluster health status Added in 1.3.0

GET /_cluster/health/{index}

Get the health status of a cluster. You can also use the API to get the health status of only specified data streams and indices. For data streams, the API retrieves the health status of the stream’s backing indices.

The cluster health status is: green, yellow or red. On the shard level, a red status indicates that the specific shard is not allocated in the cluster. Yellow means that the primary shard is allocated but replicas are not. Green means that all shards are allocated. The index level status is controlled by the worst shard status.

One of the main benefits of the API is the ability to wait until the cluster reaches a certain high watermark health level. The cluster status is controlled by the worst index status.
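
For example, the following sketch waits up to 50 seconds for a hypothetical index to reach at least yellow status before returning:

GET /_cluster/health/my-index-000001?wait_for_status=yellow&timeout=50s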

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and index aliases used to limit the request. Wildcard expressions (*) are supported. To target all data streams and indices in a cluster, omit this parameter or use _all or *.

Query parameters

  • expand_wildcards string | array[string]

    Whether to expand wildcard expression to concrete indices that are open, closed or both.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • level string

    Can be one of cluster, indices or shards. Controls the details level of the health information returned.

    Values are cluster, indices, or shards.

  • local boolean

    If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

  • wait_for_active_shards number | string

    The number of active shards to wait for, all to wait for all shards in the cluster to be active, or 0 to not wait.

  • wait_for_events string

    Wait until all currently queued events with the given priority are processed. Can be one of immediate, urgent, high, normal, low, or languid.

    Values are immediate, urgent, high, normal, low, or languid.

  • wait_for_nodes string | number

    The request waits until the specified number N of nodes is available. It also accepts >=N, <=N, >N and <N. Alternatively, it is possible to use ge(N), le(N), gt(N) and lt(N) notation.

  • wait_for_no_initializing_shards boolean

    A boolean value which controls whether to wait (until the timeout provided) for the cluster to have no shard initializations. Defaults to false, which means it will not wait for initializing shards.

  • wait_for_no_relocating_shards boolean

    A boolean value which controls whether to wait (until the timeout provided) for the cluster to have no shard relocations. Defaults to false, which means it will not wait for relocating shards.

  • wait_for_status string

    One of green, yellow or red. Will wait (until the timeout provided) until the status of the cluster changes to the one provided or better, i.e. green > yellow > red. By default, will not wait for any status.

    Supported values include:

    • green (or GREEN): All shards are assigned.
    • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
    • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.

    Values are green, GREEN, yellow, YELLOW, red, or RED.

Responses

GET /_cluster/health/{index}
curl \
 --request GET 'http://api.example.com/_cluster/health/{index}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cluster/health`. It is the health status of a quiet single node cluster with a single index with one shard and one replica.
{
  "cluster_name" : "testcluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 1,
  "active_shards" : 1,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 50.0
}












































Get cluster repositories metering Technical preview

GET /_nodes/{node_id}/_repositories_metering

Get repositories metering information for a cluster. This API exposes monotonically non-decreasing counters and it is expected that clients would durably store the information needed to compute aggregations over a period of time. Additionally, the information exposed by this API is volatile, meaning that it will not be present after node restarts.

Path parameters

  • node_id string | array[string] Required

    Comma-separated list of node IDs or names used to limit returned information.

Responses

  • 200 application/json
    • _nodes object
      • failures array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required

      Contains repositories metering information for the nodes selected by the request.

      • * object Additional properties
        • repository_name string Required
        • repository_type string Required

          Repository type.

        • repository_location object Required
        • Time unit for milliseconds

        • Time unit for milliseconds

        • archived boolean Required

          A flag that tells whether or not this object has been archived. When a repository is closed or updated the repository metering information is archived and kept for a certain period of time. This allows retrieving the repository metering information of previous repository instantiations.

        • request_counts object Required
          • Number of Get Blob Properties requests (Azure)

          • GetBlob number

            Number of Get Blob requests (Azure)

          • Number of List Blobs requests (Azure)

          • PutBlob number

            Number of Put Blob requests (Azure)

          • PutBlock number

            Number of Put Block requests (Azure)

          • Number of Put Block List requests

          • Number of get object requests (GCP, S3)

          • Number of list objects requests (GCP, S3)

          • Number of insert object requests, including simple, multipart and resumable uploads. Resumable uploads can perform multiple http requests to insert a single object but they are considered as a single request since they are billed as an individual operation. (GCP)

          • Number of PutObject requests (S3)

          • Number of Multipart requests, including CreateMultipartUpload, UploadPart and CompleteMultipartUpload requests (S3)

GET /_nodes/{node_id}/_repositories_metering
curl \
 --request GET 'http://api.example.com/_nodes/{node_id}/_repositories_metering' \
 --header "Authorization: $API_KEY"










































































































Create a connector Beta

POST /_connector

Connectors are Elasticsearch integrations that bring content from third-party data sources, which can be deployed on Elastic Cloud or hosted on your own infrastructure. Elastic managed connectors (Native connectors) are a managed service on Elastic Cloud. Self-managed connectors (Connector clients) are self-managed on your infrastructure.
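
For example, a minimal sketch of a create request (the index name, connector name, and service type are only illustrations):

POST /_connector
{
  "index_name": "search-google-drive",
  "name": "My Connector",
  "service_type": "google_drive"
}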

application/json

Body

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

    • id string Required
POST /_connector
curl \
 --request POST 'http://api.example.com/_connector' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"description":"string","index_name":"string","is_native":true,"language":"string","name":"string","service_type":"string"}'




































Activate the connector draft filter Technical preview

PUT /_connector/{connector_id}/_filtering/_activate

Activates the valid draft filtering for a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_filtering/_activate
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_filtering/_activate' \
 --header "Authorization: $API_KEY"
























Update the connector index name Beta

PUT /_connector/{connector_id}/_index_name

Update the index_name field of a connector, specifying the index where the data ingested by the connector is stored.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_index_name
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_index_name' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n    \"index_name\": \"data-from-my-google-drive\"\n}"'
Request example
{
    "index_name": "data-from-my-google-drive"
}
Response examples (200)
{
  "result": "updated"
}
















Update the connector service type Beta

PUT /_connector/{connector_id}/_service_type

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_service_type
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_service_type' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n    \"service_type\": \"sharepoint_online\"\n}"'
Request example
{
    "service_type": "sharepoint_online"
}
Response examples (200)
{
  "result": "updated"
}


























































































Downsample an index Technical preview

POST /{index}/_downsample/{target_index}

Aggregate a time series (TSDS) index and store pre-computed statistical summaries (min, max, sum, value_count and avg) for each metric field grouped by a configured time interval. For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index. All documents within an hour interval are summarized and stored as a single document in the downsample index.

NOTE: Only indices in a time series data stream are supported. Neither field nor document level security can be defined on the source index. The source index must be read only (index.blocks.write: true).
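
For example, the write block can be added with the add index block API before downsampling. A sketch, assuming a backing index named `my-tsds-index`:

PUT /my-tsds-index/_block/write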

Path parameters

  • index string Required

    Name of the time series index to downsample.

  • target_index string Required

    Name of the index to create.

application/json

Body Required

  • fixed_interval string Required

    A date histogram interval. Similar to Duration with additional units: w (week), M (month), q (quarter) and y (year)

Responses

POST /{index}/_downsample/{target_index}
curl \
 --request POST 'http://api.example.com/{index}/_downsample/{target_index}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"fixed_interval\": \"1d\"\n}"'
Request example
{
  "fixed_interval": "1d"
}

































































































































Get term vector information

GET /{index}/_termvectors/{id}

Get information and statistics about terms in the fields of a particular document.

You can retrieve term vectors for documents stored in the index or for artificial documents passed in the body of the request. You can specify the fields you are interested in through the fields parameter or by adding the fields to the request body. For example:

GET /my-index-000001/_termvectors/1?fields=message

Fields can be specified using wildcards, similar to the multi match query.

Term vectors are real-time by default, not near real-time. This can be changed by setting the realtime parameter to false.

You can request three types of values: term information, term statistics, and field statistics. By default, all term information and field statistics are returned for all fields but term statistics are excluded.

Term information

  • term frequency in the field (always returned)
  • term positions (positions: true)
  • start and end offsets (offsets: true)
  • term payloads (payloads: true), as base64 encoded bytes

If the requested information wasn't stored in the index, it will be computed on the fly if possible. Additionally, term vectors could be computed for documents not even existing in the index, but instead provided by the user.


Start and end offsets assume UTF-16 encoding is being used. If you want to use these offsets in order to get the original text that produced this token, you should make sure that the string you are taking a sub-string of is also encoded using UTF-16.

Behaviour

The term and field statistics are not accurate. Deleted documents are not taken into account. The information is only retrieved for the shard the requested document resides in. The term and field statistics are therefore only useful as relative measures whereas the absolute numbers have no meaning in this context. By default, when requesting term vectors of artificial documents, a shard to get the statistics from is randomly selected. Use routing only to hit a particular shard.

Path parameters

  • index string Required

    The name of the index that contains the document.

  • id string Required

    A unique identifier for the document.

Query parameters

  • fields string | array[string]

    A comma-separated list or wildcard expressions of fields to include in the statistics. It is used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.

  • field_statistics boolean

    If true, the response includes:

    • The document count (how many documents contain this field).
    • The sum of document frequencies (the sum of document frequencies for all terms in this field).
    • The sum of total term frequencies (the sum of total term frequencies of each term in this field).
  • offsets boolean

    If true, the response includes term offsets.

  • payloads boolean

    If true, the response includes term payloads.

  • positions boolean

    If true, the response includes term positions.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • term_statistics boolean

    If true, the response includes:

    • The total term frequency (how often a term occurs in all documents).
    • The document frequency (the number of documents containing the current term).

    By default these values are not returned since term statistics can have a serious performance impact.

  • version number

    If true, returns the document version as part of a hit.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

application/json

Body

  • doc object

    An artificial document (a document not present in the index) for which you want to retrieve term vectors.

  • filter object
    • max_doc_freq number

      Ignore words which occur in more than this many docs. Defaults to unbounded.

    • max_num_terms number

      The maximum number of terms that must be returned per field.

    • max_term_freq number

      Ignore words with more than this frequency in the source doc. It defaults to unbounded.

    • max_word_length number

      The maximum word length above which words will be ignored. Defaults to unbounded.

    • min_doc_freq number

      Ignore terms which do not occur in at least this many docs.

    • min_term_freq number

      Ignore words with less than this frequency in the source doc.

    • min_word_length number

      The minimum word length below which words will be ignored.

  • per_field_analyzer object

    Override the default per-field analyzer. This is useful in order to generate term vectors in any fashion, especially when using artificial documents. When providing an analyzer for a field that already stores term vectors, the term vectors will be regenerated.

    • * string Additional properties
  • fields string | array[string]
  • field_statistics boolean

    If true, the response includes:

    • The document count (how many documents contain this field).
    • The sum of document frequencies (the sum of document frequencies for all terms in this field).
    • The sum of total term frequencies (the sum of total term frequencies of each term in this field).
  • offsets boolean

    If true, the response includes term offsets.

  • payloads boolean

    If true, the response includes term payloads.

  • positions boolean

    If true, the response includes term positions.

  • term_statistics boolean

    If true, the response includes:

    • The total term frequency (how often a term occurs in all documents).
    • The document frequency (the number of documents containing the current term).

    By default these values are not returned since term statistics can have a serious performance impact.

  • routing string
  • version number
  • version_type string

    Values are internal, external, external_gte, or force.

Responses

GET /{index}/_termvectors/{id}
curl \
 --request GET 'http://api.example.com/{index}/_termvectors/{id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"fields\" : [\"text\"],\n  \"offsets\" : true,\n  \"payloads\" : true,\n  \"positions\" : true,\n  \"term_statistics\" : true,\n  \"field_statistics\" : true\n}"'
Run `GET /my-index-000001/_termvectors/1` to return all information and statistics for field `text` in document 1.
{
  "fields" : ["text"],
  "offsets" : true,
  "payloads" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
Run `GET /my-index-000001/_termvectors/1` to set per-field analyzers. A different analyzer than the one at the field may be provided by using the `per_field_analyzer` parameter.
{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "test test test"
  },
  "fields": ["fullname"],
  "per_field_analyzer" : {
    "fullname": "keyword"
  }
}
Run `GET /imdb/_termvectors` to filter the terms returned based on their tf-idf scores. It returns the three most "interesting" keywords from the artificial document having the given "plot" field value. Notice that the keyword "Tony" or any stop words are not part of the response, as their tf-idf must be too low.
{
  "doc": {
    "plot": "When wealthy industrialist Tony Stark is forced to build an armored suit after a life-threatening incident, he ultimately decides to use its technology to fight against evil."
  },
  "term_statistics": true,
  "field_statistics": true,
  "positions": false,
  "offsets": false,
  "filter": {
    "max_num_terms": 3,
    "min_term_freq": 1,
    "min_doc_freq": 1
  }
}
Run `GET /my-index-000001/_termvectors/1`. Term vectors which are not explicitly stored in the index are automatically computed on the fly. This request returns all information and statistics for the fields in document 1, even though the terms haven't been explicitly stored in the index. Note that for the field text, the terms are not regenerated.
{
  "fields" : ["text", "some_field_without_term_vectors"],
  "offsets" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
Run `GET /my-index-000001/_termvectors`. Term vectors can be generated for artificial documents, that is for documents not present in the index. If dynamic mapping is turned on (default), the document fields not in the original mapping will be dynamically created.
{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "test test test"
  }
}
Response examples (200)
A successful response from `GET /my-index-000001/_termvectors/1`.
{
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 1,
  "found": true,
  "took": 6,
  "term_vectors": {
    "text": {
      "field_statistics": {
        "sum_doc_freq": 4,
        "doc_count": 2,
        "sum_ttf": 6
      },
      "terms": {
        "test": {
          "doc_freq": 2,
          "ttf": 4,
          "term_freq": 3,
          "tokens": [
            {
              "position": 0,
              "start_offset": 0,
              "end_offset": 4,
              "payload": "d29yZA=="
            },
            {
              "position": 1,
              "start_offset": 5,
              "end_offset": 9,
              "payload": "d29yZA=="
            },
            {
              "position": 2,
              "start_offset": 10,
              "end_offset": 14,
              "payload": "d29yZA=="
            }
          ]
        }
      }
    }
  }
}
A successful response from `GET /my-index-000001/_termvectors` with `per_field_analyzer` in the request body.
{
  "_index": "my-index-000001",
  "_version": 0,
  "found": true,
  "took": 6,
  "term_vectors": {
    "fullname": {
      "field_statistics": {
          "sum_doc_freq": 2,
          "doc_count": 4,
          "sum_ttf": 4
      },
      "terms": {
          "John Doe": {
            "term_freq": 1,
            "tokens": [
                {
                  "position": 0,
                  "start_offset": 0,
                  "end_offset": 8
                }
            ]
          }
      }
    }
  }
}
A successful response from `GET /my-index-000001/_termvectors` with a `filter` in the request body.
{
  "_index": "imdb",
  "_version": 0,
  "found": true,
  "term_vectors": {
      "plot": {
        "field_statistics": {
            "sum_doc_freq": 3384269,
            "doc_count": 176214,
            "sum_ttf": 3753460
        },
        "terms": {
            "armored": {
              "doc_freq": 27,
              "ttf": 27,
              "term_freq": 1,
              "score": 9.74725
            },
            "industrialist": {
              "doc_freq": 88,
              "ttf": 88,
              "term_freq": 1,
              "score": 8.590818
            },
            "stark": {
              "doc_freq": 44,
              "ttf": 47,
              "term_freq": 1,
              "score": 9.272792
            }
        }
      }
  }
}

















































EQL

Event Query Language (EQL) is a query language for event-based time series data, such as logs, metrics, and traces.

Learn more about EQL search





























































































































































































































































































































Get mapping definitions

GET /{index}/_mapping/field/{fields}

Retrieves mapping definitions for one or more fields. For data streams, the API retrieves field mappings for the stream’s backing indices.

This API is useful if you don't need a complete mapping or if an index mapping contains a large number of fields.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

  • fields string | array[string] Required

    Comma-separated list or wildcard expression of fields used to limit returned information. Supports wildcards (*).

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • include_defaults boolean

    If true, return all default settings in the response.

Responses

  • 200 application/json
    • * object Additional properties
      • mappings object Required
        • * object Additional properties
GET /{index}/_mapping/field/{fields}
curl \
 --request GET 'http://api.example.com/{index}/_mapping/field/{fields}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET publications/_mapping/field/title`, which returns the mapping of a field called `title`.
{
   "publications": {
      "mappings": {
          "title": {
             "full_name": "title",
             "mapping": {
                "title": {
                   "type": "text"
                }
             }
          }
       }
   }
}
A successful response from `GET publications/_mapping/field/author.id,abstract,name`. The get field mapping API also supports wildcard notation.
{
   "publications": {
      "mappings": {
        "author.id": {
           "full_name": "author.id",
           "mapping": {
              "id": {
                 "type": "text"
              }
           }
        },
        "abstract": {
           "full_name": "abstract",
           "mapping": {
              "abstract": {
                 "type": "text"
              }
           }
        }
     }
   }
}
A successful response from `GET publications/_mapping/field/a*`.
{
   "publications": {
      "mappings": {
         "author.name": {
            "full_name": "author.name",
            "mapping": {
               "name": {
                 "type": "text"
               }
            }
         },
         "abstract": {
            "full_name": "abstract",
            "mapping": {
               "abstract": {
                  "type": "text"
               }
            }
         },
         "author.id": {
            "full_name": "author.id",
            "mapping": {
               "id": {
                  "type": "text"
               }
            }
         }
      }
   }
}








































































Refresh an index

POST /{index}/_refresh

A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.

By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the index.refresh_interval setting.
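
For example, a sketch of changing the interval with the update index settings API, assuming an index named `my-index-000001`:

PUT /my-index-000001/_settings
{
  "index": {
    "refresh_interval": "30s"
  }
}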

Refresh requests are synchronous and do not return a response until the refresh operation completes.

Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.

If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's refresh=wait_for query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.
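
For example, the following sketch indexes a document and does not return until a refresh has made it visible to search (the index, ID, and field are only illustrations):

PUT /my-index-000001/_doc/1?refresh=wait_for
{
  "message": "hello"
}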

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

Responses

POST /{index}/_refresh
curl \
 --request POST 'http://api.example.com/{index}/_refresh' \
 --header "Authorization: $API_KEY"

Reload search analyzers Added in 7.3.0

GET /{index}/_reload_search_analyzers

Reload an index's search analyzers and their resources. For data streams, the API reloads search analyzers and resources for the stream's backing indices.

IMPORTANT: After reloading the search analyzers you should clear the request cache to make sure it doesn't contain responses derived from the previous versions of the analyzer.
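
For example, a sketch of clearing the request cache for a hypothetical index:

POST /my-index-000001/_cache/clear?request=true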

You can use the reload search analyzers API to pick up changes to synonym files used in the synonym_graph or synonym token filter of a search analyzer. To be eligible, the token filter must have an updateable flag of true and only be used in search analyzers.
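
As a sketch of an eligible setup (the index, file, and analyzer names are only illustrations), a search analyzer can reference a reloadable synonym filter like this:

PUT /my-index-000001
{
  "settings": {
    "analysis": {
      "filter": {
        "synonym_filter": {
          "type": "synonym_graph",
          "synonyms_path": "analysis/synonym.txt",
          "updateable": true
        }
      },
      "analyzer": {
        "my_search_analyzer": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "synonym_filter" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "text": {
        "type": "text",
        "analyzer": "standard",
        "search_analyzer": "my_search_analyzer"
      }
    }
  }
}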

NOTE: This API does not perform a reload for each shard of an index. Instead, it performs a reload for each node containing index shards. As a result, the total shard count returned by the API can differ from the number of index shards. Because reloading affects every node with an index shard, it is important to update the synonym file on every data node in the cluster--including nodes that don't contain a shard replica--before using this API. This ensures the synonym file is updated everywhere in the cluster in case shards are relocated in the future.

External documentation

Path parameters

  • index string | array[string] Required

    A comma-separated list of index names to reload analyzers for

Query parameters

  • allow_no_indices boolean

    Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes _all string or when no indices have been specified)

  • expand_wildcards string | array[string]

    Whether to expand wildcard expression to concrete indices that are open, closed or both.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • ignore_unavailable boolean

    Whether specified concrete indices should be ignored when unavailable (missing or closed)

  • resource string

    Changed resource to reload analyzers from if applicable

Responses

GET /{index}/_reload_search_analyzers
curl \
 --request GET 'http://api.example.com/{index}/_reload_search_analyzers' \
 --header "Authorization: $API_KEY"

























































































































Get lifecycle policies

GET /_ilm/policy

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

Responses

GET /_ilm/policy
curl \
 --request GET 'http://api.example.com/_ilm/policy' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response when retrieving a lifecycle policy.
{
  "my_policy": {
    "version": 1,
    "modified_date": 82392349,
    "policy": {
      "phases": {
        "warm": {
          "min_age": "10d",
          "actions": {
            "forcemerge": {
              "max_num_segments": 1
            }
          }
        },
        "delete": {
          "min_age": "30d",
          "actions": {
            "delete": {
              "delete_searchable_snapshot": true
            }
          }
        }
      }
    },
    "in_use_by" : {
      "indices" : [],
      "data_streams" : [],
      "composable_templates" : []
    }
  }
}





































Get an inference endpoint Added in 8.11.0

GET /_inference/{inference_id}

Path parameters

  • inference_id string Required

    The unique identifier of the inference endpoint.

Responses

  • 200 application/json
    • endpoints array[object] Required
      • chunking_settings object
        • The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        • overlap number

          The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        • The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        • strategy string

          The chunking strategy: sentence or word.

      • service string Required

        The service type

      • service_settings object Required
      • inference_id string Required

        The inference Id

      • task_type string Required

        Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

GET /_inference/{inference_id}
curl \
 --request GET 'http://api.example.com/_inference/{inference_id}' \
 --header "Authorization: $API_KEY"




































































Create a Google Vertex AI inference endpoint Added in 8.15.0

PUT /_inference/{task_type}/{googlevertexai_inference_id}

Create an inference endpoint to perform an inference task with the googlevertexai service.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
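
For example, a sketch of such a check, assuming a hypothetical model ID (this mainly applies to models deployed within your cluster):

GET /_ml/trained_models/my-model-id/_stats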

Path parameters

  • task_type string Required

    The type of the inference task that the model will perform.

    Values are rerank or text_embedding.

  • googlevertexai_inference_id string Required

    The unique identifier of the inference endpoint.

application/json

Body

  • chunking_settings object
    • The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

    • The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

    • strategy string

      The chunking strategy: sentence or word.

  • service string Required

    Value is googlevertexai.

  • service_settings object Required
    • location string Required

      The name of the location to use for the inference task. Refer to the Google documentation for the list of supported locations.

      External documentation
    • model_id string Required

      The name of the model to use for the inference task. Refer to the Google documentation for the list of supported models.

      External documentation
    • project_id string Required

      The name of the project to use for the inference task.

    • rate_limit object
    • service_account_json string Required

      A valid service account in JSON format for the Google Vertex AI API.

  • task_settings object
    • For a text_embedding task, truncate inputs longer than the maximum token length automatically.

    • top_n number

      For a rerank task, the number of the top N documents that should be returned.

Responses

  • 200 application/json
    • chunking_settings object
      • The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      • The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      • strategy string

        The chunking strategy: sentence or word.

    • service string Required

      The service type

    • service_settings object Required
    • inference_id string Required

      The inference Id

    • task_type string Required

      Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

PUT /_inference/{task_type}/{googlevertexai_inference_id}
curl \
 --request PUT 'http://api.example.com/_inference/{task_type}/{googlevertexai_inference_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n    \"service\": \"googlevertexai\",\n    \"service_settings\": {\n        \"service_account_json\": \"service-account-json\",\n        \"model_id\": \"model-id\",\n        \"location\": \"location\",\n        \"project_id\": \"project-id\"\n    }\n}"'
Request examples
Run `PUT _inference/text_embedding/google_vertex_ai_embeddings` to create an inference endpoint to perform a `text_embedding` task type.
{
    "service": "googlevertexai",
    "service_settings": {
        "service_account_json": "service-account-json",
        "model_id": "model-id",
        "location": "location",
        "project_id": "project-id"
    }
}
Run `PUT _inference/rerank/google_vertex_ai_rerank` to create an inference endpoint to perform a `rerank` task type.
{
    "service": "googlevertexai",
    "service_settings": {
        "service_account_json": "service-account-json",
        "project_id": "project-id"
    }
}









































































































































































































































































































































































































Get anomaly detection job stats Added in 5.5.0

GET /_ml/anomaly_detectors/{job_id}/_stats

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job. It can be a job identifier, a group name, a comma-separated list of jobs, or a wildcard expression. If you do not specify one of these options, the API returns information for all anomaly detection jobs.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    1. Contains wildcard expressions and there are no jobs that match.
    2. Contains the _all string or no identifiers and there are no matches.
    3. Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

Responses

GET /_ml/anomaly_detectors/{job_id}/_stats
curl \
 --request GET 'http://api.example.com/_ml/anomaly_detectors/{job_id}/_stats' \
 --header "Authorization: $API_KEY"
























Get anomaly records for an anomaly detection job Added in 5.4.0

GET /_ml/anomaly_detectors/{job_id}/results/records

Records contain the detailed analytical results. They describe the anomalous activity that has been identified in the input data based on the detector configuration. There can be many anomaly records depending on the characteristics and size of the input data. In practice, there are often too many to be able to manually process them. The machine learning features therefore perform a sophisticated aggregation of the anomaly records into buckets. The number of record results depends on the number of anomalies found in each bucket, which relates to the number of time series being modeled and the number of detectors.
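
For example, a sketch of a request that returns records sorted by score, assuming a hypothetical job named `low_request_rate`:

GET /_ml/anomaly_detectors/low_request_rate/results/records
{
  "sort": "record_score",
  "desc": true,
  "start": "1454944100000"
}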

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job.

Query parameters

  • desc boolean

    If true, the results are sorted in descending order.

  • end string | number

    Returns records with timestamps earlier than this time. The default value means results are not limited to specific timestamps.

  • exclude_interim boolean

    If true, the output excludes interim results.

  • from number

    Skips the specified number of records.

  • record_score number

    Returns records with anomaly scores greater or equal than this value.

  • size number

    Specifies the maximum number of records to obtain.

  • sort string

    Specifies the sort field for the requested records.

  • start string | number

    Returns records with timestamps after this time. The default value means results are not limited to specific timestamps.

application/json

Body

  • desc boolean

    Refer to the description for the desc query parameter.

  • end string | number

    A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

  • exclude_interim boolean

    Refer to the description for the exclude_interim query parameter.

  • page object
    • from number

      Skips the specified number of items.

    • size number

      Specifies the maximum number of items to obtain.

  • record_score number

    Refer to the description for the record_score query parameter.

  • sort string

    Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

  • start string | number

    A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

Responses

  • 200 application/json
    • count number Required
    • records array[object] Required
      • actual array[number]

        The actual value for the bucket.

      • anomaly_score_explanation object
      • Time unit for seconds

      • The field used to split the data. In particular, this property is used for analyzing the splits with respect to their own history. It is used for finding unusual values in the context of the split.

      • The value of by_field_name.

      • causes array[object]

        For population analysis, an over field must be specified in the detector. This property contains an array of anomaly records that are the causes for the anomaly that has been identified for the over field. This sub-resource contains the most anomalous records for the over_field_name. For scalability reasons, a maximum of the 10 most significant causes of the anomaly are returned. As part of the core analytical modeling, these low-level anomaly records are aggregated for their parent over field record. The causes resource contains similar elements to the record resource, namely actual, typical, geo_results.actual_point, geo_results.typical_point, *_field_name and *_field_value. Probability and scores are not applicable to causes.

      • detector_index number Required

        A unique identifier for the detector.

      • Certain functions require a field to operate on, for example, sum(). For those functions, this value is the name of the field to be analyzed.

      • function string

        The function in which the anomaly occurs, as specified in the detector configuration. For example, max.

      • The description of the function in which the anomaly occurs, as specified in the detector configuration.

      • geo_results object
        • The actual value for the bucket formatted as a geo_point.

        • The typical value for the bucket formatted as a geo_point.

      • influencers array[object]

        If influencers were specified in the detector configuration, this array contains influencers that contributed to or were to blame for an anomaly.

      • initial_record_score number Required

        A normalized score between 0-100, which is based on the probability of the anomalousness of this record. This is the initial value that was calculated at the time the bucket was processed.

      • is_interim boolean Required

        If true, this is an interim result. In other words, the results are calculated based on partial input data.

      • job_id string Required

        Identifier for the anomaly detection job.

      • The field used to split the data. In particular, this property is used for analyzing the splits with respect to the history of all splits. It is used for finding unusual values in the population of all splits.

      • The value of over_field_name.

      • The field used to segment the analysis. When you use this property, you have completely independent baselines for each value of this field.

      • The value of partition_field_name.

      • probability number Required

        The probability of the individual anomaly occurring, in the range 0 to 1. For example, 0.0000772031. This value can be held to a high precision of over 300 decimal places, so the record_score is provided as a human-readable and friendly interpretation of this.

      • record_score number Required

        A normalized score between 0-100, which is based on the probability of the anomalousness of this record. Unlike initial_record_score, this value will be updated by a re-normalization process as new data is analyzed.

      • result_type string Required

        Internal. This is always set to record.

      • Time unit for milliseconds

      • typical array[number]

        The typical value for the bucket, according to analytical modeling.

GET /_ml/anomaly_detectors/{job_id}/results/records
curl \
 --request GET 'http://api.example.com/_ml/anomaly_detectors/{job_id}/results/records' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"desc":true,"":"string","exclude_interim":true,"page":{"from":42.0,"size":42.0},"record_score":42.0,"sort":"string"}'
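
As a sketch (the job ID, sort field, and timestamp are illustrative, not from this reference), a typical request returns the highest-scoring records first:

GET _ml/anomaly_detectors/low_request_rate/results/records
{
  "sort": "record_score",
  "desc": true,
  "start": "1454944100000"
}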
















































Update a filter Added in 6.4.0

POST /_ml/filters/{filter_id}/_update

Updates the description of a filter, adds items, or removes items from the list.

Path parameters

  • filter_id string Required

    A string that uniquely identifies a filter.

application/json

Body Required

Responses

POST /_ml/filters/{filter_id}/_update
curl \
 --request POST 'http://api.example.com/_ml/filters/{filter_id}/_update' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"add_items":["string"],"description":"string","remove_items":["string"]}'
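
For example (the filter ID and items are illustrative), a request that adds one item to a filter and removes another might look like this:

POST _ml/filters/safe_domains/_update
{
  "description": "Updated list of domains",
  "add_items": ["*.myorg.com"],
  "remove_items": ["wikipedia.org"]
}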













































































































































































































Create or update a query rule Added in 8.15.0

PUT /_query_rules/{ruleset_id}/_rule/{rule_id}

Create or update a query rule within a query ruleset.

IMPORTANT: Due to limitations within pinned queries, you can only pin documents using ids or docs, but cannot use both in a single rule. It is advised to use one or the other in query rulesets to avoid errors. Additionally, pinned queries have a maximum limit of 100 pinned hits. If multiple matching rules pin more than 100 documents, only the first 100 documents are pinned in the order they are specified in the ruleset.

Path parameters

  • ruleset_id string Required

    The unique identifier of the query ruleset containing the rule to be created or updated.

  • rule_id string Required

    The unique identifier of the query rule within the specified ruleset to be created or updated.

application/json

Body Required

  • type string Required

    Values are pinned or exclude.

  • criteria object | array[object] Required

    The criteria that must be met for the rule to be applied. If multiple criteria are specified for a rule, all criteria must be met for the rule to be applied.

    One of:
    Hide attributes Show attributes
    • type string Required

      Values are global, exact, exact_fuzzy, fuzzy, prefix, suffix, contains, lt, lte, gt, gte, or always.

    • metadata string

      The metadata field to match against. This metadata will be used to match against match_criteria sent in the rule. It is required for all criteria types except always.

    • values array[object]

      The values to match against the metadata field. Only one value must match for the criteria to be met. It is required for all criteria types except always.

  • actions object Required
    Hide actions attributes Show actions attributes object
    • ids array[string]

      The unique document IDs of the documents to apply the rule to. Only one of ids or docs may be specified and at least one must be specified.

    • docs array[object]

      The documents to apply the rule to. Only one of ids or docs may be specified and at least one must be specified. There is a maximum value of 100 documents in a rule. You can specify the following attributes for each document:

      • _index: The index of the document to pin.
      • _id: The unique document ID.
      Hide docs attributes Show docs attributes object
  • priority number

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_query_rules/{ruleset_id}/_rule/{rule_id}
curl \
 --request PUT 'http://api.example.com/_query_rules/{ruleset_id}/_rule/{rule_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"type\": \"pinned\",\n  \"criteria\": [\n    {\n      \"type\": \"contains\",\n      \"metadata\": \"user_query\",\n      \"values\": [\"pugs\", \"puggles\"]\n    }\n  ],\n  \"actions\": {\n    \"ids\": [\"id1\", \"id2\"]\n  }\n}"'
Request example
An illustrative request body for `PUT _query_rules/my-ruleset/_rule/my-rule1` (the ruleset, rule, and document IDs are example values). It creates a pinned rule that pins two documents whenever the `user_query` metadata contains `pugs` or `puggles`.
{
  "type": "pinned",
  "criteria": [
    {
      "type": "contains",
      "metadata": "user_query",
      "values": [ "pugs", "puggles" ]
    }
  ],
  "actions": {
    "ids": [ "id1", "id2" ]
  }
}

























































Search rolled-up data Deprecated Technical preview

POST /{index}/_rollup_search

The rollup search endpoint is needed because, internally, rolled-up documents utilize a different document structure than the original data. It rewrites standard Query DSL into a format that matches the rollup documents then takes the response and rewrites it back to what a client would expect given the original query.

The request body supports a subset of features from the regular search API. The following functionality is not available:

  • size: Because rollups work on pre-aggregated data, no search hits can be returned and so size must be set to zero or omitted entirely.
  • highlighter, suggestors, post_filter, profile, explain: These are similarly disallowed.

Searching both historical rollup and non-rollup data

The rollup search API has the capability to search across both "live" non-rollup data and the aggregated rollup data. This is done by simply adding the live indices to the URI. For example:

GET sensor-1,sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
     "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}

The rollup search endpoint does two things when the search runs:

  • The original request is sent to the non-rollup index unaltered.
  • A rewritten version of the original request is sent to the rollup index.

When the two responses are received, the endpoint rewrites the rollup response and merges the two together. During the merging process, if there is any overlap in buckets between the two responses, the buckets from the non-rollup index are used.

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams and indices used to limit the request. This parameter has the following rules:

    • At least one data stream, index, or wildcard expression must be specified. This target can include a rollup or non-rollup index. For data streams, the stream's backing indices can only serve as non-rollup indices. Omitting the parameter or using _all are not permitted.
    • Multiple non-rollup indices may be specified.
    • Only one rollup index may be specified. If more than one are supplied, an exception occurs.
    • Wildcard expressions (*) may be used. If they match more than one rollup index, an exception occurs. However, you can use an expression to match multiple non-rollup indices or data streams.

Query parameters

  • rest_total_hits_as_int boolean

    Indicates whether hits.total should be rendered as an integer or an object in the REST search response.

  • typed_keys boolean

    Specify whether aggregation and suggester names should be prefixed by their respective types in the response

application/json

Body Required

Responses

POST /{index}/_rollup_search
curl \
 --request POST 'http://api.example.com/{index}/_rollup_search' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"size\": 0,\n  \"aggregations\": {\n    \"max_temperature\": {\n      \"max\": {\n        \"field\": \"temperature\"\n      }\n    }\n  }\n}"'
Request example
Search rolled up data stored in `sensor_rollup` with `GET /sensor_rollup/_rollup_search`
{
  "size": 0,
  "aggregations": {
    "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}
Response examples (200)
An abbreviated response from `GET /sensor_rollup/_rollup_search` with a `max` aggregation on a `temperature` field. The response provides some metadata about the request (`took`, `_shards`), the search hits (which are always empty for rollup searches), and the aggregation response.
{
  "took" : 102,
  "timed_out" : false,
  "terminated_early" : false,
  "_shards" : {} ,
  "hits" : {
    "total" : {
        "value": 0,
        "relation": "eq"
    },
    "max_score" : 0.0,
    "hits" : [ ]
  },
  "aggregations" : {
    "max_temperature" : {
      "value" : 202.0
    }
  }
}






































































































Count search results

POST /_count

Get the number of documents matching a query.

The query can be provided either by using a simple query string as a parameter, or by defining Query DSL within the request body. The query is optional. When no query is provided, the API uses match_all to count all the documents.

The count API supports multi-target syntax. You can run a single count API search across multiple data streams and indices.

The operation is broadcast across all shards. For each shard ID group, a replica is chosen and the search is run against it. This means that replicas increase the scalability of the count.
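
For example (the index names are illustrative), a single request can count matching documents across several indices at once:

GET /my-index-000001,my-index-000002/_count?q=user.id:kimchy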

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • analyzer string

    The analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.

  • analyze_wildcard boolean

    If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.

  • default_operator string

    The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.

    Values are and, AND, or, or OR.

  • df string

    The field to use as a default when no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • ignore_throttled boolean Deprecated

    If true, concrete, expanded, or aliased indices are ignored when frozen.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.

  • min_score number

    The minimum _score value that documents must have to be included in the result.

  • preference string

    The node or shard the operation should be performed on. By default, it is random.

  • routing string

    A custom value used to route operations to a specific shard.

  • terminate_after number

    The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.

    IMPORTANT: Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.

  • q string

    The query in Lucene query string syntax. This parameter cannot be used with a request body.

application/json

Body

Responses

POST /_count
curl \
 --request POST 'http://api.example.com/_count' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"query\" : {\n    \"term\" : { \"user.id\" : \"kimchy\" }\n  }\n}"'
Request example
Run `GET /my-index-000001/_count?q=user:kimchy`. Alternatively, run `GET /my-index-000001/_count` with the same query in the request body. Both requests count the number of documents in `my-index-000001` with a `user.id` of `kimchy`.
{
  "query" : {
    "term" : { "user.id" : "kimchy" }
  }
}
Response examples (200)
A successful response from `GET /my-index-000001/_count?q=user:kimchy`.
{
  "count": 1,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  }
}








Explain a document match result

GET /{index}/_explain/{id}

Get information about why a specific document matches, or doesn't match, a query. It computes a score explanation for a query and a specific document.

Path parameters

  • index string Required

    Index names that are used to limit the request. Only a single index name can be provided to this parameter.

  • id string Required

    The document identifier.

Query parameters

  • analyzer string

    The analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.

  • analyze_wildcard boolean

    If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.

  • default_operator string

    The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.

    Values are and, AND, or, or OR.

  • df string

    The field to use as default where no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    True or false to return the _source field or not or a list of fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    A comma-separated list of stored fields to return in the response.

  • q string

    The query in the Lucene query string syntax.

application/json

Body

Responses

GET /{index}/_explain/{id}
curl \
 --request GET 'http://api.example.com/{index}/_explain/{id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"query\" : {\n    \"match\" : { \"message\" : \"elasticsearch\" }\n  }\n}"'
Request example
Run `GET /my-index-000001/_explain/0` with the request body. Alternatively, run `GET /my-index-000001/_explain/0?q=message:elasticsearch`
{
  "query" : {
    "match" : { "message" : "elasticsearch" }
  }
}
Response examples (200)
A successful response from `GET /my-index-000001/_explain/0`.
{
  "_index":"my-index-000001",
  "_id":"0",
  "matched":true,
  "explanation":{
      "value":1.6943598,
      "description":"weight(message:elasticsearch in 0) [PerFieldSimilarity], result of:",
      "details":[
        {
            "value":1.6943598,
            "description":"score(freq=1.0), computed as boost * idf * tf from:",
            "details":[
              {
                  "value":2.2,
                  "description":"boost",
                  "details":[]
              },
              {
                  "value":1.3862944,
                  "description":"idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:",
                  "details":[
                    {
                        "value":1,
                        "description":"n, number of documents containing term",
                        "details":[]
                    },
                    {
                        "value":5,
                        "description":"N, total number of documents with field",
                        "details":[]
                    }
                  ]
              },
              {
                  "value":0.5555556,
                  "description":"tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:",
                  "details":[
                    {
                        "value":1.0,
                        "description":"freq, occurrences of term within document",
                        "details":[]
                    },
                    {
                        "value":1.2,
                        "description":"k1, term saturation parameter",
                        "details":[]
                    },
                    {
                        "value":0.75,
                        "description":"b, length normalization parameter",
                        "details":[]
                    },
                    {
                        "value":3.0,
                        "description":"dl, length of field",
                        "details":[]
                    },
                    {
                        "value":5.4,
                        "description":"avgdl, average length of field",
                        "details":[]
                    }
                  ]
              }
            ]
        }
      ]
  }
}

Explain a document match result

POST /{index}/_explain/{id}

Get information about why a specific document matches, or doesn't match, a query. It computes a score explanation for a query and a specific document.

Path parameters

  • index string Required

    Index names that are used to limit the request. Only a single index name can be provided to this parameter.

  • id string Required

    The document identifier.

Query parameters

  • analyzer string

    The analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.

  • analyze_wildcard boolean

    If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.

  • default_operator string

    The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.

    Values are and, AND, or, or OR.

  • df string

    The field to use as default where no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    True or false to return the _source field or not or a list of fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    A comma-separated list of stored fields to return in the response.

  • q string

    The query in the Lucene query string syntax.

application/json

Body

Responses

POST /{index}/_explain/{id}
curl \
 --request POST 'http://api.example.com/{index}/_explain/{id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"query\" : {\n    \"match\" : { \"message\" : \"elasticsearch\" }\n  }\n}"'
Request example
Run `GET /my-index-000001/_explain/0` with the request body. Alternatively, run `GET /my-index-000001/_explain/0?q=message:elasticsearch`
{
  "query" : {
    "match" : { "message" : "elasticsearch" }
  }
}
Response examples (200)
A successful response from `GET /my-index-000001/_explain/0`.
{
  "_index":"my-index-000001",
  "_id":"0",
  "matched":true,
  "explanation":{
      "value":1.6943598,
      "description":"weight(message:elasticsearch in 0) [PerFieldSimilarity], result of:",
      "details":[
        {
            "value":1.6943598,
            "description":"score(freq=1.0), computed as boost * idf * tf from:",
            "details":[
              {
                  "value":2.2,
                  "description":"boost",
                  "details":[]
              },
              {
                  "value":1.3862944,
                  "description":"idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:",
                  "details":[
                    {
                        "value":1,
                        "description":"n, number of documents containing term",
                        "details":[]
                    },
                    {
                        "value":5,
                        "description":"N, total number of documents with field",
                        "details":[]
                    }
                  ]
              },
              {
                  "value":0.5555556,
                  "description":"tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:",
                  "details":[
                    {
                        "value":1.0,
                        "description":"freq, occurrences of term within document",
                        "details":[]
                    },
                    {
                        "value":1.2,
                        "description":"k1, term saturation parameter",
                        "details":[]
                    },
                    {
                        "value":0.75,
                        "description":"b, length normalization parameter",
                        "details":[]
                    },
                    {
                        "value":3.0,
                        "description":"dl, length of field",
                        "details":[]
                    },
                    {
                        "value":5.4,
                        "description":"avgdl, average length of field",
                        "details":[]
                    }
                  ]
              }
            ]
        }
      ]
  }
}








































































































Search a vector tile Added in 7.15.0

POST /{index}/_mvt/{field}/{zoom}/{x}/{y}

Search a vector tile for geospatial values. Before using this API, you should be familiar with the Mapbox vector tile specification. The API returns results as a binary mapbox vector tile.

Internally, Elasticsearch translates a vector tile search API request into a search containing:

  • A geo_bounding_box query on the <field>. The query uses the <zoom>/<x>/<y> tile as a bounding box.
  • A geotile_grid or geohex_grid aggregation on the <field>. The grid_agg parameter determines the aggregation type. The aggregation uses the <zoom>/<x>/<y> tile as a bounding box.
  • Optionally, a geo_bounds aggregation on the <field>. The search only includes this aggregation if the exact_bounds parameter is true.
  • If the optional parameter with_labels is true, the internal search will include a dynamic runtime field that calls the getLabelPosition function of the geometry doc value. This enables the generation of new point features containing suggested geometry labels, so that, for example, multi-polygons will have only one label.

For example, Elasticsearch may translate a vector tile search API request with a grid_agg argument of geotile and an exact_bounds argument of true into the following search:

GET my-index/_search
{
  "size": 10000,
  "query": {
    "geo_bounding_box": {
      "my-geo-field": {
        "top_left": {
          "lat": -40.979898069620134,
          "lon": -45
        },
        "bottom_right": {
          "lat": -66.51326044311186,
          "lon": 0
        }
      }
    }
  },
  "aggregations": {
    "grid": {
      "geotile_grid": {
        "field": "my-geo-field",
        "precision": 11,
        "size": 65536,
        "bounds": {
          "top_left": {
            "lat": -40.979898069620134,
            "lon": -45
          },
          "bottom_right": {
            "lat": -66.51326044311186,
            "lon": 0
          }
        }
      }
    },
    "bounds": {
      "geo_bounds": {
        "field": "my-geo-field",
        "wrap_longitude": false
      }
    }
  }
}

The API returns results as a binary Mapbox vector tile. Mapbox vector tiles are encoded as Google Protobufs (PBF). By default, the tile contains three layers:

  • A hits layer containing a feature for each <field> value matching the geo_bounding_box query.
  • An aggs layer containing a feature for each cell of the geotile_grid or geohex_grid. The layer only contains features for cells with matching data.
  • A meta layer containing:
    • A feature containing a bounding box. By default, this is the bounding box of the tile.
    • Value ranges for any sub-aggregations on the geotile_grid or geohex_grid.
    • Metadata for the search.

The API only returns features that can display at its zoom level. For example, if a polygon feature has no area at its zoom level, the API omits it. The API returns errors as UTF-8 encoded JSON.

IMPORTANT: You can specify several options for this API as either a query parameter or request body parameter. If you specify both parameters, the query parameter takes precedence.

Grid precision for geotile

For a grid_agg of geotile, you can use cells in the aggs layer as tiles for lower zoom levels. grid_precision represents the additional zoom levels available through these cells. The final precision is computed as follows: <zoom> + grid_precision. For example, if <zoom> is 7 and grid_precision is 8, then the geotile_grid aggregation will use a precision of 15. The maximum final precision is 29. The grid_precision also determines the number of cells for the grid as follows: (2^grid_precision) x (2^grid_precision). For example, a value of 8 divides the tile into a grid of 256 x 256 cells. The aggs layer only contains features for cells with matching data.

Grid precision for geohex

For a grid_agg of geohex, Elasticsearch uses <zoom> and grid_precision to calculate a final precision as follows: <zoom> + grid_precision.

This precision determines the H3 resolution of the hexagonal cells produced by the geohex aggregation. The following table maps the H3 resolution for each precision. For example, if <zoom> is 3 and grid_precision is 3, the precision is 6. At a precision of 6, hexagonal cells have an H3 resolution of 2. If <zoom> is 3 and grid_precision is 4, the precision is 7. At a precision of 7, hexagonal cells have an H3 resolution of 3.

Precision Unique tile bins H3 resolution Unique hex bins Ratio
1 4 0 122 30.5
2 16 0 122 7.625
3 64 1 842 13.15625
4 256 1 842 3.2890625
5 1024 2 5882 5.744140625
6 4096 2 5882 1.436035156
7 16384 3 41162 2.512329102
8 65536 3 41162 0.6280822754
9 262144 4 288122 1.099098206
10 1048576 4 288122 0.2747745514
11 4194304 5 2016842 0.4808526039
12 16777216 6 14117882 0.8414913416
13 67108864 6 14117882 0.2103728354
14 268435456 7 98825162 0.3681524172
15 1073741824 8 691776122 0.644266719
16 4294967296 8 691776122 0.1610666797
17 17179869184 9 4842432842 0.2818666889
18 68719476736 10 33897029882 0.4932667053
19 274877906944 11 237279209162 0.8632167343
20 1099511627776 11 237279209162 0.2158041836
21 4398046511104 12 1660954464122 0.3776573213
22 17592186044416 13 11626681248842 0.6609003122
23 70368744177664 13 11626681248842 0.165225078
24 281474976710656 14 81386768741882 0.2891438866
25 1125899906842620 15 569707381193162 0.5060018015
26 4503599627370500 15 569707381193162 0.1265004504
27 18014398509482000 15 569707381193162 0.03162511259
28 72057594037927900 15 569707381193162 0.007906278149
29 288230376151712000 15 569707381193162 0.001976569537

Hexagonal cells don't align perfectly on a vector tile. Some cells may intersect more than one vector tile. To compute the H3 resolution for each precision, Elasticsearch compares the average density of hexagonal bins at each resolution with the average density of tile bins at each zoom level. Elasticsearch uses the H3 resolution that is closest to the corresponding geotile density.

External documentation

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, or aliases to search

  • field string Required

    Field containing geospatial data to return

  • zoom number Required

    Zoom level for the vector tile to search

  • x number Required

    X coordinate for the vector tile to search

  • y number Required

    Y coordinate for the vector tile to search

Query parameters

  • exact_bounds boolean

    If false, the meta layer's feature is the bounding box of the tile. If true, the meta layer's feature is a bounding box resulting from a geo_bounds aggregation. The aggregation runs on values that intersect the <zoom>/<x>/<y> tile with wrap_longitude set to false. The resulting bounding box may be larger than the vector tile.

  • extent number

    The size, in pixels, of a side of the tile. Vector tiles are square with equal sides.

  • grid_agg string

    Aggregation used to create a grid for field.

    Values are geotile or geohex.

  • grid_precision number

    Additional zoom levels available through the aggs layer. For example, if <zoom> is 7 and grid_precision is 8, you can zoom in up to level 15. Accepts 0-8. If 0, results don't include the aggs layer.

  • grid_type string

    Determines the geometry type for features in the aggs layer. In the aggs layer, each feature represents a geotile_grid cell. If 'grid', each feature is a Polygon of the cell's bounding box. If 'point', each feature is a Point that is the centroid of the cell.

    Values are grid, point, or centroid.

  • size number

    Maximum number of features to return in the hits layer. Accepts 0-10000. If 0, results don't include the hits layer.

  • with_labels boolean

    If true, the hits and aggs layers will contain additional point features representing suggested label positions for the original features.

    • Point and MultiPoint features will have one of the points selected.
    • Polygon and MultiPolygon features will have a single point generated, either the centroid, if it is within the polygon, or another point within the polygon selected from the sorted triangle-tree.
    • LineString features will likewise provide a roughly central point selected from the triangle-tree.
    • The aggregation results will provide one central point for each aggregation bucket.

    All attributes from the original features will also be copied to the new label features. In addition, the new features will be distinguishable using the tag _mvt_label_position.
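
    For example (reusing the museums index and tile coordinates from the response example below), label positions can be requested with a query parameter:

    GET museums/_mvt/location/13/4207/2692?with_labels=true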

application/json

Body

  • aggs object

    Sub-aggregations for the geotile_grid.

    It supports the following aggregation types:

    • avg
    • boxplot
    • cardinality
    • extended stats
    • max
    • median absolute deviation
    • min
    • percentile
    • percentile-rank
    • stats
    • sum
    • value count

    The aggregation names can't start with _mvt_. The _mvt_ prefix is reserved for internal aggregations.

  • buffer number

    The size, in pixels, of a clipping buffer outside the tile. This allows renderers to avoid outline artifacts from geometries that extend past the extent of the tile.

  • exact_bounds boolean

    If false, the meta layer's feature is the bounding box of the tile. If true, the meta layer's feature is a bounding box resulting from a geo_bounds aggregation. The aggregation runs on values that intersect the <zoom>/<x>/<y> tile with wrap_longitude set to false. The resulting bounding box may be larger than the vector tile.

  • extent number

    The size, in pixels, of a side of the tile. Vector tiles are square with equal sides.

  • fields string | array[string]
  • grid_agg string

    Values are geotile or geohex.

  • grid_precision number

    Additional zoom levels available through the aggs layer. For example, if <zoom> is 7 and grid_precision is 8, you can zoom in up to level 15. Accepts 0-8. If 0, results don't include the aggs layer.

  • grid_type string

    Values are grid, point, or centroid.

  • query object

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    External documentation
  • Hide runtime_mappings attribute Show runtime_mappings attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • fields object

        For type composite

        Hide fields attribute Show fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        Hide fetch_fields attributes Show fetch_fields attributes object
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • script object
        Hide script attributes Show script attributes object
        • source string | object

          One of:
        • id string
        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang string

          Any of:

          Values are painless, expression, mustache, or java.

        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • type string Required

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

  • size number

    The maximum number of features to return in the hits layer. Accepts 0-10000. If 0, results don't include the hits layer.

  • sort string | object | array[string | object]

    One of:

    Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

  • track_total_hits boolean | number

    Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. Defaults to 10,000 hits.

  • with_labels boolean

    If true, the hits and aggs layers will contain additional point features representing suggested label positions for the original features.

    • Point and MultiPoint features will have one of the points selected.
    • Polygon and MultiPolygon features will have a single point generated, either the centroid, if it is within the polygon, or another point within the polygon selected from the sorted triangle-tree.
    • LineString features will likewise provide a roughly central point selected from the triangle-tree.
    • The aggregation results will provide one central point for each aggregation bucket.

    All attributes from the original features will also be copied to the new label features. In addition, the new features will be distinguishable using the tag _mvt_label_position.

Responses

POST /{index}/_mvt/{field}/{zoom}/{x}/{y}
curl \
 --request POST 'http://api.example.com/{index}/_mvt/{field}/{zoom}/{x}/{y}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"grid_agg\": \"geotile\",\n  \"grid_precision\": 2,\n  \"fields\": [\n    \"name\",\n    \"price\"\n  ],\n  \"query\": {\n    \"term\": {\n      \"included\": true\n    }\n  },\n  \"aggs\": {\n    \"min_price\": {\n      \"min\": {\n        \"field\": \"price\"\n      }\n    },\n    \"max_price\": {\n      \"max\": {\n        \"field\": \"price\"\n      }\n    },\n    \"avg_price\": {\n      \"avg\": {\n        \"field\": \"price\"\n      }\n    }\n  }\n}"'
Request example
Run `GET museums/_mvt/location/13/4207/2692` to search an index for `location` values that intersect the `13/4207/2692` vector tile.
{
  "grid_agg": "geotile",
  "grid_precision": 2,
  "fields": [
    "name",
    "price"
  ],
  "query": {
    "term": {
      "included": true
    }
  },
  "aggs": {
    "min_price": {
      "min": {
        "field": "price"
      }
    },
    "max_price": {
      "max": {
        "field": "price"
      }
    },
    "avg_price": {
      "avg": {
        "field": "price"
      }
    }
  }
}
Response examples (200)
A successful response from `GET museums/_mvt/location/13/4207/2692`. It returns results as a binary vector tile. When decoded into JSON, the tile contains the following data.
{
  "hits": {
    "extent": 4096,
    "version": 2,
    "features": [
      {
        "geometry": {
          "type": "Point",
          "coordinates": [
            3208,
            3864
          ]
        },
        "properties": {
          "_id": "1",
          "_index": "museums",
          "name": "NEMO Science Museum",
          "price": 1750
        },
        "type": 1
      },
      {
        "geometry": {
          "type": "Point",
          "coordinates": [
            3429,
            3496
          ]
        },
        "properties": {
          "_id": "3",
          "_index": "museums",
          "name": "Nederlands Scheepvaartmuseum",
          "price": 1650
        },
        "type": 1
      },
      {
        "geometry": {
          "type": "Point",
          "coordinates": [
            3429,
            3496
          ]
        },
        "properties": {
          "_id": "4",
          "_index": "museums",
          "name": "Amsterdam Centre for Architecture",
          "price": 0
        },
        "type": 1
      }
    ]
  },
  "aggs": {
    "extent": 4096,
    "version": 2,
    "features": [
      {
        "geometry": {
          "type": "Polygon",
          "coordinates": [
            [
              [
                3072,
                3072
              ],
              [
                4096,
                3072
              ],
              [
                4096,
                4096
              ],
              [
                3072,
                4096
              ],
              [
                3072,
                3072
              ]
            ]
          ]
        },
        "properties": {
          "_count": 3,
          "max_price.value": 1750.0,
          "min_price.value": 0.0,
          "avg_price.value": 1133.3333333333333
        },
        "type": 3
      }
    ]
  },
  "meta": {
    "extent": 4096,
    "version": 2,
    "features": [
      {
        "geometry": {
          "type": "Polygon",
          "coordinates": [
            [
              [
                0,
                0
              ],
              [
                4096,
                0
              ],
              [
                4096,
                4096
              ],
              [
                0,
                4096
              ],
              [
                0,
                0
              ]
            ]
          ]
        },
        "properties": {
          "_shards.failed": 0,
          "_shards.skipped": 0,
          "_shards.successful": 1,
          "_shards.total": 1,
          "aggregations._count.avg": 3.0,
          "aggregations._count.count": 1,
          "aggregations._count.max": 3.0,
          "aggregations._count.min": 3.0,
          "aggregations._count.sum": 3.0,
          "aggregations.avg_price.avg": 1133.3333333333333,
          "aggregations.avg_price.count": 1,
          "aggregations.avg_price.max": 1133.3333333333333,
          "aggregations.avg_price.min": 1133.3333333333333,
          "aggregations.avg_price.sum": 1133.3333333333333,
          "aggregations.max_price.avg": 1750.0,
          "aggregations.max_price.count": 1,
          "aggregations.max_price.max": 1750.0,
          "aggregations.max_price.min": 1750.0,
          "aggregations.max_price.sum": 1750.0,
          "aggregations.min_price.avg": 0.0,
          "aggregations.min_price.count": 1,
          "aggregations.min_price.max": 0.0,
          "aggregations.min_price.min": 0.0,
          "aggregations.min_price.sum": 0.0,
          "hits.max_score": 0.0,
          "hits.total.relation": "eq",
          "hits.total.value": 3,
          "timed_out": false,
          "took": 2
        },
        "type": 3
      }
    ]
  }
}

























































Render a search application query Technical preview

POST /_application/search_application/{name}/_render_query

Generate an Elasticsearch query using the specified query parameters and the search template associated with the search application or a default template if none is specified. If a parameter used in the search template is not specified in params, the parameter's default value will be used. The API returns the specific Elasticsearch query that would be generated and run by calling the search application search API.

You must have read privileges on the backing alias of the search application.

Path parameters

  • name string Required

    The name of the search application to render the query for.

application/json

Body

  • params object
    Hide params attribute Show params attribute object
    • * object Additional properties

Responses

POST /_application/search_application/{name}/_render_query
curl \
 --request POST 'http://api.example.com/_application/search_application/{name}/_render_query' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"params\": {\n    \"query_string\": \"my first query\",\n    \"text_fields\": [\n      {\n        \"name\": \"title\",\n        \"boost\": 5\n      },\n      {\n        \"name\": \"description\",\n        \"boost\": 1\n      }\n    ]\n  }\n}"'
Request example
Run `POST _application/search_application/my-app/_render_query` to generate a query for a search application called `my-app` that uses the search template.
{
  "params": {
    "query_string": "my first query",
    "text_fields": [
      {
        "name": "title",
        "boost": 5
      },
      {
        "name": "description",
        "boost": 1
      }
    ]
  }
}
Response examples (200)
A successful response for generating a query for a search application. The `from`, `size`, and `explain` parameters were not specified in the request, so the default values specified in the search template are used.
{
  "from": 0,
  "size": 10,
  "query": {
    "multi_match": {
      "query": "my first query",
      "fields": [
        "description^1.0",
        "title^5.0"
      ]
    }
  },
  "explain": false
}



















































































































































































































































































































































































































































































Restore a snapshot Added in 0.0.0

POST /_snapshot/{repository}/{snapshot}/_restore

Restore a snapshot of a cluster or data streams and indices.

You can restore a snapshot only to a running cluster with an elected master node. The snapshot repository must be registered and available to the cluster. The snapshot and cluster versions must be compatible.

To restore a snapshot, the cluster's global metadata must be writable. Ensure there aren't any cluster blocks that prevent writes. The restore operation ignores index blocks.

Before you restore a data stream, ensure the cluster contains a matching index template with data streams enabled. To check, use the index management feature in Kibana or the get index template API:

GET _index_template/*?filter_path=index_templates.name,index_templates.index_template.index_patterns,index_templates.index_template.data_stream

If no such template exists, you can create one or restore a cluster state that contains one. Without a matching index template, a data stream can't roll over or create backing indices.

If your snapshot contains data from App Search or Workplace Search, you must restore the Enterprise Search encryption key before you restore the snapshot.
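
As a minimal sketch (the repository, snapshot, and index names are illustrative), a restore request that recovers selected indices without the global cluster state could look like this:

POST /_snapshot/my_repository/my_snapshot/_restore?wait_for_completion=true
{
  "indices": "my-index-*",
  "ignore_unavailable": true,
  "include_global_state": false,
  "include_aliases": false
}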

External documentation

Path parameters

  • repository string Required

    The name of the repository to restore a snapshot from.

  • snapshot string Required

    The name of the snapshot to restore.

Query parameters

  • master_timeout string

    The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1.

  • wait_for_completion boolean

    If true, the request returns a response when the restore operation completes. The operation is complete when it finishes all attempts to recover primary shards for restored indices. This applies even if one or more of the recovery attempts fail.

    If false, the request returns a response when the restore operation initializes.

application/json

Body

  • feature_states array[string]

    The feature states to restore. If include_global_state is true, the request restores all feature states in the snapshot by default. If include_global_state is false, the request restores no feature states by default. Note that specifying an empty array will result in the default behavior. To restore no feature states, regardless of the include_global_state value, specify an array containing only the value none (["none"]).

  • ignore_index_settings string | array[string]

    The index settings to not restore from the snapshot. You can't use this option to ignore index.number_of_shards.

    For data streams, this option applies only to restored backing indices. New backing indices are configured using the data stream's matching index template.

  • ignore_unavailable boolean

    If true, the request ignores any index or data stream in indices that's missing from the snapshot. If false, the request returns an error for any missing index or data stream.

  • include_aliases boolean

    If true, the request restores aliases for any restored data streams and indices. If false, the request doesn't restore aliases.

  • include_global_state boolean

    If true, restore the cluster state. The cluster state includes:

    • Persistent cluster settings
    • Index templates
    • Legacy index templates
    • Ingest pipelines
    • Index lifecycle management (ILM) policies
    • Stored scripts
    • For snapshots taken after 7.12.0, feature states

    If include_global_state is true, the restore operation merges the legacy index templates in your cluster with the templates contained in the snapshot, replacing any existing ones whose name matches one in the snapshot. It completely removes all persistent settings, non-legacy index templates, ingest pipelines, and ILM lifecycle policies that exist in your cluster and replaces them with the corresponding items from the snapshot.

    Use the feature_states parameter to configure how feature states are restored.

    If include_global_state is true and a snapshot was created without a global state then the restore request will fail.

  • Hide index_settings attributes Show index_settings attributes object
    • index object
    • mode string
    • Hide soft_deletes attributes Show soft_deletes attributes object
      • enabled boolean

        Indicates whether soft deletes are enabled on the index.

      • Hide retention_lease attribute Show retention_lease attribute object
        • period string Required

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • sort object
      Hide sort attributes Show sort attributes object
    • Values are true, false, or checksum.

    • codec string
    • routing_partition_size number | string

      Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • auto_expand_replicas string | null

      One of:
    • merge object
      Hide merge attribute Show merge attribute object
      • Hide scheduler attributes Show scheduler attributes object
        • max_thread_count number | string

          Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

          Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • max_merge_count number | string

          Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

          Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • blocks object
      Hide blocks attributes Show blocks attributes object
      • read_only boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • read_only_allow_delete boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • read boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • write boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • metadata boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • analyze object
      Hide analyze attribute Show analyze attribute object
      • max_token_count number | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • Hide highlight attribute Show highlight attribute object
    • routing object
      Hide routing attributes Show routing attributes object
    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • Hide lifecycle attributes Show lifecycle attributes object
      • name string
      • indexing_complete boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • origination_date number

        If specified, this is the timestamp used to calculate the index age for its phase transitions. Use this setting if you create a new index that contains old data and want to use the original creation date to calculate the index age. Specified as a Unix epoch value in milliseconds.

      • parse_origination_date boolean

        Set to true to parse the origination date from the index name. This origination date is used to calculate the index age for its phase transitions. The index name must match the pattern .*-{date_format}-\d+, where the date_format is yyyy.MM.dd and the trailing digits are optional. An index that was rolled over would normally match the full format, for example logs-2016.10.31-000002. If the index name doesn't match the pattern, index creation fails.

      • step object
        Hide step attribute Show step attribute object
        • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • rollover_alias string

        The index alias to update when the index rolls over. Specify when using a policy that contains a rollover action. When the index rolls over, the alias is updated to reflect that the index is no longer the write index. For more information about rolling indices, see Rollover.

      • prefer_ilm boolean | string

        Preference for the system that manages a data stream backing index (preferring ILM when both ILM and DLM are applicable for an index).

    • creation_date number | string

      Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • creation_date_string string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • uuid string
    • version object
      Hide version attributes Show version attributes object
    • translog object
      Hide translog attributes Show translog attributes object
    • Hide query_string attribute Show query_string attribute object
      • lenient boolean | string Required

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • analysis object
      Hide analysis attributes Show analysis attributes object
    • settings object
    • time_series object
      Hide time_series attributes Show time_series attributes object
      • end_time string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      • start_time string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • queries object
      Hide queries attribute Show queries attribute object
      • cache object
        Hide cache attribute Show cache attribute object
    • similarity object

      Configure custom similarity settings to customize how search results are scored.

    • mapping object
      Hide mapping attributes Show mapping attributes object
      • coerce boolean
      • total_fields object
        Hide total_fields attributes Show total_fields attributes object
        • limit number | string

          The maximum number of fields in an index. Field and object mappings, as well as field aliases count towards this limit. The limit is in place to prevent mappings and searches from becoming too large. Higher values can lead to performance degradations and memory issues, especially in clusters with a high load or few resources.

        • ignore_dynamic_beyond_limit boolean | string

          This setting determines what happens when a dynamically mapped field would exceed the total fields limit. When set to false (the default), the index request of the document that tries to add a dynamic field to the mapping will fail with the message Limit of total fields [X] has been exceeded. When set to true, the index request will not fail. Instead, fields that would exceed the limit are not added to the mapping, similar to dynamic: false. The fields that were not added to the mapping will be added to the _ignored field.

      • depth object
        Hide depth attribute Show depth attribute object
        • limit number

          The maximum depth for a field, which is measured as the number of inner objects. For instance, if all fields are defined at the root object level, then the depth is 1. If there is one object mapping, then the depth is 2, etc.

      • nested_fields object
        Hide nested_fields attribute Show nested_fields attribute object
        • limit number

          The maximum number of distinct nested mappings in an index. The nested type should only be used in special cases, when arrays of objects need to be queried independently of each other. To safeguard against poorly designed mappings, this setting limits the number of unique nested types per index.

      • nested_objects object
        Hide nested_objects attribute Show nested_objects attribute object
        • limit number

          The maximum number of nested JSON objects that a single document can contain across all nested types. This limit helps to prevent out of memory errors when a document contains too many nested objects.

      • field_name_length object
        Hide field_name_length attribute Show field_name_length attribute object
        • limit number

          The maximum length of a field name. This setting does not directly address mapping explosion, but it can still be useful if you want to limit field name length. It usually isn’t necessary to change it; the default is fine unless a user starts adding a very large number of fields with very long names. Default is Long.MAX_VALUE (no limit).

      • dimension_fields object
        Hide dimension_fields attribute Show dimension_fields attribute object
        • limit number

          [preview] This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

      • source object
        Hide source attribute Show source attribute object
        • mode string Required

          Values are disabled, stored, or synthetic.

    • indexing.slowlog object
      Hide indexing.slowlog attributes Show indexing.slowlog attributes object
      • level string
      • source number
      • reformat boolean
      • threshold object
        Hide threshold attribute Show threshold attribute object
        • index object
          Hide index attributes Show index attributes object
          • warn string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • info string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • debug string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • trace string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • indexing_pressure object
      Hide indexing_pressure attribute Show indexing_pressure attribute object
      • memory object Required
        Hide memory attribute Show memory attribute object
        • limit number

          Number of outstanding bytes that may be consumed by indexing requests. When this limit is reached or exceeded, the node will reject new coordinating and primary operations. When replica operations consume 1.5x this limit, the node will reject new replica operations. Defaults to 10% of the heap.

    • store object
      Hide store attributes Show store attributes object
      • type string Required

        Any of:

        Values are fs, niofs, mmapfs, or hybridfs.

      • allow_mmap boolean

        You can restrict the use of the mmapfs and the related hybridfs store type via the setting node.store.allow_mmap. This is a boolean setting indicating whether or not memory mapping is allowed. The default is to allow it. This setting is useful, for example, if you are in an environment where you cannot control the creation of many memory maps and therefore need to disable the ability to use memory mapping.

  • indices string | array[string]
  • partial boolean

    If false, the entire restore operation will fail if one or more indices included in the snapshot do not have all primary shards available.

    If true, it allows restoring a partial snapshot of indices with unavailable shards. Only shards that were successfully included in the snapshot will be restored. All missing shards will be recreated as empty.

  • rename_pattern string

    A rename pattern to apply to restored data streams and indices. Data streams and indices matching the rename pattern will be renamed according to rename_replacement.

    The rename pattern is applied as a regular expression that supports referencing the original text, according to the appendReplacement logic.

    External documentation
  • rename_replacement string

    The rename replacement string that is used with the rename_pattern, as shown in the request sketch after this attribute list.

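As a rough sketch of how several of these request body fields fit together (the index_* pattern and the number_of_replicas override are illustrative assumptions, not values taken from this document):

# Restore indices matching index_*, rename each match to restored_index_<suffix>,
# and override an index setting on the restored indices.
curl \
 --request POST 'http://api.example.com/_snapshot/my_repository/snapshot_2/_restore?wait_for_completion=true' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{
  "indices": "index_*",
  "partial": false,
  "rename_pattern": "index_(.+)",
  "rename_replacement": "restored_index_$1",
  "index_settings": {
    "index.number_of_replicas": 0
  }
}'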
Responses

POST /_snapshot/{repository}/{snapshot}/_restore
curl \
 --request POST 'http://api.example.com/_snapshot/{repository}/{snapshot}/_restore' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"indices\": \"index_1,index_2\",\n  \"ignore_unavailable\": true,\n  \"include_global_state\": false,\n  \"rename_pattern\": \"index_(.+)\",\n  \"rename_replacement\": \"restored_index_$1\",\n  \"include_aliases\": false\n}"'
Request examples
Run `POST /_snapshot/my_repository/snapshot_2/_restore?wait_for_completion=true`. It restores `index_1` and `index_2` from `snapshot_2`. The `rename_pattern` and `rename_replacement` parameters indicate that any index matching the regular expression `index_(.+)` will be renamed using the pattern `restored_index_$1`. For example, `index_1` will be renamed to `restored_index_1`.
{
  "indices": "index_1,index_2",
  "ignore_unavailable": true,
  "include_global_state": false,
  "rename_pattern": "index_(.+)",
  "rename_replacement": "restored_index_$1",
  "include_aliases": false
}
Close `index_1`, then run `POST /_snapshot/my_repository/snapshot_2/_restore?wait_for_completion=true` to restore the index in place. You might want to perform this type of restore when the cluster allocation explain API reports `no_valid_shard_copy` and no other recovery options are available.
{
  "indices": "index_1"
}
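A minimal sketch of that in-place flow, assuming the same `my_repository`, `snapshot_2`, and `index_1` names used in the examples above:

# Close the existing index so the restore can replace its shards in place.
curl \
 --request POST 'http://api.example.com/index_1/_close' \
 --header "Authorization: $API_KEY"

# Restore only index_1 from snapshot_2 and wait for the restore to complete.
curl \
 --request POST 'http://api.example.com/_snapshot/my_repository/snapshot_2/_restore?wait_for_completion=true' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{
  "indices": "index_1"
}'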

Get the snapshot lifecycle management status

GET /_slm/status

Get the status of snapshot lifecycle management (SLM). The returned operation mode can be RUNNING, STOPPING, or STOPPED.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1.

  • timeout string

    The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
GET /_slm/status
curl \
 --request GET 'http://api.example.com/_slm/status' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _slm/status`.
{
  "operation_mode": "RUNNING"
}
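If you want to bound how long the status request waits, a sketch that sets the optional timeouts described above (the 30s values are illustrative):

# Fail the request if the master node connection or the overall response takes longer than 30 seconds.
curl \
 --request GET 'http://api.example.com/_slm/status?master_timeout=30s&timeout=30s' \
 --header "Authorization: $API_KEY"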