Authentication

The API accepts three different authentication methods:

API key auth (http_api_key)

Elasticsearch APIs support key-based authentication. You must create an API key and use the encoded value in the request header. For example:

curl -X GET "${ES_URL}/_cat/indices?v=true" \
  -H "Authorization: ApiKey ${API_KEY}"

To create and manage API keys, use the /_security/api_key APIs.
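
For example, a minimal sketch of creating a key with the create API key endpoint (the key name and expiration shown here are illustrative):

curl -X POST "${ES_URL}/_security/api_key" \
  -u "elastic:${PASSWORD}" \
  -H "Content-Type: application/json" \
  -d '{"name": "my-api-key", "expiration": "30d"}'

The header value is the base64 encoding of id:api_key from the response; recent versions also return it ready to use in the encoded field.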

Basic auth (http)

Basic auth tokens are constructed with the Basic keyword, followed by a space, followed by a base64-encoded string of your username and password, separated by a colon (username:password).

Example: send an Authorization: Basic aGVsbG86aGVsbG8= HTTP header with your requests to authenticate with the API.
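
A minimal sketch of building that header in a shell (hello:hello is an illustrative credential pair):

# base64-encode "username:password" without a trailing newline
AUTH=$(printf 'hello:hello' | base64)
curl "${ES_URL}/_cat/indices?v=true" \
  -H "Authorization: Basic ${AUTH}"

Equivalently, curl --user hello:hello builds the same header for you.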

Bearer auth (http)

Elasticsearch APIs support the use of bearer tokens in the Authorization HTTP header to authenticate with the API. For examples, refer to Token-based authentication services.
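
For example, assuming a token has already been obtained from one of those services, the request looks like:

curl -X GET "${ES_URL}/_cat/indices?v=true" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}"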


Delete an autoscaling policy Added in 7.11.0

DELETE /_autoscaling/policy/{name}

NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.

Path parameters

  • name string Required

    The name of the autoscaling policy.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
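
For example, a sketch that bounds both wait periods to 30 seconds (the policy name is illustrative):

curl -X DELETE "${ES_URL}/_autoscaling/policy/my_autoscaling_policy?master_timeout=30s&timeout=30s" \
  -H "Authorization: ApiKey ${API_KEY}"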

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_autoscaling/policy/{name}
curl \
 --request DELETE 'http://api.example.com/_autoscaling/policy/{name}' \
 --header "Authorization: $API_KEY"
Response examples (200)
This may be a response to either `DELETE /_autoscaling/policy/my_autoscaling_policy` or `DELETE /_autoscaling/policy/*`.
{
  "acknowledged": true
}

Create a behavioral analytics collection event Deprecated Technical preview

POST /_application/analytics/{collection_name}/event/{event_type}

Path parameters

  • collection_name string Required

    The name of the behavioral analytics collection.

  • event_type string Required

    The analytics event type.

    Values are page_view, search, or search_click.

Query parameters

  • debug boolean

    If true, the response includes additional details.

application/json

Body Required

The event payload, provided as a JSON object.

Responses

  • 200 application/json
POST /_application/analytics/{collection_name}/event/{event_type}
curl \
 --request POST 'http://api.example.com/_application/analytics/{collection_name}/event/{event_type}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"session\": {\n    \"id\": \"1797ca95-91c9-4e2e-b1bd-9c38e6f386a9\"\n  },\n  \"user\": {\n    \"id\": \"5f26f01a-bbee-4202-9298-81261067abbd\"\n  },\n  \"search\":{\n    \"query\": \"search term\",\n    \"results\": {\n      \"items\": [\n        {\n          \"document\": {\n            \"id\": \"123\",\n            \"index\": \"products\"\n          }\n        }\n      ],\n      \"total_results\": 10\n    },\n    \"sort\": {\n      \"name\": \"relevance\"\n    },\n    \"search_application\": \"website\"\n  },\n  \"document\":{\n    \"id\": \"123\",\n    \"index\": \"products\"\n  }\n}"'
Request example
Run `POST _application/analytics/my_analytics_collection/event/search_click` to send a `search_click` event to an analytics collection called `my_analytics_collection`.
{
  "session": {
    "id": "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"
  },
  "user": {
    "id": "5f26f01a-bbee-4202-9298-81261067abbd"
  },
  "search":{
    "query": "search term",
    "results": {
      "items": [
        {
          "document": {
            "id": "123",
            "index": "products"
          }
        }
      ],
      "total_results": 10
    },
    "sort": {
      "name": "relevance"
    },
    "search_application": "website"
  },
  "document":{
    "id": "123",
    "index": "products"
  }
}

Compact and aligned text (CAT)

The compact and aligned text (CAT) APIs are intended only for human consumption, using the Kibana console or command line. They are not intended for use by applications. For application consumption, it's recommended to use a corresponding JSON API. All the cat commands accept a `help` query string parameter that lists the headers and information they provide, and the `/_cat` command alone lists all the available commands.
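
For example, a quick sketch of both discovery conveniences:

# list all available CAT commands
curl "${ES_URL}/_cat" -H "Authorization: ApiKey ${API_KEY}"

# show the columns a specific command can return
curl "${ES_URL}/_cat/indices?help" -H "Authorization: ApiKey ${API_KEY}"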

Get shard allocation information

GET /_cat/allocation/{node_id}

Get a snapshot of the number of shards allocated to each data node and their disk space.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.

Path parameters

  • node_id string | array[string] Required

    A comma-separated list of node identifiers or names used to limit the returned information.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.
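
For example, a sketch that displays byte values in megabytes and sorts the table by available disk, using the columns described above:

curl "${ES_URL}/_cat/allocation?v=true&bytes=mb&h=node,disk.used,disk.avail&s=disk.avail:desc" \
  -H "Authorization: ApiKey ${API_KEY}"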

Responses

GET /_cat/allocation/{node_id}
curl \
 --request GET 'http://api.example.com/_cat/allocation/{node_id}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/allocation?v=true&format=json`. It shows a single shard is allocated to the one node available.
[
  {
    "shards": "1",
    "shards.undesired": "0",
    "write_load.forecast": "0.0",
    "disk.indices.forecast": "260b",
    "disk.indices": "260b",
    "disk.used": "47.3gb",
    "disk.avail": "43.4gb",
    "disk.total": "100.7gb",
    "disk.percent": "46",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "CSUXak2",
    "node.role": "himrst"
  }
]

Get component templates Added in 5.1.0

GET /_cat/component_templates/{name}

Get information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.

Path parameters

  • name string Required

    The name of the component template. It accepts wildcard expressions. If it is omitted, all component templates are returned.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    The period to wait for a connection to the master node.

Responses

GET /_cat/component_templates/{name}
curl \
 --request GET 'http://api.example.com/_cat/component_templates/{name}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/component_templates/my-template-*?v=true&s=name&format=json`.
[
  {
    "name": "my-template-1",
    "version": "null",
    "alias_count": "0",
    "mapping_count": "0",
    "settings_count": "1",
    "metadata_count": "0",
    "included_in": "[my-index-template]"
  },
  {
    "name": "my-template-2",
    "version": null,
    "alias_count": "0",
    "mapping_count": "3",
    "settings_count": "0",
    "metadata_count": "0",
    "included_in": "[my-index-template]"
  }
]

Get a document count

GET /_cat/count

Get quick access to a document count for a data stream, an index, or an entire cluster. The document count only includes live documents, not deleted documents which have not yet been removed by the merge process.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the count API.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
    • epoch number | string

      Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • timestamp string

      The time of day, expressed as HH:MM:SS.

    • count string

      The document count.

GET /_cat/count
curl \
 --request GET 'http://api.example.com/_cat/count' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/count/my-index-000001?v=true&format=json`. It retrieves the document count for the `my-index-000001` data stream or index.
[
  {
    "epoch": "1475868259",
    "timestamp": "15:24:20",
    "count": "120"
  }
]
A successful response from `GET /_cat/count?v=true&format=json`. It retrieves the document count for all data streams and indices in the cluster.
[
  {
    "epoch": "1475868259",
    "timestamp": "15:24:20",
    "count": "121"
  }
]

Get field data cache information

GET /_cat/fielddata/{fields}

Get the amount of heap memory currently used by the field data cache on every data node in the cluster.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes stats API.

Path parameters

  • fields string | array[string] Required

    Comma-separated list of fields used to limit returned information. To retrieve all fields, omit this parameter.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • fields string | array[string]

    Comma-separated list of fields used to limit returned information.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
GET /_cat/fielddata/{fields}
curl \
 --request GET 'http://api.example.com/_cat/fielddata/{fields}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/fielddata?v=true&fields=body&format=json`. You can specify an individual field in the request body or URL path. This example retrieves heap memory size information for the `body` field.
[
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "body",
    "size": "544b"
  }
]
A successful response from `GET /_cat/fielddata/body,soul?v=true&format=json`. You can specify a comma-separated list of fields in the request body or URL path. This example retrieves heap memory size information for the `body` and `soul` fields. To get information for all fields, run `GET /_cat/fielddata?v=true`.
[
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "1127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "body",
    "size": "544b"
  },
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "soul",
    "size": "480b"
  }
]

Get the cluster health status

GET /_cat/health

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the cluster health API.

This API is often used to check malfunctioning clusters. To help you track cluster health alongside log files and alerting systems, the API returns timestamps in two formats: HH:MM:SS, which is human-readable but includes no date information, and Unix epoch time, which is machine-sortable and includes date information. The latter format is useful for cluster recoveries that take multiple days. You can use the cat health API to verify cluster health across multiple nodes. You can also use the API to track the recovery of a large cluster over a longer period of time.
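
For example, a sketch that suppresses the timestamp columns via the ts parameter described below:

curl "${ES_URL}/_cat/health?v=true&ts=false" \
  -H "Authorization: ApiKey ${API_KEY}"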

Query parameters

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • ts boolean

    If true, returns HH:MM:SS and Unix epoch timestamps.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

GET /_cat/health
curl \
 --request GET 'http://api.example.com/_cat/health' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/health?v=true&format=json`. By default, it returns `HH:MM:SS` and Unix epoch timestamps.
[
  {
    "epoch": "1475871424",
    "timestamp": "16:17:04",
    "cluster": "elasticsearch",
    "status": "green",
    "node.total": "1",
    "node.data": "1",
    "shards": "1",
    "pri": "1",
    "relo": "0",
    "init": "0",
    "unassign": "0",
    "unassign.pri": "0",
    "pending_tasks": "0",
    "max_task_wait_time": "-",
    "active_shards_percent": "100.0%"
  }
]

Get CAT help

GET /_cat

Get help for the CAT APIs.

Responses

GET /_cat
curl \
 --request GET 'http://api.example.com/_cat' \
 --header "Authorization: $API_KEY"

Get datafeeds Added in 7.7.0

GET /_cat/ml/datafeeds

Get configuration and usage information about datafeeds. This API returns a maximum of 10,000 datafeeds. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.
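
For example, a sketch that selects a few columns and reports times in seconds (the column names are those listed under the query parameters below):

curl "${ES_URL}/_cat/ml/datafeeds?v=true&h=id,state,search.time&time=s" \
  -H "Authorization: ApiKey ${API_KEY}"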

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no datafeeds that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.
  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    • id string

      The datafeed identifier.

    • state string

      Values are started, stopped, starting, or stopping.

    • assignment_explanation string

      For started datafeeds only, contains messages relating to the selection of a node.

    • buckets.count string

      The number of buckets processed.

    • search.count string

      The number of searches run by the datafeed.

    • search.time string

      The total time the datafeed spent searching, in milliseconds.

    • search.bucket_avg string

      The average search time per bucket, in milliseconds.

    • search.exp_avg_hour string

      The exponential average search time per hour, in milliseconds.

    • node.id string

      The unique identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.name string

      The name of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.ephemeral_id string

      The ephemeral identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.address string

      The network address of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

GET /_cat/ml/datafeeds
curl \
 --request GET 'http://api.example.com/_cat/ml/datafeeds' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/ml/datafeeds?v=true&format=json`.
[
  {
    "id": "datafeed-high_sum_total_sales",
    "state": "stopped",
    "buckets.count": "743",
    "search.count": "7"
  },
  {
    "id": "datafeed-low_request_rate",
    "state": "stopped",
    "buckets.count": "1457",
    "search.count": "3"
  },
  {
    "id": "datafeed-response_code_rates",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  },
  {
    "id": "datafeed-url_scanning",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  }
]

Get datafeeds Added in 7.7.0

GET /_cat/ml/datafeeds/{datafeed_id}

Get configuration and usage information about datafeeds. This API returns a maximum of 10,000 datafeeds. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.

Path parameters

  • datafeed_id string Required

    A numerical character string that uniquely identifies the datafeed.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no datafeeds that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.
  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    • id string

      The datafeed identifier.

    • state string

      Values are started, stopped, starting, or stopping.

    • assignment_explanation string

      For started datafeeds only, contains messages relating to the selection of a node.

    • buckets.count string

      The number of buckets processed.

    • search.count string

      The number of searches run by the datafeed.

    • search.time string

      The total time the datafeed spent searching, in milliseconds.

    • search.bucket_avg string

      The average search time per bucket, in milliseconds.

    • search.exp_avg_hour string

      The exponential average search time per hour, in milliseconds.

    • node.id string

      The unique identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.name string

      The name of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.ephemeral_id string

      The ephemeral identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.address string

      The network address of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

GET /_cat/ml/datafeeds/{datafeed_id}
curl \
 --request GET 'http://api.example.com/_cat/ml/datafeeds/{datafeed_id}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/ml/datafeeds?v=true&format=json`.
[
  {
    "id": "datafeed-high_sum_total_sales",
    "state": "stopped",
    "buckets.count": "743",
    "search.count": "7"
  },
  {
    "id": "datafeed-low_request_rate",
    "state": "stopped",
    "buckets.count": "1457",
    "search.count": "3"
  },
  {
    "id": "datafeed-response_code_rates",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  },
  {
    "id": "datafeed-url_scanning",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  }
]

Get anomaly detection jobs Added in 7.7.0

GET /_cat/ml/anomaly_detectors

Get configuration and usage information for anomaly detection jobs. This API returns a maximum of 10,000 jobs. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no jobs that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job had irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job had irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    • id string

      Identifier for the anomaly detection job.

    • state string

      Values are closing, closed, opened, failed, or opening.

    • opened_time string

      For open jobs only, the amount of time the job has been opened.

    • assignment_explanation string

      For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.

    • data.processed_records string

      The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed_record_count is the number of aggregation results processed, not the number of Elasticsearch documents.

    • data.processed_fields string

      The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.

    • data.input_records string

      The number of input documents posted to the anomaly detection job.

    • data.input_fields string

      The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.

    • data.invalid_dates string

      The number of input documents with either a missing date field or a date that could not be parsed.

    • data.missing_fields string

      The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing. If you are using datafeeds or posting data to the job in JSON format, a high missing_field_count is often not an indication of data issues. It is not necessarily a cause for concern.

    • data.out_of_order_timestamps string

      The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.

    • data.empty_buckets string

      The number of buckets which did not contain any data. If your data contains many empty buckets, consider increasing your bucket_span or using functions that are tolerant to gaps in data such as mean, non_null_sum or non_zero_count.

    • data.sparse_buckets string

      The number of buckets that contained few data points compared to the expected number of data points. If your data contains many sparse buckets, consider using a longer bucket_span.

    • data.buckets string

      The total number of buckets processed.

    • data.earliest_record string

      The timestamp of the earliest chronologically input document.

    • data.latest_record string

      The timestamp of the latest chronologically input document.

    • data.last string

      The timestamp at which data was last analyzed, according to server time.

    • data.last_empty_bucket string

      The timestamp of the last bucket that did not contain any data.

    • data.last_sparse_bucket string

      The timestamp of the last bucket that was considered sparse.

    • model.memory_status string

      Values are ok, soft_limit, or hard_limit.

    • model.memory_limit string

      The upper limit for model memory usage, checked on increasing values.

    • model.by_fields string

      The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.over_fields string

      The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.partition_fields string

      The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.bucket_allocation_failures string

      The number of buckets for which new entities in incoming data were not processed due to insufficient model memory. This situation is also signified by a hard_limit: memory_status property value.

    • model.categorization_status string

      Values are ok or warn.

    • model.categorized_doc_count string

      The number of documents that have had a field categorized.

    • model.total_category_count string

      The number of categories created by categorization.

    • model.frequent_category_count string

      The number of categories that match more than 1% of categorized documents.

    • model.rare_category_count string

      The number of categories that match just one categorized document.

    • model.dead_category_count string

      The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.

    • model.failed_category_count string

      The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model_memory_limit. This count does not track which specific categories failed to be created. Therefore you cannot use this value to determine the number of unique categories that were missed.

    • model.log_time string

      The timestamp when the model stats were gathered, according to server time.

    • model.timestamp string

      The timestamp of the last record when the model stats were gathered.

    • forecasts.total string

      The number of individual forecasts currently available for the job. A value of one or more indicates that forecasts exist.

    • forecasts.memory.min string

      The minimum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.max string

      The maximum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.avg string

      The average memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.total string

      The total memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.records.min string

      The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.max string

      The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.avg string

      The average number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.total string

      The total number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.time.min string

      The minimum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.max string

      The maximum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.avg string

      The average runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.total string

      The total runtime in milliseconds for forecasts related to the anomaly detection job.

    • node.id string

      The unique identifier of the assigned node.

    • node.name string

      The name of the assigned node.

    • node.address string

      The network address of the assigned node.

    • buckets.count string

      The number of bucket results produced by the job.

    • buckets.time.total string

      The sum of all bucket processing times, in milliseconds.

    • buckets.time.min string

      The minimum of all bucket processing times, in milliseconds.

    • buckets.time.max string

      The maximum of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg string

      The exponential moving average of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg_hour string

      The exponential moving average of bucket processing times calculated in a one hour time window, in milliseconds.

GET /_cat/ml/anomaly_detectors
curl \
 --request GET 'http://api.example.com/_cat/ml/anomaly_detectors' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json`.
[
  {
    "id": "high_sum_total_sales",
    "s": "closed",
    "dpr": "14022",
    "mb": "1.5mb"
  },
  {
    "id": "low_request_rate",
    "s": "closed",
    "dpr": "1216",
    "mb": "40.5kb"
  },
  {
    "id": "response_code_rates",
    "s": "closed",
    "dpr": "28146",
    "mb": "132.7kb"
  },
  {
    "id": "url_scanning",
    "s": "closed",
    "dpr": "28146",
    "mb": "501.6kb"
  }
]
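
The `h` and `s` parameters can be combined to pick and order columns. A sketch that selects the same columns as the example above but sorts by model size, largest first (`id`, `s`, `dpr`, and `mb` are the aliases used in that example):

curl \
 --request GET 'http://api.example.com/_cat/ml/anomaly_detectors?h=id,s,dpr,mb&s=mb:desc&v=true' \
 --header "Authorization: $API_KEY"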

Get trained models Added in 7.7.0

GET /_cat/ml/trained_models

Get configuration and usage information about inference trained models.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get trained models statistics API.

Query parameters

  • Specifies what to do when the request: contains wildcard expressions and there are no models that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, the API returns an empty array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    A comma-separated list of column names to display.

    Supported values include:

    • create_time (or ct): The time when the trained model was created.
    • created_by (or c, createdBy): Information on the creator of the trained model.
    • data_frame_analytics_id (or df, dataFrameAnalytics, dfid): Identifier for the data frame analytics job that created the model. Only displayed if it is still available.
    • description (or d): The description of the trained model.
    • heap_size (or hs, modelHeapSize): The estimated heap size to keep the trained model in memory.
    • id: Identifier for the trained model.
    • ingest.count (or ic, ingestCount): The total number of documents that are processed by the model.
    • ingest.current (or icurr, ingestCurrent): The total number of documents that are currently being handled by the trained model.
    • ingest.failed (or if, ingestFailed): The total number of failed ingest attempts with the trained model.
    • ingest.pipelines (or ip, ingestPipelines): The total number of ingest pipelines that are referencing the trained model.
    • ingest.time (or it, ingestTime): The total time that is spent processing documents with the trained model.
    • license (or l): The license level of the trained model.
    • operations (or o, modelOperations): The estimated number of operations to use the trained model. This number helps measure the computational complexity of the model.
    • version (or v): The Elasticsearch version number in which the trained model was created.
  • s string | array[string]

    A comma-separated list of column names or aliases used to sort the response.

    Supported values include:

    • create_time (or ct): The time when the trained model was created.
    • created_by (or c, createdBy): Information on the creator of the trained model.
    • data_frame_analytics_id (or df, dataFrameAnalytics, dfid): Identifier for the data frame analytics job that created the model. Only displayed if it is still available.
    • description (or d): The description of the trained model.
    • heap_size (or hs, modelHeapSize): The estimated heap size to keep the trained model in memory.
    • id: Identifier for the trained model.
    • ingest.count (or ic, ingestCount): The total number of documents that are processed by the model.
    • ingest.current (or icurr, ingestCurrent): The total number of documents that are currently being handled by the trained model.
    • ingest.failed (or if, ingestFailed): The total number of failed ingest attempts with the trained model.
    • ingest.pipelines (or ip, ingestPipelines): The total number of ingest pipelines that are referencing the trained model.
    • ingest.time (or it, ingestTime): The total time that is spent processing documents with the trained model.
    • license (or l): The license level of the trained model.
    • operations (or o, modelOperations): The estimated number of operations to use the trained model. This number helps measure the computational complexity of the model.
    • version (or v): The Elasticsearch version number in which the trained model was created.
  • from number

    Skips the specified number of trained models.

  • size number

    The maximum number of trained models to display.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

GET /_cat/ml/trained_models
curl \
 --request GET 'http://api.example.com/_cat/ml/trained_models' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/ml/trained_models?v=true&format=json`.
[
  {
    "id": "ddddd-1580216177138",
    "heap_size": "0b",
    "operations": "196",
    "create_time": "2025-03-25T00:01:38.662Z",
    "type": "pytorch",
    "ingest.pipelines": "0",
    "data_frame.id": "__none__"
  },
  {
    "id": "lang_ident_model_1",
    "heap_size": "1mb",
    "operations": "39629",
    "create_time": "2019-12-05T12:28:34.594Z",
    "type": "lang_ident",
    "ingest.pipelines": "0",
    "data_frame.id": "__none__"
  }
]
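
A sketch that combines the query parameters described above: select a few columns, sort by creation time with the newest first, and cap the output at ten models:

curl \
 --request GET 'http://api.example.com/_cat/ml/trained_models?h=id,heap_size,operations,create_time&s=create_time:desc&size=10&v=true' \
 --header "Authorization: $API_KEY"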

Get trained models Added in 7.7.0

GET /_cat/ml/trained_models/{model_id}

Get configuration and usage information about inference trained models.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get trained models statistics API.

Path parameters

  • model_id string Required

    A unique identifier for the trained model.

Query parameters

  • Specifies what to do when the request: contains wildcard expressions and there are no models that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, the API returns an empty array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    A comma-separated list of column names to display.

    Supported values include:

    • create_time (or ct): The time when the trained model was created.
    • created_by (or c, createdBy): Information on the creator of the trained model.
    • data_frame_analytics_id (or df, dataFrameAnalytics, dfid): Identifier for the data frame analytics job that created the model. Only displayed if it is still available.
    • description (or d): The description of the trained model.
    • heap_size (or hs, modelHeapSize): The estimated heap size to keep the trained model in memory.
    • id: Identifier for the trained model.
    • ingest.count (or ic, ingestCount): The total number of documents that are processed by the model.
    • ingest.current (or icurr, ingestCurrent): The total number of documents that are currently being handled by the trained model.
    • ingest.failed (or if, ingestFailed): The total number of failed ingest attempts with the trained model.
    • ingest.pipelines (or ip, ingestPipelines): The total number of ingest pipelines that are referencing the trained model.
    • ingest.time (or it, ingestTime): The total time that is spent processing documents with the trained model.
    • license (or l): The license level of the trained model.
    • operations (or o, modelOperations): The estimated number of operations to use the trained model. This number helps measure the computational complexity of the model.
    • version (or v): The Elasticsearch version number in which the trained model was created.
  • s string | array[string]

    A comma-separated list of column names or aliases used to sort the response.

    Supported values include:

    • create_time (or ct): The time when the trained model was created.
    • created_by (or c, createdBy): Information on the creator of the trained model.
    • data_frame_analytics_id (or df, dataFrameAnalytics, dfid): Identifier for the data frame analytics job that created the model. Only displayed if it is still available.
    • description (or d): The description of the trained model.
    • heap_size (or hs, modelHeapSize): The estimated heap size to keep the trained model in memory.
    • id: Identifier for the trained model.
    • ingest.count (or ic, ingestCount): The total number of documents that are processed by the model.
    • ingest.current (or icurr, ingestCurrent): The total number of documents that are currently being handled by the trained model.
    • ingest.failed (or if, ingestFailed): The total number of failed ingest attempts with the trained model.
    • ingest.pipelines (or ip, ingestPipelines): The total number of ingest pipelines that are referencing the trained model.
    • ingest.time (or it, ingestTime): The total time that is spent processing documents with the trained model.
    • license (or l): The license level of the trained model.
    • operations (or o, modelOperations): The estimated number of operations to use the trained model. This number helps measure the computational complexity of the model.
    • version (or v): The Elasticsearch version number in which the trained model was created.
  • from number

    Skips the specified number of trained models.

  • size number

    The maximum number of trained models to display.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

GET /_cat/ml/trained_models/{model_id}
curl \
 --request GET 'http://api.example.com/_cat/ml/trained_models/{model_id}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/ml/trained_models?v=true&format=json`.
[
  {
    "id": "ddddd-1580216177138",
    "heap_size": "0b",
    "operations": "196",
    "create_time": "2025-03-25T00:01:38.662Z",
    "type": "pytorch",
    "ingest.pipelines": "0",
    "data_frame.id": "__none__"
  },
  {
    "id": "lang_ident_model_1",
    "heap_size": "1mb",
    "operations": "39629",
    "create_time": "2019-12-05T12:28:34.594Z",
    "type": "lang_ident",
    "ingest.pipelines": "0",
    "data_frame.id": "__none__"
  }
]
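
To inspect a single model, put its identifier in the path. A sketch that uses `lang_ident_model_1` from the sample response above and selects the `id`, `heap_size`, and `license` columns:

curl \
 --request GET 'http://api.example.com/_cat/ml/trained_models/lang_ident_model_1?h=id,heap_size,license&v=true' \
 --header "Authorization: $API_KEY"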

Get plugin information

GET /_cat/plugins

Get a list of plugins running on each node of a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • Include bootstrap plugins in the response.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • Period to wait for a connection to the master node.

Responses

GET /_cat/plugins
curl \
 --request GET 'http://api.example.com/_cat/plugins' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/plugins?v=true&s=component&h=name,component,version,description&format=json`.
[
  { "name": "U7321H6", "component": "analysis-icu", "version": "8.17.0", "description": "The ICU Analysis plugin integrates the Lucene ICU module into Elasticsearch, adding ICU-related analysis components."},
  {"name": "U7321H6", "component": "analysis-kuromoji",   "verison":  "8.17.0", description: "The Japanese (kuromoji) Analysis plugin integrates Lucene kuromoji analysis module into elasticsearch."},
  {"name" "U7321H6", "component": "analysis-nori", "version":         "8.17.0", "description": "The Korean (nori) Analysis plugin integrates Lucene nori analysis module into elasticsearch."},
  {"name": "U7321H6", "component": "analysis-phonetic",   "verison":  "8.17.0", "description": "The Phonetic Analysis plugin integrates phonetic token filter analysis with elasticsearch."},
  {"name": "U7321H6", "component": "analysis-smartcn",   "verison":  "8.17.0", "description": "Smart Chinese Analysis plugin integrates Lucene Smart Chinese analysis module into elasticsearch."},
  {"name": "U7321H6", "component": "analysis-stempel",   "verison":  "8.17.0", "description": "The Stempel (Polish) Analysis plugin integrates Lucene stempel (polish) analysis module into elasticsearch."},
  {"name": "U7321H6", "component": "analysis-ukrainian",   "verison":  "8.17.0", "description": "The Ukrainian Analysis plugin integrates the Lucene UkrainianMorfologikAnalyzer into elasticsearch."},
  {"name": "U7321H6", "component": "discovery-azure-classic",   "verison":  "8.17.0", "description": "The Azure Classic Discovery plugin allows to use Azure Classic API for the unicast discovery mechanism"},
  {"name": "U7321H6", "component": "discovery-ec2",   "verison":  "8.17.0", "description": "The EC2 discovery plugin allows to use AWS API for the unicast discovery mechanism."},
  {"name": "U7321H6", "component": "discovery-gce",   "verison":  "8.17.0", "description": "The Google Compute Engine (GCE) Discovery plugin allows to use GCE API for the unicast discovery mechanism."},
  {"name": "U7321H6", "component": "mapper-annotated-text",   "verison":  "8.17.0", "description": "The Mapper Annotated_text plugin adds support for text fields with markup used to inject annotation tokens into the index."},
  {"name": "U7321H6", "component": "mapper-murmur3",   "verison":  "8.17.0", "description": "The Mapper Murmur3 plugin allows to compute hashes of a field's values at index-time and to store them in the index."},
  {"name": "U7321H6", "component": "mapper-size",   "verison":  "8.17.0", "description": "The Mapper Size plugin allows document to record their uncompressed size at index time."},
  {"name": "U7321H6", "component": "store-smb",   "verison":  "8.17.0", "description": "The Store SMB plugin adds support for SMB stores."}
]
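
The `s` parameter accepts an `:asc` or `:desc` suffix, as described above. A sketch that sorts the plugin list by component name in descending order:

curl \
 --request GET 'http://api.example.com/_cat/plugins?v=true&s=component:desc&h=name,component,version' \
 --header "Authorization: $API_KEY"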

Get shard recovery information

GET /_cat/recovery

Get information about ongoing and completed shard recoveries. Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or syncing a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing. For data streams, the API returns information about the stream’s backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index recovery API.

Query parameters

  • If true, the response only includes ongoing shard recoveries.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • detailed boolean

    If true, the response includes detailed information about shard recoveries.

  • index string | array[string]

    Comma-separated list or wildcard expression of index names to limit the returned information.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

GET /_cat/recovery
curl \
 --request GET 'http://api.example.com/_cat/recovery' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/recovery?v=true&format=json`. In this example, the source and target nodes are the same because the recovery type is `store`, meaning the shard was read from local storage on node start.
[
  {
    "index": "my-index-000001 ",
    "shard": "0",
    "time": "13ms",
    "type": "store",
    "stage": "done",
    "source_host": "n/a",
    "source_node": "n/a",
    "target_host": "127.0.0.1",
    "target_node": "node-0",
    "repository": "n/a",
    "snapshot": "n/a",
    "files": "0",
    "files_recovered": "0",
    "files_percent": "100.0%",
    "files_total": "13",
    "bytes": "0b",
    "bytes_recovered": "0b",
    "bytes_percent": "100.0%",
    "bytes_total": "9928b",
    "translog_ops": "0",
    "translog_ops_recovered": "0",
    "translog_ops_percent": "100.0%"
  }
]
A successful response from `GET _cat/recovery?v=true&h=i,s,t,ty,st,shost,thost,f,fp,b,bp&format=json`. You can retrieve information about an ongoing recovery, for example, when you increase the replica count of an index and bring another node online to host the replicas. In this example, the recovery type is `peer`, meaning the shard recovered from another node. The `files` and `bytes` values are real-time measurements.
[
  {
    "i": "my-index-000001",
    "s": "0",
    "t": "1252ms",
    "ty": "peer",
    "st": "done",
    "shost": "192.168.1.1",
    "thost": "192.168.1.1",
    "f": "0",
    "fp": "100.0%",
    "b": "0b",
    "bp": "100.0%",
  }
]
A successful response from `GET _cat/recovery?v=true&h=i,s,t,ty,st,rep,snap,f,fp,b,bp&format=json`. You can restore backups of an index using the snapshot and restore API. You can use the cat recovery API to get information about a snapshot recovery.
[
  {
    "i": "my-index-000001",
    "s": "0",
    "t": "1978ms",
    "ty": "snapshot",
    "st": "done",
    "rep": "my-repo",
    "snap": "snap-1",
    "f": "79",
    "fp": "8.0%",
    "b": "12086",
    "bp": "9.0%"
  }
]
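
A sketch that watches only recoveries still in progress, with per-file detail (this assumes `active_only` is the name of the ongoing-recoveries flag described above; `detailed` is documented in this section):

curl \
 --request GET 'http://api.example.com/_cat/recovery?v=true&active_only=true&detailed=true&format=json' \
 --header "Authorization: $API_KEY"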

Get shard recovery information

GET /_cat/recovery/{index}

Get information about ongoing and completed shard recoveries. Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or syncing a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing. For data streams, the API returns information about the stream’s backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index recovery API.

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • If true, the response only includes ongoing shard recoveries.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • detailed boolean

    If true, the response includes detailed information about shard recoveries.

  • index string | array[string]

    Comma-separated list or wildcard expression of index names to limit the returned information.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

GET /_cat/recovery/{index}
curl \
 --request GET 'http://api.example.com/_cat/recovery/{index}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/recovery?v=true&format=json`. In this example, the source and target nodes are the same because the recovery type is `store`, meaning the shard was read from local storage on node start.
[
  {
    "index": "my-index-000001 ",
    "shard": "0",
    "time": "13ms",
    "type": "store",
    "stage": "done",
    "source_host": "n/a",
    "source_node": "n/a",
    "target_host": "127.0.0.1",
    "target_node": "node-0",
    "repository": "n/a",
    "snapshot": "n/a",
    "files": "0",
    "files_recovered": "0",
    "files_percent": "100.0%",
    "files_total": "13",
    "bytes": "0b",
    "bytes_recovered": "0b",
    "bytes_percent": "100.0%",
    "bytes_total": "9928b",
    "translog_ops": "0",
    "translog_ops_recovered": "0",
    "translog_ops_percent": "100.0%"
  }
]
A successful response from `GET _cat/recovery?v=true&h=i,s,t,ty,st,shost,thost,f,fp,b,bp&format=json`. You can retrieve information about an ongoing recovery, for example, when you increase the replica count of an index and bring another node online to host the replicas. In this example, the recovery type is `peer`, meaning the shard recovered from another node. The `files` and `bytes` values are real-time measurements.
[
  {
    "i": "my-index-000001",
    "s": "0",
    "t": "1252ms",
    "ty": "peer",
    "st": "done",
    "shost": "192.168.1.1",
    "thost": "192.168.1.1",
    "f": "0",
    "fp": "100.0%",
    "b": "0b",
    "bp": "100.0%",
  }
]
A successful response from `GET _cat/recovery?v=true&h=i,s,t,ty,st,rep,snap,f,fp,b,bp&format=json`. You can restore backups of an index using the snapshot and restore API. You can use the cat recovery API to get information about a snapshot recovery.
[
  {
    "i": "my-index-000001",
    "s": "0",
    "t": "1978ms",
    "ty": "snapshot",
    "st": "done",
    "rep": "my-repo",
    "snap": "snap-1",
    "f": "79",
    "fp": "8.0%",
    "b": "12086",
    "bp": "9.0%"
  }
]
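
To limit the output to particular indices, put them in the path; wildcards are supported, as noted above. A sketch:

curl \
 --request GET 'http://api.example.com/_cat/recovery/my-index-000001,my-other-index-*?v=true&format=json' \
 --header "Authorization: $API_KEY"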

Get snapshot repository information Added in 2.1.0

GET /_cat/repositories

Get a list of snapshot repositories for a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot repository API.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • Period to wait for a connection to the master node.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      The unique repository identifier.

    • type string

      The repository type.

GET /_cat/repositories
curl \
 --request GET 'http://api.example.com/_cat/repositories' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/repositories?v=true&format=json`.
[
  {
    "id": "repo1",
    "type": "fs"
  },
  {
    "id": "repo2",
    "type": "s3"
  }
]
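
A sketch that sorts the repository table by type using the `s` parameter described above:

curl \
 --request GET 'http://api.example.com/_cat/repositories?v=true&s=type' \
 --header "Authorization: $API_KEY"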

Get shard information

GET /_cat/shards

Get information about the shards in a cluster. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • Period to wait for a connection to the master node.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

GET /_cat/shards
curl \
 --request GET 'http://api.example.com/_cat/shards' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/shards?format=json`.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  }
]
A successful response from `GET _cat/shards/my-index-*?format=json`. It returns information for any data streams or indices beginning with `my-index-`.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  }
]
A successful response from `GET _cat/shards?format=json`. The `RELOCATING` value in the `state` column indicates the index shard is relocating.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "RELOCATING",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA -> -> 192.168.56.30 bGG90GE"
  }
]
A successful response from `GET _cat/shards?format=json`. Before a shard is available for use, it goes through an `INITIALIZING` state. You can use the cat shards API to see which shards are initializing.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "INITIALIZING",
    "docs": "0",
    "store": "14.3mb",
    "dataset": "249b",
    "ip": "192.168.56.30",
    "node": "bGG90GE"
  }
]
A successful response from `GET _cat/shards?h=index,shard,prirep,state,unassigned.reason&format=json`. It includes the `unassigned.reason` column, which indicates why a shard is unassigned.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.10 H5dfFeA"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.30 bGG90GE"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.20 I8hydUG"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "UNASSIGNED",
    "unassigned.reason": "ALLOCATION_FAILED"
  }
]
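
The `bytes` parameter controls the unit used for size columns such as `store`. A sketch that reports store sizes in megabytes:

curl \
 --request GET 'http://api.example.com/_cat/shards?v=true&h=index,shard,prirep,state,store&bytes=mb' \
 --header "Authorization: $API_KEY"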

Get shard information

GET /_cat/shards/{index}

Get information about the shards in a cluster. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • Period to wait for a connection to the master node.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

GET /_cat/shards/{index}
curl \
 --request GET 'http://api.example.com/_cat/shards/{index}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/shards?format=json`.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  }
]
A successful response from `GET _cat/shards/my-index-*?format=json`. It returns information for any data streams or indices beginning with `my-index-`.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  }
]
A successful response from `GET _cat/shards?format=json`. The `RELOCATING` value in the `state` column indicates the index shard is relocating.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "RELOCATING",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA -> -> 192.168.56.30 bGG90GE"
  }
]
A successful response from `GET _cat/shards?format=json`. Before a shard is available for use, it goes through an `INITIALIZING` state. You can use the cat shards API to see which shards are initializing.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "INITIALIZING",
    "docs": "0",
    "store": "14.3mb",
    "dataset": "249b",
    "ip": "192.168.56.30",
    "node": "bGG90GE"
  }
]
A successful response from `GET _cat/shards?h=index,shard,prirep,state,unassigned.reason&format=json`. It includes the `unassigned.reason` column, which indicates why a shard is unassigned.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.10 H5dfFeA"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.30 bGG90GE"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.20 I8hydUG"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "UNASSIGNED",
    "unassigned.reason": "ALLOCATION_FAILED"
  }
]
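
A comma-separated list of targets also works in the path. A sketch:

curl \
 --request GET 'http://api.example.com/_cat/shards/my-index-000001,my-index-000002?v=true' \
 --header "Authorization: $API_KEY"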

Get snapshot information Added in 2.1.0

GET /_cat/snapshots

Get information about the snapshots stored in one or more repositories. A snapshot is a backup of an index or running Elasticsearch cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot API.

Query parameters

  • If true, the response does not include information from unavailable snapshots.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • Period to wait for a connection to the master node.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      The unique identifier for the snapshot.

    • The repository name.

    • status string

      The state of the snapshot process. Returned values include:

      • FAILED: The snapshot process failed.
      • INCOMPATIBLE: The snapshot process is incompatible with the current cluster version.
      • IN_PROGRESS: The snapshot process started but has not completed.
      • PARTIAL: The snapshot process completed with a partial success.
      • SUCCESS: The snapshot process completed with a full success.

    • start_epoch number | string

      Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • start_time string | object

      A time of day, expressed either as hh:mm, noon, midnight, or an hour/minutes structure.

    • end_epoch number | string

      Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • end_time string

      Time of day, expressed as HH:MM:SS

    • duration string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • indices string

      The number of indices in the snapshot.

    • The number of successful shards in the snapshot.

    • The number of failed shards in the snapshot.

    • The total number of shards in the snapshot.

    • reason string

      The reason for any snapshot failures.

GET /_cat/snapshots
curl \
 --request GET 'http://api.example.com/_cat/snapshots' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/snapshots/repo1?v=true&s=id&format=json`.
[
  {
    "id": "snap1",
    "repository": "repo1",
    "status": "FAILED",
    "start_epoch": "1445616705",
    "start_time": "18:11:45",
    "end_epoch": "1445616978",
    "end_time": "18:16:18",
    "duration": "4.6m",
    "indices": "1",
    "successful_shards": "4",
    "failed_shards": "1",
    "total_shards": "5"
  },
  {
    "id": "snap2",
    "repository": "repo1",
    "status": "SUCCESS",
    "start_epoch": "1445634298",
    "start_time": "23:04:58",
    "end_epoch": "1445634672",
    "end_time": "23:11:12",
    "duration": "6.2m",
    "indices": "2",
    "successful_shards": "10",
    "failed_shards": "0",
    "total_shards": "10"
  }
]
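
A sketch that skips unavailable snapshots and displays times in seconds (this assumes `ignore_unavailable` is the name of the unavailable-snapshots flag described above):

curl \
 --request GET 'http://api.example.com/_cat/snapshots?v=true&ignore_unavailable=true&time=s' \
 --header "Authorization: $API_KEY"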

Get thread pool statistics

GET /_cat/thread_pool

Get thread pool statistics for each node in a cluster. Returned information includes all built-in thread pools and custom thread pools. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • Period to wait for a connection to the master node.

Responses

GET /_cat/thread_pool
curl \
 --request GET 'http://api.example.com/_cat/thread_pool' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/thread_pool?format=json`.
[
  {
    "node_name": "node-0",
    "name": "analyze",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  },
  {
    "node_name": "node-0",
    "name": "fetch_shard_started",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  },
  {
    "node_name": "node-0",
    "name": "fetch_shard_store",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  },
  {
    "node_name": "node-0",
    "name": "flush",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  },
  {
    "node_name": "node-0",
    "name": "write",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  }
]
A successful response from `GET /_cat/thread_pool/generic?v=true&h=id,name,active,rejected,completed&format=json`. It returns the `id`, `name`, `active`, `rejected`, and `completed` columns. It also limits returned information to the generic thread pool.
[
  {
    "id": "0EWUhXeBQtaVGlexUeVwMg",
    "name": "generic",
    "active": "0",
    "rejected": "0",
    "completed": "70"
  }
]
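
To spot pools that are rejecting work, you can sort by the `rejected` column in descending order. A sketch:

curl \
 --request GET 'http://api.example.com/_cat/thread_pool?v=true&h=node_name,name,rejected&s=rejected:desc' \
 --header "Authorization: $API_KEY"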

Get thread pool statistics

GET /_cat/thread_pool/{thread_pool_patterns}

Get thread pool statistics for each node in a cluster. Returned information includes all built-in thread pools and custom thread pools. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

Path parameters

  • thread_pool_patterns string | array[string] Required

    A comma-separated list of thread pool names used to limit the request. Accepts wildcard expressions.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • Period to wait for a connection to the master node.

Responses

GET /_cat/thread_pool/{thread_pool_patterns}
curl \
 --request GET 'http://api.example.com/_cat/thread_pool/{thread_pool_patterns}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/thread_pool?format=json`.
[
  {
    "node_name": "node-0",
    "name": "analyze",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  },
  {
    "node_name": "node-0",
    "name": "fetch_shard_started",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  },
  {
    "node_name": "node-0",
    "name": "fetch_shard_store",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  },
  {
    "node_name": "node-0",
    "name": "flush",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  },
  {
    "node_name": "node-0",
    "name": "write",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  }
]
A successful response from `GET /_cat/thread_pool/generic?v=true&h=id,name,active,rejected,completed&format=json`. It returns the `id`, `name`, `active`, `rejected`, and `completed` columns. It also limits returned information to the generic thread pool.
[
  {
    "id": "0EWUhXeBQtaVGlexUeVwMg",
    "name": "generic",
    "active": "0",
    "rejected": "0",
    "completed": "70"
  }
]
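
Wildcard expressions are accepted in the path, as noted above. A sketch that matches all `fetch_shard` pools:

curl \
 --request GET 'http://api.example.com/_cat/thread_pool/fetch_shard*?v=true' \
 --header "Authorization: $API_KEY"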

Get transform information Added in 7.7.0

GET /_cat/transforms/{transform_id}

Get configuration and usage information about transforms.

CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get transform statistics API.

Path parameters

  • transform_id string Required

    A transform identifier or a wildcard expression. If you do not specify one of these options, the API returns information for all transforms.

Query parameters

  • Specifies what to do when the request: contains wildcard expressions and there are no transforms that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, it returns an empty transforms array when there are no matches and the subset of results when there are partial matches. If false, the request returns a 404 status code when there are no matches or only partial matches.

  • from number

    Skips the specified number of transforms.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • changes_last_detection_time (or cldt): The timestamp when changes were last detected in the source indices.
    • checkpoint (or cp): The sequence number for the checkpoint.
    • checkpoint_duration_time_exp_avg (or cdtea, checkpointTimeExpAvg): Exponential moving average of the duration of the checkpoint, in milliseconds.
    • checkpoint_progress (or c, checkpointProgress): The progress of the next checkpoint that is currently in progress.
    • create_time (or ct, createTime): The time the transform was created.
    • delete_time (or dtime): The amount of time spent deleting, in milliseconds.
    • description (or d): The description of the transform.
    • dest_index (or di, destIndex): The destination index for the transform. The mappings of the destination index are deduced based on the source fields when possible. If alternate mappings are required, use the Create index API prior to starting the transform.
    • documents_deleted (or docd): The number of documents that have been deleted from the destination index due to the retention policy for this transform.
    • documents_indexed (or doci): The number of documents that have been indexed into the destination index for the transform.
    • docs_per_second (or dps): Specifies a limit on the number of input documents per second. This setting throttles the transform by adding a wait time between search requests. The default value is null, which disables throttling.
    • documents_processed (or docp): The number of documents that have been processed from the source index of the transform.
    • frequency (or f): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.
    • id: Identifier for the transform.
    • index_failure (or if): The number of indexing failures.
    • index_time (or itime): The amount of time spent indexing, in milliseconds.
    • index_total (or it): The number of index operations.
    • indexed_documents_exp_avg (or idea): Exponential moving average of the number of new documents that have been indexed.
    • last_search_time (or lst, lastSearchTime): The timestamp of the last search in the source indices. This field is only shown if the transform is running.
    • max_page_search_size (or mpsz): Defines the initial page size to use for the composite aggregation for each checkpoint. If circuit breaker exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is 10 and the maximum is 65,536. The default value is 500.
    • pages_processed (or pp): The number of search or bulk index operations processed. Documents are processed in batches instead of individually.
    • pipeline (or p): The unique identifier for an ingest pipeline.
    • processed_documents_exp_avg (or pdea): Exponential moving average of the number of documents that have been processed.
    • processing_time (or pt): The amount of time spent processing results, in milliseconds.
    • reason (or r): If a transform has a failed state, this property provides details about the reason for the failure.
    • search_failure (or sf): The number of search failures.
    • search_time (or stime): The amount of time spent searching, in milliseconds.
    • search_total (or st): The number of search operations on the source index for the transform.
    • source_index (or si, sourceIndex): The source indices for the transform. It can be a single index, an index pattern (for example, "my-index-*"), an array of indices (for example, ["my-index-000001", "my-index-000002"]), or an array of index patterns (for example, ["my-index-*", "my-other-index-*"]). For remote indices use the syntax "remote_name:index_name". If any indices are in remote clusters then the master node and at least one transform node must have the remote_cluster_client node role.
    • state (or s): The status of the transform, which can be one of the following values:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.
    • transform_type (or tt): Indicates the type of transform: batch or continuous.

    • trigger_count (or tc): The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • version (or v): The version of Elasticsearch that existed on the node when the transform was created.

  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • changes_last_detection_time (or cldt): The timestamp when changes were last detected in the source indices.
    • checkpoint (or cp): The sequence number for the checkpoint.
    • checkpoint_duration_time_exp_avg (or cdtea, checkpointTimeExpAvg): Exponential moving average of the duration of the checkpoint, in milliseconds.
    • checkpoint_progress (or c, checkpointProgress): The progress of the next checkpoint that is currently in progress.
    • create_time (or ct, createTime): The time the transform was created.
    • delete_time (or dtime): The amount of time spent deleting, in milliseconds.
    • description (or d): The description of the transform.
    • dest_index (or di, destIndex): The destination index for the transform. The mappings of the destination index are deduced based on the source fields when possible. If alternate mappings are required, use the Create index API prior to starting the transform.
    • documents_deleted (or docd): The number of documents that have been deleted from the destination index due to the retention policy for this transform.
    • documents_indexed (or doci): The number of documents that have been indexed into the destination index for the transform.
    • docs_per_second (or dps): Specifies a limit on the number of input documents per second. This setting throttles the transform by adding a wait time between search requests. The default value is null, which disables throttling.
    • documents_processed (or docp): The number of documents that have been processed from the source index of the transform.
    • frequency (or f): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.
    • id: Identifier for the transform.
    • index_failure (or if): The number of indexing failures.
    • index_time (or itime): The amount of time spent indexing, in milliseconds.
    • index_total (or it): The number of index operations.
    • indexed_documents_exp_avg (or idea): Exponential moving average of the number of new documents that have been indexed.
    • last_search_time (or lst, lastSearchTime): The timestamp of the last search in the source indices. This field is only shown if the transform is running.
    • max_page_search_size (or mpsz): Defines the initial page size to use for the composite aggregation for each checkpoint. If circuit breaker exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is 10 and the maximum is 65,536. The default value is 500.
    • pages_processed (or pp): The number of search or bulk index operations processed. Documents are processed in batches instead of individually.
    • pipeline (or p): The unique identifier for an ingest pipeline.
    • processed_documents_exp_avg (or pdea): Exponential moving average of the number of documents that have been processed.
    • processing_time (or pt): The amount of time spent processing results, in milliseconds.
    • reason (or r): If a transform has a failed state, this property provides details about the reason for the failure.
    • search_failure (or sf): The number of search failures.
    • search_time (or stime): The amount of time spent searching, in milliseconds.
    • search_total (or st): The number of search operations on the source index for the transform.
    • source_index (or si, sourceIndex): The source indices for the transform. It can be a single index, an index pattern (for example, "my-index-*"), an array of indices (for example, ["my-index-000001", "my-index-000002"]), or an array of index patterns (for example, ["my-index-*", "my-other-index-*"]). For remote indices use the syntax "remote_name:index_name". If any indices are in remote clusters then the master node and at least one transform node must have the remote_cluster_client node role.
    • state (or s): The status of the transform, which can be one of the following values:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.
    • transform_type (or tt): Indicates the type of transform: batch or continuous.

    • trigger_count (or tc): The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • version (or v): The version of Elasticsearch that existed on the node when the transform was created.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • size number

    The maximum number of transforms to obtain.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • state string

      The status of the transform. Returned values include:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.

    • The sequence number for the checkpoint.

    • The number of documents that have been processed from the source index of the transform.

    • checkpoint_progress string | null

      The progress of the next checkpoint that is currently in progress.

    • last_search_time string | null

      The timestamp of the last search in the source indices. This field is shown only if the transform is running.

    • changes_last_detection_time string | null

      The timestamp when changes were last detected in the source indices.

    • The time the transform was created.

    • version string
    • The source indices for the transform.

    • The destination index for the transform.

    • pipeline string

      The unique identifier for the ingest pipeline.

    • The description of the transform.

    • The type of transform: batch or continuous.

    • The interval between checks for changes in the source indices when the transform is running continuously.

    • The initial page size that is used for the composite aggregation for each checkpoint.

    • The number of input documents per second.

    • reason string

      If a transform has a failed state, these details describe the reason for failure.

    • The total number of search operations on the source index for the transform.

    • The total number of search failures.

    • The total amount of search time, in milliseconds.

    • The total number of index operations done by the transform.

    • The total number of indexing failures.

    • The total time spent indexing documents, in milliseconds.

    • The number of documents that have been indexed into the destination index for the transform.

    • The total time spent deleting documents, in milliseconds.

    • The number of documents deleted from the destination index due to the retention policy for the transform.

    • The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • The number of search or bulk index operations processed. Documents are processed in batches instead of individually.

    • The total time spent processing results, in milliseconds.

    • The exponential moving average of the duration of the checkpoint, in milliseconds.

    • The exponential moving average of the number of new documents that have been indexed.

    • The exponential moving average of the number of documents that have been processed.

GET /_cat/transforms/{transform_id}
curl \
 --request GET 'http://api.example.com/_cat/transforms/{transform_id}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/transforms?v=true&format=json`.
[
  {
    "id" : "ecommerce_transform",
    "state" : "started",
    "checkpoint" : "1",
    "documents_processed" : "705",
    "checkpoint_progress" : "100.00",
    "changes_last_detection_time" : null
  }
]
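
To limit the output to particular columns, you can list column names or their short aliases in the cat APIs' standard h query parameter. A sketch; the exact column selection here is illustrative:

GET /_cat/transforms?v=true&h=id,state,checkpoint_progress,trigger_count&format=json
curl \
 --request GET 'http://api.example.com/_cat/transforms?v=true&h=id,state,checkpoint_progress,trigger_count&format=json' \
 --header "Authorization: $API_KEY"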

Get remote cluster information Added in 6.1.0

GET /_remote/info

Get information about configured remote clusters. The API returns connection and endpoint information keyed by the configured remote cluster alias.


This API returns information that reflects the current state on the local cluster. The connected field does not necessarily reflect whether a remote cluster is down or unavailable, only whether there is currently an open connection to it. Elasticsearch does not spontaneously try to reconnect to a disconnected remote cluster. To trigger a reconnection, attempt a cross-cluster search, ES|QL cross-cluster search, or try the resolve cluster endpoint.

External documentation

Responses

GET /_remote/info
curl \
 --request GET 'http://api.example.com/_remote/info' \
 --header "Authorization: $API_KEY"
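Response examples (200)
An illustrative response shape for a single remote cluster configured in sniff mode; the alias `cluster_one` and all values here are hypothetical:
{
  "cluster_one": {
    "connected": true,
    "mode": "sniff",
    "seeds": ["127.0.0.1:9300"],
    "num_nodes_connected": 1,
    "max_connections_per_cluster": 3,
    "initial_connect_timeout": "30s",
    "skip_unavailable": false
  }
}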

Get the cluster state Added in 1.3.0

GET /_cluster/state/{metric}

Get comprehensive information about the state of the cluster.

The cluster state is an internal data structure which keeps track of a variety of information needed by every node, including: the identity and attributes of the other nodes in the cluster; cluster-wide settings; index metadata, including the mapping and settings for each index; and the location and status of every shard copy in the cluster.

The elected master node ensures that every node in the cluster has a copy of the same cluster state. This API lets you retrieve a representation of this internal state for debugging or diagnostic purposes. You may need to consult the Elasticsearch source code to determine the precise meaning of the response.

By default the API will route requests to the elected master node since this node is the authoritative source of cluster states. You can also retrieve the cluster state held on the node handling the API request by adding the ?local=true query parameter.

Elasticsearch may need to expend significant effort to compute a response to this API in larger clusters, and the response may comprise a very large quantity of data. If you use this API repeatedly, your cluster may become unstable.

WARNING: The response is a representation of an internal data structure. Its format is not subject to the same compatibility guarantees as other more stable APIs and may change from version to version. Do not query this API using external monitoring tools. Instead, obtain the information you require using other more stable cluster APIs.

Path parameters

  • metric string | array[string] Required

    Limit the information returned to the specified metrics

Query parameters

  • Whether to ignore a wildcard indices expression that resolves into no concrete indices. (This includes the _all string or when no indices have been specified.)

  • expand_wildcards string | array[string]

    Whether to expand wildcard expression to concrete indices that are open, closed or both.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • Return settings in flat format (default: false)

  • Whether specified concrete indices should be ignored when unavailable (missing or closed)

  • local boolean

    Return local information; do not retrieve the state from the master node (default: false)

  • Specify timeout for connection to master

  • Wait for the metadata version to be equal or greater than the specified metadata version

  • The maximum time to wait for wait_for_metadata_version before timing out

Responses

GET /_cluster/state/{metric}
curl \
 --request GET 'http://api.example.com/_cluster/state/{metric}' \
 --header "Authorization: $API_KEY"
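
To keep the response manageable, you can restrict the metrics in the path and read the state held on the node handling the request via the documented local parameter. A sketch; the metric names metadata and nodes are examples of cluster state metric groups:

GET /_cluster/state/metadata,nodes?local=true
curl \
 --request GET 'http://api.example.com/_cluster/state/metadata,nodes?local=true' \
 --header "Authorization: $API_KEY"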

Get cluster statistics Added in 1.3.0

GET /_cluster/stats/nodes/{node_id}

Get basic index metrics (shard numbers, store size, memory usage) and information about the current nodes that form the cluster (number, roles, OS, JVM versions, memory usage, CPU, and installed plugins).

Path parameters

  • node_id string | array[string] Required

    Comma-separated list of node filters used to limit returned information. Defaults to all nodes in the cluster.

Query parameters

  • Include remote cluster data in the response

  • timeout string

    Period to wait for each node to respond. If a node does not respond before its timeout expires, the response does not include its stats. However, timed out nodes are included in the response’s _nodes.failed property. Defaults to no timeout.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object
      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]
        Hide failures attributes Show failures attributes object
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • cluster_uuid string Required
    • indices object Required
      Hide indices attributes Show indices attributes object
      • analysis object Required
        Hide analysis attributes Show analysis attributes object
        • analyzer_types array[object] Required

          Contains statistics about analyzer types used in selected nodes.

          Hide analyzer_types attributes Show analyzer_types attributes object
          • name string Required
          • count number Required

            The number of occurrences of the field type in selected nodes.

          • index_count number Required

            The number of indices containing the field type in selected nodes.

          • For dense_vector field types, number of indexed vector types in selected nodes.

          • For dense_vector field types, the maximum dimension of all indexed vector types in selected nodes.

          • For dense_vector field types, the minimum dimension of all indexed vector types in selected nodes.

          • The number of fields that declare a script.

        • built_in_analyzers array[object] Required

          Contains statistics about built-in analyzers used in selected nodes.

          Hide built_in_analyzers attributes Show built_in_analyzers attributes object
          • name string Required
          • count number Required

            The number of occurrences of the field type in selected nodes.

          • index_count number Required

            The number of indices containing the field type in selected nodes.

          • For dense_vector field types, number of indexed vector types in selected nodes.

          • For dense_vector field types, the maximum dimension of all indexed vector types in selected nodes.

          • For dense_vector field types, the minimum dimension of all indexed vector types in selected nodes.

          • The number of fields that declare a script.

        • built_in_char_filters array[object] Required

          Contains statistics about built-in character filters used in selected nodes.

          Hide built_in_char_filters attributes Show built_in_char_filters attributes object
          • name string Required
          • count number Required

            The number of occurrences of the field type in selected nodes.

          • index_count number Required

            The number of indices containing the field type in selected nodes.

          • For dense_vector field types, number of indexed vector types in selected nodes.

          • For dense_vector field types, the maximum dimension of all indexed vector types in selected nodes.

          • For dense_vector field types, the minimum dimension of all indexed vector types in selected nodes.

          • The number of fields that declare a script.

        • built_in_filters array[object] Required

          Contains statistics about built-in token filters used in selected nodes.

          Hide built_in_filters attributes Show built_in_filters attributes object
          • name string Required
          • count number Required

            The number of occurrences of the field type in selected nodes.

          • index_count number Required

            The number of indices containing the field type in selected nodes.

          • For dense_vector field types, number of indexed vector types in selected nodes.

          • For dense_vector field types, the maximum dimension of all indexed vector types in selected nodes.

          • For dense_vector field types, the minimum dimension of all indexed vector types in selected nodes.

          • The number of fields that declare a script.

        • built_in_tokenizers array[object] Required

          Contains statistics about built-in tokenizers used in selected nodes.

          Hide built_in_tokenizers attributes Show built_in_tokenizers attributes object
          • name string Required
          • count number Required

            The number of occurrences of the field type in selected nodes.

          • index_count number Required

            The number of indices containing the field type in selected nodes.

          • For dense_vector field types, number of indexed vector types in selected nodes.

          • For dense_vector field types, the maximum dimension of all indexed vector types in selected nodes.

          • For dense_vector field types, the minimum dimension of all indexed vector types in selected nodes.

          • The number of fields that declare a script.

        • char_filter_types array[object] Required

          Contains statistics about character filter types used in selected nodes.

          Hide char_filter_types attributes Show char_filter_types attributes object
          • name string Required
          • count number Required

            The number of occurrences of the field type in selected nodes.

          • index_count number Required

            The number of indices containing the field type in selected nodes.

          • For dense_vector field types, number of indexed vector types in selected nodes.

          • For dense_vector field types, the maximum dimension of all indexed vector types in selected nodes.

          • For dense_vector field types, the minimum dimension of all indexed vector types in selected nodes.

          • The number of fields that declare a script.

        • filter_types array[object] Required

          Contains statistics about token filter types used in selected nodes.

          Hide filter_types attributes Show filter_types attributes object
          • name string Required
          • count number Required

            The number of occurrences of the field type in selected nodes.

          • index_count number Required

            The number of indices containing the field type in selected nodes.

          • For dense_vector field types, number of indexed vector types in selected nodes.

          • For dense_vector field types, the maximum dimension of all indexed vector types in selected nodes.

          • For dense_vector field types, the minimum dimension of all indexed vector types in selected nodes.

          • The number of fields that declare a script.

        • tokenizer_types array[object] Required

          Contains statistics about tokenizer types used in selected nodes.

          Hide tokenizer_types attributes Show tokenizer_types attributes object
          • name string Required
          • count number Required

            The number of occurrences of the field type in selected nodes.

          • index_count number Required

            The number of indices containing the field type in selected nodes.

          • For dense_vector field types, number of indexed vector types in selected nodes.

          • For dense_vector field types, the maximum dimension of all indexed vector types in selected nodes.

          • For dense_vector field types, the minimum dimension of all indexed vector types in selected nodes.

          • The number of fields that declare a script.

      • completion object Required
        Hide completion attributes Show completion attributes object
      • count number Required

        Total number of indices with shards assigned to selected nodes.

      • docs object Required
        Hide docs attributes Show docs attributes object
        • count number Required

          Total number of non-deleted documents across all primary shards assigned to selected nodes. This number is based on documents in Lucene segments and may include documents from nested fields.

        • deleted number

          Total number of deleted documents across all primary shards assigned to selected nodes. This number is based on documents in Lucene segments. Elasticsearch reclaims the disk space of deleted Lucene documents when a segment is merged.

      • fielddata object Required
        Hide fielddata attributes Show fielddata attributes object
      • query_cache object Required
        Hide query_cache attributes Show query_cache attributes object
        • cache_count number Required

          Total number of entries added to the query cache across all shards assigned to selected nodes. This number includes current and evicted entries.

        • cache_size number Required

          Total number of entries currently in the query cache across all shards assigned to selected nodes.

        • evictions number Required

          Total number of query cache evictions across all shards assigned to selected nodes.

        • hit_count number Required

          Total count of query cache hits across all shards assigned to selected nodes.

        • memory_size_in_bytes number Required

          Total amount, in bytes, of memory used for the query cache across all shards assigned to selected nodes.

        • miss_count number Required

          Total count of query cache misses across all shards assigned to selected nodes.

        • total_count number Required

          Total count of hits and misses in the query cache across all shards assigned to selected nodes.

      • segments object Required
        Hide segments attributes Show segments attributes object
      • shards object Required
        Hide shards attributes Show shards attributes object
        • index object
          Hide index attributes Show index attributes object
          • primaries object Required
            Hide primaries attributes Show primaries attributes object
            • avg number Required

              Mean number of shards in an index, counting only shards assigned to selected nodes.

            • max number Required

              Maximum number of shards in an index, counting only shards assigned to selected nodes.

            • min number Required

              Minimum number of shards in an index, counting only shards assigned to selected nodes.

          • replication object Required
            Hide replication attributes Show replication attributes object
            • avg number Required

              Mean number of shards in an index, counting only shards assigned to selected nodes.

            • max number Required

              Maximum number of shards in an index, counting only shards assigned to selected nodes.

            • min number Required

              Minimum number of shards in an index, counting only shards assigned to selected nodes.

          • shards object Required
            Hide shards attributes Show shards attributes object
            • avg number Required

              Mean number of shards in an index, counting only shards assigned to selected nodes.

            • max number Required

              Maximum number of shards in an index, counting only shards assigned to selected nodes.

            • min number Required

              Minimum number of shards in an index, counting only shards assigned to selected nodes.

        • Number of primary shards assigned to selected nodes.

        • Ratio of replica shards to primary shards across all selected nodes.

        • total number

          Total number of shards assigned to selected nodes.

      • store object Required
        Hide store attributes Show store attributes object
      • mappings object Required
        Hide mappings attributes Show mappings attributes object
        • field_types array[object] Required

          Contains statistics about field data types used in selected nodes.

          Hide field_types attributes Show field_types attributes object
          • name string Required
          • count number Required

            The number of occurrences of the field type in selected nodes.

          • index_count number Required

            The number of indices containing the field type in selected nodes.

          • For dense_vector field types, number of indexed vector types in selected nodes.

          • For dense_vector field types, the maximum dimension of all indexed vector types in selected nodes.

          • For dense_vector field types, the minimum dimension of all indexed vector types in selected nodes.

          • The number of fields that declare a script.

        • runtime_field_types array[object]

          Contains statistics about runtime field data types used in selected nodes.

          Hide runtime_field_types attributes Show runtime_field_types attributes object
          • chars_max number Required

            Maximum number of characters for a single runtime field script.

          • chars_total number Required

            Total number of characters for the scripts that define the current runtime field data type.

          • count number Required

            Number of runtime fields mapped to the field data type in selected nodes.

          • doc_max number Required

            Maximum number of accesses to doc_values for a single runtime field script

          • doc_total number Required

            Total number of accesses to doc_values for the scripts that define the current runtime field data type.

          • index_count number Required

            Number of indices containing a mapping of the runtime field data type in selected nodes.

          • lang array[string] Required

            Script languages used for the runtime fields scripts.

          • lines_max number Required

            Maximum number of lines for a single runtime field script.

          • lines_total number Required

            Total number of lines for the scripts that define the current runtime field data type.

          • name string Required
          • scriptless_count number Required

            Number of runtime fields that don’t declare a script.

          • shadowed_count number Required

            Number of runtime fields that shadow an indexed field.

          • source_max number Required

            Maximum number of accesses to _source for a single runtime field script.

          • source_total number Required

            Total number of accesses to _source for the scripts that define the current runtime field data type.

        • Total number of fields in all non-system indices.

        • Total number of fields in all non-system indices, accounting for mapping deduplication.

        • Total size of all mappings, in bytes, after deduplication and compression.

      • versions array[object]

        Contains statistics about analyzers and analyzer components used in selected nodes.

        Hide versions attributes Show versions attributes object
    • nodes object Required
      Hide nodes attributes Show nodes attributes object
      • count object Required
        Hide count attributes Show count attributes object
      • discovery_types object Required

        Contains statistics about the discovery types used by selected nodes.

        Hide discovery_types attribute Show discovery_types attribute object
        • * number Additional properties
      • fs object Required
        Hide fs attributes Show fs attributes object
        • available_in_bytes number Required

          Total number of bytes available to the JVM in file stores across all selected nodes. Depending on operating system or process-level restrictions, this number may be less than nodes.fs.free_in_bytes. This is the actual amount of free disk space the selected Elasticsearch nodes can use.

        • free_in_bytes number Required

          Total number of unallocated bytes in file stores across all selected nodes.

        • total_in_bytes number Required

          Total size, in bytes, of all file stores across all selected nodes.

      • indexing_pressure object Required
        Hide indexing_pressure attribute Show indexing_pressure attribute object
      • ingest object Required
        Hide ingest attributes Show ingest attributes object
        • number_of_pipelines number Required
        • processor_stats object Required
          Hide processor_stats attribute Show processor_stats attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • count number Required
            • current number Required
            • failed number Required
            • time string

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • jvm object Required
        Hide jvm attributes Show jvm attributes object
        • Time unit for milliseconds

        • mem object Required
          Hide mem attributes Show mem attributes object
          • heap_max_in_bytes number Required

            Maximum amount of memory, in bytes, available for use by the heap across all selected nodes.

          • heap_used_in_bytes number Required

            Memory, in bytes, currently in use by the heap across all selected nodes.

        • threads number Required

          Number of active threads in use by JVM across all selected nodes.

        • versions array[object] Required

          Contains statistics about the JVM versions used by selected nodes.

          Hide versions attributes Show versions attributes object
      • network_types object Required
        Hide network_types attributes Show network_types attributes object
        • http_types object Required

          Contains statistics about the HTTP network types used by selected nodes.

          Hide http_types attribute Show http_types attribute object
          • * number Additional properties
        • transport_types object Required

          Contains statistics about the transport network types used by selected nodes.

          Hide transport_types attribute Show transport_types attribute object
          • * number Additional properties
      • os object Required
        Hide os attributes Show os attributes object
        • allocated_processors number Required

          Number of processors used to calculate thread pool size across all selected nodes. This number can be set with the processors setting of a node and defaults to the number of processors reported by the operating system. In both cases, this number will never be larger than 32.

        • architectures array[object]

          Contains statistics about processor architectures (for example, x86_64 or aarch64) used by selected nodes.

          Hide architectures attributes Show architectures attributes object
          • arch string Required

            Name of an architecture used by one or more selected nodes.

          • count number Required

            Number of selected nodes using the architecture.

        • available_processors number Required

          Number of processors available to JVM across all selected nodes.

        • mem object Required
          Hide mem attributes Show mem attributes object
          • Total amount, in bytes, of memory across all selected nodes, using the value of the es.total_memory_bytes system property, instead of the measured total memory, for nodes where that property was set.

          • free_in_bytes number Required

            Amount, in bytes, of free physical memory across all selected nodes.

          • free_percent number Required

            Percentage of free physical memory across all selected nodes.

          • total_in_bytes number Required

            Total amount, in bytes, of physical memory across all selected nodes.

          • used_in_bytes number Required

            Amount, in bytes, of physical memory in use across all selected nodes.

          • used_percent number Required

            Percentage of physical memory in use across all selected nodes.

        • names array[object] Required

          Contains statistics about operating systems used by selected nodes.

          Hide names attributes Show names attributes object
          • count number Required

            Number of selected nodes using the operating system.

          • name string Required
        • pretty_names array[object] Required

          Contains statistics about operating systems used by selected nodes.

          Hide pretty_names attributes Show pretty_names attributes object
          • count number Required

            Number of selected nodes using the operating system.

          • pretty_name string Required
      • packaging_types array[object] Required

        Contains statistics about Elasticsearch distributions installed on selected nodes.

        Hide packaging_types attributes Show packaging_types attributes object
        • count number Required

          Number of selected nodes using the distribution flavor and file type.

        • flavor string Required

          Type of Elasticsearch distribution. This is always default.

        • type string Required

          File type (such as tar or zip) used for the distribution package.

      • plugins array[object] Required

        Contains statistics about plugins and modules installed on selected nodes. If no plugins or modules are installed, this array is empty.

        Hide plugins attributes Show plugins attributes object
      • process object Required
        Hide process attributes Show process attributes object
        • cpu object Required
          Hide cpu attribute Show cpu attribute object
          • percent number Required

            Percentage of CPU used across all selected nodes. Returns -1 if not supported.

        • open_file_descriptors object Required
          Hide open_file_descriptors attributes Show open_file_descriptors attributes object
          • avg number Required

            Average number of concurrently open file descriptors. Returns -1 if not supported.

          • max number Required

            Maximum number of concurrently open file descriptors allowed across all selected nodes. Returns -1 if not supported.

          • min number Required

            Minimum number of concurrently open file descriptors across all selected nodes. Returns -1 if not supported.

      • versions array[string] Required

        Array of Elasticsearch versions used on selected nodes.

    • status string Required

      Values are green, GREEN, yellow, YELLOW, red, or RED.

    • timestamp number Required

      Unix timestamp, in milliseconds, for the last time the cluster statistics were refreshed.

    • ccs object Required
      Hide ccs attributes Show ccs attributes object
      • clusters object

        Contains remote cluster settings and metrics collected from them. The keys are cluster names, and the values are per-cluster data. Only present if include_remotes option is set to true.

        Hide clusters attribute Show clusters attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • cluster_uuid string Required

            The UUID of the remote cluster.

          • mode string Required

            The connection mode used to communicate with the remote cluster.

          • skip_unavailable boolean Required

            The skip_unavailable setting used for this remote cluster.

          • transport_compress string Required

            Transport compression setting used for this remote cluster.

          • status string Required

            Values are green, GREEN, yellow, YELLOW, red, or RED.

          • version array[string] Required

            The list of Elasticsearch versions used by the nodes on the remote cluster.

          • nodes_count number Required

            The total count of nodes in the remote cluster.

          • shards_count number Required

            The total number of shards in the remote cluster.

          • indices_count number Required

            The total number of indices in the remote cluster.

          • Total data set size, in bytes, of all shards assigned to selected nodes.

          • Total data set size of all shards assigned to selected nodes, as a human-readable string.

          • max_heap_in_bytes number Required

            Maximum amount of memory, in bytes, available for use by the heap across the nodes of the remote cluster.

          • max_heap string

            Maximum amount of memory available for use by the heap across the nodes of the remote cluster, as a human-readable string.

          • mem_total_in_bytes number Required

            Total amount, in bytes, of physical memory across the nodes of the remote cluster.

          • Total amount of physical memory across the nodes of the remote cluster, as a human-readable string.

      • _esql object
        Hide _esql attributes Show _esql attributes object
        • total number Required

          The total number of cross-cluster search requests that have been executed by the cluster.

        • success number Required

          The total number of cross-cluster search requests that have been successfully executed by the cluster.

        • skipped number Required

          The total number of cross-cluster search requests (successful or failed) that had at least one remote cluster skipped.

        • took object Required
          Hide took attributes Show took attributes object
          • Time unit for milliseconds

          • Time unit for milliseconds

          • Time unit for milliseconds

        • took_mrt_true object
          Hide took_mrt_true attributes Show took_mrt_true attributes object
          • Time unit for milliseconds

          • Time unit for milliseconds

          • Time unit for milliseconds

        • took_mrt_false object
          Hide took_mrt_false attributes Show took_mrt_false attributes object
          • Time unit for milliseconds

          • Time unit for milliseconds

          • Time unit for milliseconds

        • remotes_per_search_max number Required

          The maximum number of remote clusters that were queried in a single cross-cluster search request.

        • remotes_per_search_avg number Required

          The average number of remote clusters that were queried in a single cross-cluster search request.

        • failure_reasons object Required

          Statistics about the reasons for cross-cluster search request failures. The keys are the failure reason names and the values are the number of requests that failed for that reason.

          Hide failure_reasons attribute Show failure_reasons attribute object
          • * number Additional properties
        • features object Required

          The keys are the names of the search feature, and the values are the number of requests that used that feature. A single request can use more than one feature (for example, both async and wildcard).

          Hide features attribute Show features attribute object
          • * number Additional properties
        • clients object Required

          Statistics about the clients that executed cross-cluster search requests. The keys are the names of the clients, and the values are the number of requests that were executed by that client. Only known clients (such as kibana or elasticsearch) are counted.

          Hide clients attribute Show clients attribute object
          • * number Additional properties
        • clusters object Required

          Statistics about the clusters that were queried in cross-cluster search requests. The keys are cluster names, and the values are per-cluster telemetry data. This also includes the local cluster itself, which uses the name (local).

          Hide clusters attribute Show clusters attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • total number Required

              The total number of successful (not skipped) cross-cluster search requests that were executed against this cluster. This may include requests where partial results were returned, but not requests in which the cluster has been skipped entirely.

            • skipped number Required

              The total number of cross-cluster search requests for which this cluster was skipped.

            • took object Required
GET /_cluster/stats/nodes/{node_id}
curl \
 --request GET 'http://api.example.com/_cluster/stats/nodes/{node_id}' \
 --header "Authorization: $API_KEY"
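
For example, to collect remote cluster telemetry (the ccs.clusters object described above) and bound how long each node may take to respond, a request might combine the include_remotes and timeout parameters; _all is the standard node filter matching all nodes:

GET /_cluster/stats/nodes/_all?include_remotes=true&timeout=30s
curl \
 --request GET 'http://api.example.com/_cluster/stats/nodes/_all?include_remotes=true&timeout=30s' \
 --header "Authorization: $API_KEY"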

Ping the cluster

HEAD /

Get information about whether the cluster is running.

Responses

HEAD /
curl \
 --request HEAD 'http://api.example.com/' \
 --header "Authorization: $API_KEY"
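
Because HEAD requests return no body, a liveness check usually inspects the HTTP status code instead. A minimal sketch using standard curl options, where a 200 status indicates the cluster is running:

curl --head --silent --output /dev/null --write-out "%{http_code}\n" \
 'http://api.example.com/' \
 --header "Authorization: $API_KEY"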

Clear the archived repositories metering Technical preview

DELETE /_nodes/{node_id}/_repositories_metering/{max_archive_version}

Clear the archived repositories metering information in the cluster.

Path parameters

  • node_id string | array[string] Required

    Comma-separated list of node IDs or names used to limit returned information.

  • max_archive_version number Required

    Specifies the maximum archive_version to be cleared from the archive.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object
      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]
        Hide failures attributes Show failures attributes object
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required

      Contains repositories metering information for the nodes selected by the request.

      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • repository_name string Required
        • repository_type string Required

          Repository type.

        • repository_location object Required
          Hide repository_location attributes Show repository_location attributes object
        • Time unit for milliseconds

        • Time unit for milliseconds

        • archived boolean Required

          A flag that tells whether or not this object has been archived. When a repository is closed or updated, the repository metering information is archived and kept for a certain period of time. This allows retrieving the repository metering information of previous repository instantiations.

        • request_counts object Required
          Hide request_counts attributes Show request_counts attributes object
          • Number of Get Blob Properties requests (Azure)

          • GetBlob number

            Number of Get Blob requests (Azure)

          • Number of List Blobs requests (Azure)

          • PutBlob number

            Number of Put Blob requests (Azure)

          • PutBlock number

            Number of Put Block requests (Azure)

          • Number of Put Block List requests (Azure)

          • Number of get object requests (GCP, S3)

          • Number of list objects requests (GCP, S3)

          • Number of insert object requests, including simple, multipart, and resumable uploads. Resumable uploads can perform multiple HTTP requests to insert a single object, but they are counted as a single request since they are billed as an individual operation. (GCP)

          • Number of PutObject requests (S3)

          • Number of Multipart requests, including CreateMultipartUpload, UploadPart and CompleteMultipartUpload requests (S3)

DELETE /_nodes/{node_id}/_repositories_metering/{max_archive_version}
curl \
 --request DELETE 'http://api.example.com/_nodes/{node_id}/_repositories_metering/{max_archive_version}' \
 --header "Authorization: $API_KEY"
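
For example, to clear archived metering information up to archive version 1 on a single node (the node name node-1 is hypothetical):

DELETE /_nodes/node-1/_repositories_metering/1
curl \
 --request DELETE 'http://api.example.com/_nodes/node-1/_repositories_metering/1' \
 --header "Authorization: $API_KEY"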

Get the cluster health Added in 8.7.0

GET /_health_report

Get a report with the health status of an Elasticsearch cluster. The report contains a list of indicators that compose Elasticsearch functionality.

Each indicator has a health status of green, unknown, yellow, or red. The indicator will provide an explanation and metadata describing the reason for its current health status.

The cluster’s status is controlled by the worst indicator status.

In the event that an indicator’s status is non-green, a list of impacts may be present in the indicator result, detailing the functionality that is negatively affected by the health issue. Each impact carries a severity level, an area of the system that is affected, and a simple description of the impact on the system.

Some health indicators can determine the root cause of a health problem and prescribe a set of steps that can be performed in order to improve the health of the system. The root cause and remediation steps are encapsulated in a diagnosis. A diagnosis contains a cause detailing a root cause analysis, an action containing a brief description of the steps to take to fix the problem, the list of affected resources (if applicable), and a detailed step-by-step troubleshooting guide to fix the diagnosed problem.

NOTE: The health indicators perform root cause analysis of non-green health statuses. This can be computationally expensive when called frequently. When setting up automated polling of the API for health status, set verbose to false to disable the more expensive analysis logic.

Query parameters

  • timeout string

    Explicit operation timeout.

  • verbose boolean

    Opt-in for more information about the health of the system.

  • size number

    Limit the number of affected resources the health report API returns.

Responses

GET /_health_report
curl \
 --request GET 'http://api.example.com/_health_report' \
 --header "Authorization: $API_KEY"
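
Following the NOTE above, automated polling should disable the more expensive analysis logic. A sketch using the documented verbose parameter:

GET /_health_report?verbose=false
curl \
 --request GET 'http://api.example.com/_health_report?verbose=false' \
 --header "Authorization: $API_KEY"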

Get the cluster health Added in 8.7.0

GET /_health_report/{feature}

Get a report with the health status of an Elasticsearch cluster. The report contains a list of indicators that compose Elasticsearch functionality.

Each indicator has a health status of green, unknown, yellow, or red. The indicator will provide an explanation and metadata describing the reason for its current health status.

The cluster’s status is controlled by the worst indicator status.

In the event that an indicator’s status is non-green, a list of impacts may be present in the indicator result, detailing the functionality that is negatively affected by the health issue. Each impact carries a severity level, an area of the system that is affected, and a simple description of the impact on the system.

Some health indicators can determine the root cause of a health problem and prescribe a set of steps that can be performed in order to improve the health of the system. The root cause and remediation steps are encapsulated in a diagnosis. A diagnosis contains a cause detailing a root cause analysis, an action containing a brief description of the steps to take to fix the problem, the list of affected resources (if applicable), and a detailed step-by-step troubleshooting guide to fix the diagnosed problem.

NOTE: The health indicators perform root cause analysis of non-green health statuses. This can be computationally expensive when called frequently. When setting up automated polling of the API for health status, set verbose to false to disable the more expensive analysis logic.

Path parameters

  • feature string | array[string] Required

    A feature of the cluster, as returned by the top-level health report API.

Query parameters

  • timeout string

    Explicit operation timeout.

  • verbose boolean

    Opt-in for more information about the health of the system.

  • size number

    Limit the number of affected resources the health report API returns.

Responses

GET /_health_report/{feature}
curl \
 --request GET 'http://api.example.com/_health_report/{feature}' \
 --header "Authorization: $API_KEY"
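
For example, to check a single indicator such as shards_availability (an indicator name used here for illustration; pass a feature name returned by the top-level health report):

GET /_health_report/shards_availability
curl \
 --request GET 'http://api.example.com/_health_report/shards_availability' \
 --header "Authorization: $API_KEY"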

Connector

The connector and sync jobs APIs provide a convenient way to create and manage Elastic connectors and sync jobs in an internal index. Connectors are Elasticsearch integrations for syncing content from third-party data sources, which can be deployed on Elastic Cloud or hosted on your own infrastructure. This API provides an alternative to relying solely on the Kibana UI for connector and sync job management. The API comes with a set of validations and assertions to ensure that the state representation in the internal index remains valid. This API requires the manage_connector privilege or, for read-only endpoints, the monitor_connector privilege.

Check out the connector API tutorial

Check in a connector Technical preview

PUT /_connector/{connector_id}/_check_in

Update the last_seen field in the connector and set it to the current timestamp.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be checked in

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_check_in
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_check_in' \
 --header "Authorization: $API_KEY"
Response examples (200)
{
    "result": "updated"
}

Get a connector Beta

GET /_connector/{connector_id}

Get the details about a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector

Query parameters

  • A flag to indicate if the desired connector should be fetched, even if it was soft-deleted.

Responses

GET /_connector/{connector_id}
curl \
 --request GET 'http://api.example.com/_connector/{connector_id}' \
 --header "Authorization: $API_KEY"

Get all connectors Beta

GET /_connector

Get information about all connectors.

Query parameters

  • from number

    Starting offset (default: 0)

  • size number

    Specifies a maximum number of results to get

  • index_name string | array[string]

    A comma-separated list of connector index names to fetch connector documents for

  • connector_name string | array[string]

    A comma-separated list of connector names to fetch connector documents for

  • service_type string | array[string]

    A comma-separated list of connector service types to fetch connector documents for

  • A flag to indicate if the desired connector should be fetched, even if it was soft-deleted.

  • query string

    A wildcard query string that filters connectors with matching name, description or index name

Responses

GET /_connector
curl \
 --request GET 'http://api.example.com/_connector' \
 --header "Authorization: $API_KEY"
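
For example, to page through connectors of a particular service type using the documented from, size, and service_type parameters (the service type google_drive is illustrative):

GET /_connector?from=0&size=10&service_type=google_drive
curl \
 --request GET 'http://api.example.com/_connector?from=0&size=10&service_type=google_drive' \
 --header "Authorization: $API_KEY"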

Create a connector Beta

POST /_connector

Connectors are Elasticsearch integrations that bring content from third-party data sources, which can be deployed on Elastic Cloud or hosted on your own infrastructure. Elastic managed connectors (Native connectors) are a managed service on Elastic Cloud. Self-managed connectors (Connector clients) run on your own infrastructure.

application/json

Body

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

    • id string Required
POST /_connector
curl \
 --request POST 'http://api.example.com/_connector' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"description":"string","index_name":"string","is_native":true,"language":"string","name":"string","service_type":"string"}'
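Request example
A sketch of a request body with illustrative values; the index name, service type, and other field values are hypothetical:
{
  "index_name": "search-google-drive",
  "name": "My Google Drive connector",
  "description": "Syncs documents from Google Drive",
  "service_type": "google_drive",
  "language": "en",
  "is_native": false
}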

Cancel a connector sync job Beta

PUT /_connector/_sync_job/{connector_sync_job_id}/_cancel

Cancel a connector sync job, which sets the status to cancelling and updates cancellation_requested_at to the current time. The connector service is then responsible for setting the status of connector sync jobs to cancelled.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/_sync_job/{connector_sync_job_id}/_cancel
curl \
 --request PUT 'http://api.example.com/_connector/_sync_job/{connector_sync_job_id}/_cancel' \
 --header "Authorization: $API_KEY"

Update the connector features Technical preview

PUT /_connector/{connector_id}/_features

Update the connector features in the connector document. This API can be used to control the following aspects of a connector:

  • document-level security
  • incremental syncs
  • advanced sync rules
  • basic sync rules

Normally, the running connector service automatically manages these features. However, you can use this API to override the default behavior.

To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated.

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_features
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_features' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"features\": {\n    \"document_level_security\": {\n      \"enabled\": true\n    },\n    \"incremental_sync\": {\n      \"enabled\": true\n    },\n    \"sync_rules\": {\n      \"advanced\": {\n        \"enabled\": false\n      },\n      \"basic\": {\n        \"enabled\": true\n      }\n    }\n  }\n}"'
Request examples
{
  "features": {
    "document_level_security": {
      "enabled": true
    },
    "incremental_sync": {
      "enabled": true
    },
    "sync_rules": {
      "advanced": {
        "enabled": false
      },
      "basic": {
        "enabled": true
      }
    }
  }
}
{
  "features": {
    "document_level_security": {
      "enabled": true
    }
  }
}
Response examples (200)
{
  "result": "updated"
}


Update the connector draft filtering validation Technical preview

PUT /_connector/{connector_id}/_filtering/_validation

Update the draft filtering validation info for a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • validation object Required
    Hide validation attributes Show validation attributes object
    • errors array[object] Required
      Hide errors attributes Show errors attributes object
    • state string Required

      Values are edited, invalid, or valid.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_filtering/_validation
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_filtering/_validation' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"validation":{"errors":[{"ids":["string"],"messages":["string"]}],"state":"edited"}}'

Update the connector index name Beta

PUT /_connector/{connector_id}/_index_name

Update the index_name field of a connector, specifying the index where the data ingested by the connector is stored.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_index_name
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_index_name' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n    \"index_name\": \"data-from-my-google-drive\"\n}"'
Request example
{
    "index_name": "data-from-my-google-drive"
}
Response examples (200)
{
  "result": "updated"
}

Update the connector name and description Beta

PUT /_connector/{connector_id}/_name

Update the name and description fields in the connector document.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_name
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_name' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n    \"name\": \"Custom connector\",\n    \"description\": \"This is my customized connector\"\n}"'
Request example
{
    "name": "Custom connector",
    "description": "This is my customized connector"
}
Response examples (200)
{
  "result": "updated"
}

Update the connector is_native flag Beta

PUT /_connector/{connector_id}/_native

Update the is_native flag in the connector document.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_native
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_native' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"is_native":true}'

Create a follower Added in 6.5.0

PUT /{index}/_ccr/follow

Create a cross-cluster replication follower index that follows a specific leader index. When the API returns, the follower index exists and cross-cluster replication starts replicating operations from the leader index to the follower index.

Path parameters

  • index string Required

    The name of the follower index.

Query parameters

  • Period to wait for a connection to the master node.

  • wait_for_active_shards number | string

    Specifies the number of shards to wait on being active before responding. This defaults to waiting on none of the shards to be active. A shard must be restored from the leader index before being active. Restoring a follower shard requires transferring all the remote Lucene segment files to the follower index.

application/json

Body Required

  • If the leader index is part of a data stream, the name to which the local data stream for the followed index should be renamed.

  • leader_index string Required
  • The maximum number of outstanding reads requests from the remote cluster.

  • The maximum number of outstanding write requests on the follower.

  • The maximum number of operations to pull per read from the remote cluster.

  • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • The maximum number of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the number of queued operations goes below the limit.

  • The maximum number of operations per bulk write request executed on the follower.

  • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • remote_cluster string Required

    The remote cluster containing the leader index.

  • settings object
    Hide settings attributes Show settings attributes object
    • index object
    • mode string
    • soft_deletes object
      Hide soft_deletes attributes Show soft_deletes attributes object
      • enabled boolean

        Indicates whether soft deletes are enabled on the index.

      • retention_lease object
        Hide retention_lease attribute Show retention_lease attribute object
        • period string Required

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • sort object
      Hide sort attributes Show sort attributes object
    • Values are true, false, or checksum.

    • codec string
    • routing_partition_size number | string

      Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union is used to capture that behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • auto_expand_replicas string | null

      One of:
    • merge object
      Hide merge attribute Show merge attribute object
      • scheduler object
        Hide scheduler attributes Show scheduler attributes object
        • max_thread_count number | string

          Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union is used to capture that behavior while keeping the semantics of the field type.

          Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • max_merge_count number | string

          Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union is used to capture that behavior while keeping the semantics of the field type.

          Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • blocks object
      Hide blocks attributes Show blocks attributes object
      • read_only boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union is used to capture that behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • read_only_allow_delete boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). A union type is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • read boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). A union type is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • write boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). A union type is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • metadata boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). A union type is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • analyze object
      Hide analyze attribute Show analyze attribute object
      • max_token_count number | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). A union type is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • Hide highlight attribute Show highlight attribute object
    • routing object
      Hide routing attributes Show routing attributes object
    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • Hide lifecycle attributes Show lifecycle attributes object
      • name string
      • indexing_complete boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). A union type is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • If specified, this is the timestamp used to calculate the index age for its phase transitions. Use this setting if you create a new index that contains old data and want to use the original creation date to calculate the index age. Specified as a Unix epoch value in milliseconds.

      • Set to true to parse the origination date from the index name. This origination date is used to calculate the index age for its phase transitions. The index name must match the pattern .*-{date_format}-\d+, where the date_format is yyyy.MM.dd and the trailing digits are optional. An index that was rolled over would normally match the full format, for example logs-2016.10.31-000002. If the index name doesn’t match the pattern, index creation fails.

      • step object
        Hide step attribute Show step attribute object
        • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • The index alias to update when the index rolls over. Specify when using a policy that contains a rollover action. When the index rolls over, the alias is updated to reflect that the index is no longer the write index. For more information about rolling indices, see Rollover.

      • prefer_ilm boolean | string

        Preference for the system that manages a data stream backing index (preferring ILM when both ILM and DLM are applicable for an index).

    • creation_date number | string

      Some APIs will return values such as numbers also as a string (notably epoch timestamps). A union type is used to capture this behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • creation_date_string string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • uuid string
    • version object
      Hide version attributes Show version attributes object
    • translog object
      Hide translog attributes Show translog attributes object
    • Hide query_string attribute Show query_string attribute object
      • lenient boolean | string Required

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). A union type is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • analysis object
      Hide analysis attributes Show analysis attributes object
    • settings object
    • Hide time_series attributes Show time_series attributes object
      • end_time string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      • start_time string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • queries object
      Hide queries attribute Show queries attribute object
      • cache object
        Hide cache attribute Show cache attribute object
    • Configure custom similarity settings to customize how search results are scored.

    • mapping object
      Hide mapping attributes Show mapping attributes object
      • coerce boolean
      • Hide total_fields attributes Show total_fields attributes object
        • limit number | string

          The maximum number of fields in an index. Field and object mappings, as well as field aliases count towards this limit. The limit is in place to prevent mappings and searches from becoming too large. Higher values can lead to performance degradations and memory issues, especially in clusters with a high load or few resources.

        • ignore_dynamic_beyond_limit boolean | string

          This setting determines what happens when a dynamically mapped field would exceed the total fields limit. When set to false (the default), the index request of the document that tries to add a dynamic field to the mapping will fail with the message Limit of total fields [X] has been exceeded. When set to true, the index request will not fail. Instead, fields that would exceed the limit are not added to the mapping, similar to dynamic: false. The fields that were not added to the mapping will be added to the _ignored field.

      • depth object
        Hide depth attribute Show depth attribute object
        • limit number

          The maximum depth for a field, which is measured as the number of inner objects. For instance, if all fields are defined at the root object level, then the depth is 1. If there is one object mapping, then the depth is 2, etc.

      • Hide nested_fields attribute Show nested_fields attribute object
        • limit number

          The maximum number of distinct nested mappings in an index. The nested type should only be used in special cases, when arrays of objects need to be queried independently of each other. To safeguard against poorly designed mappings, this setting limits the number of unique nested types per index.

      • Hide nested_objects attribute Show nested_objects attribute object
        • limit number

          The maximum number of nested JSON objects that a single document can contain across all nested types. This limit helps to prevent out of memory errors when a document contains too many nested objects.

      • Hide field_name_length attribute Show field_name_length attribute object
        • limit number

          Setting for the maximum length of a field name. This setting isn’t really something that addresses mappings explosion but might still be useful if you want to limit the field length. It usually shouldn’t be necessary to set this setting. The default is okay unless a user starts to add a huge number of fields with really long names. Default is Long.MAX_VALUE (no limit).

      • Hide dimension_fields attribute Show dimension_fields attribute object
        • limit number

          [preview] This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

      • source object
        Hide source attribute Show source attribute object
        • mode string Required

          Values are disabled, stored, or synthetic.

    • Hide indexing.slowlog attributes Show indexing.slowlog attributes object
      • level string
      • source number
      • reformat boolean
      • Hide threshold attribute Show threshold attribute object
        • index object
          Hide index attributes Show index attributes object
          • warn string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • info string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • debug string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • trace string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • Hide indexing_pressure attribute Show indexing_pressure attribute object
      • memory object Required
        Hide memory attribute Show memory attribute object
        • limit number

          Number of outstanding bytes that may be consumed by indexing requests. When this limit is reached or exceeded, the node will reject new coordinating and primary operations. When replica operations consume 1.5x this limit, the node will reject new replica operations. Defaults to 10% of the heap.

    • store object
      Hide store attributes Show store attributes object
      • type string Required

        Any of:

        Values are fs, niofs, mmapfs, or hybridfs.

      • allow_mmap boolean

        You can restrict the use of the mmapfs and the related hybridfs store type via the setting node.store.allow_mmap. This is a boolean setting indicating whether or not memory-mapping is allowed. The default is to allow it. This setting is useful, for example, if you are in an environment where you cannot control the ability to create a lot of memory maps and therefore need to disable the ability to use memory-mapping.

Responses

PUT /{index}/_ccr/follow
curl \
 --request PUT 'http://api.example.com/{index}/_ccr/follow' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"remote_cluster\" : \"remote_cluster\",\n  \"leader_index\" : \"leader_index\",\n  \"settings\": {\n    \"index.number_of_replicas\": 0\n  },\n  \"max_read_request_operation_count\" : 1024,\n  \"max_outstanding_read_requests\" : 16,\n  \"max_read_request_size\" : \"1024k\",\n  \"max_write_request_operation_count\" : 32768,\n  \"max_write_request_size\" : \"16k\",\n  \"max_outstanding_write_requests\" : 8,\n  \"max_write_buffer_count\" : 512,\n  \"max_write_buffer_size\" : \"512k\",\n  \"max_retry_delay\" : \"10s\",\n  \"read_poll_timeout\" : \"30s\"\n}"'
Request example
Run `PUT /follower_index/_ccr/follow?wait_for_active_shards=1` to create a follower index named `follower_index`.
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index",
  "settings": {
    "index.number_of_replicas": 0
  },
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
Response examples (200)
A successful response from `PUT /follower_index/_ccr/follow?wait_for_active_shards=1`.
{
  "follow_index_created" : true,
  "follow_index_shards_acked" : true,
  "index_following_started" : true
}

Get auto-follow patterns Added in 6.5.0

GET /_ccr/auto_follow

Get cross-cluster replication auto-follow patterns.

External documentation

Query parameters

  • The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

Responses

GET /_ccr/auto_follow
curl \
 --request GET 'http://api.example.com/_ccr/auto_follow' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_ccr/auto_follow/my_auto_follow_pattern`, which gets auto-follow patterns.
{
  "patterns": [
    {
      "name": "my_auto_follow_pattern",
      "pattern": {
        "active": true,
        "remote_cluster" : "remote_cluster",
        "leader_index_patterns" :
        [
          "leader_index*"
        ],
        "leader_index_exclusion_patterns":
        [
          "leader_index_001"
        ],
        "follow_index_pattern" : "{{leader_index}}-follower"
      }
    }
  ]
}
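The response example above was produced by requesting a single pattern by name; the same endpoint accepts a pattern name in the path. A minimal sketch, assuming a pattern named `my_auto_follow_pattern` already exists:

# fetch a single auto-follow pattern by name (the name is illustrative)
curl \
 --request GET 'http://api.example.com/_ccr/auto_follow/my_auto_follow_pattern' \
 --header "Authorization: $API_KEY"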

Resume an auto-follow pattern Added in 7.5.0

POST /_ccr/auto_follow/{name}/resume

Resume a cross-cluster replication auto-follow pattern that was paused. The auto-follow pattern will resume configuring following indices for newly created indices that match its patterns on the remote cluster. Remote indices created while the pattern was paused will also be followed unless they have been deleted or closed in the interim.

External documentation

Path parameters

  • name string Required

    The name of the auto-follow pattern to resume.

Query parameters

  • The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_ccr/auto_follow/{name}/resume
curl \
 --request POST 'http://api.example.com/_ccr/auto_follow/{name}/resume' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `POST /_ccr/auto_follow/my_auto_follow_pattern/resume`, which resumes an auto-follow pattern.
{
  "acknowledged" : true
}

Unfollow an index Added in 6.5.0

POST /{index}/_ccr/unfollow

Convert a cross-cluster replication follower index to a regular index. The API stops the following task associated with a follower index and removes index metadata and settings associated with cross-cluster replication. The follower index must be paused and closed before you call the unfollow API.


Currently cross-cluster replication does not support converting an existing regular index to a follower index. Converting a follower index to a regular index is an irreversible operation.

External documentation

Path parameters

  • index string Required

    The name of the follower index.

Query parameters

  • The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /{index}/_ccr/unfollow
curl \
 --request POST 'http://api.example.com/{index}/_ccr/unfollow' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `POST /follower_index/_ccr/unfollow`.
{
  "acknowledged" : true
}
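Because the follower index must be paused and closed first, the full conversion is typically a three-step sequence. A minimal sketch, assuming a follower named `follower_index` (the pause-follow and close-index endpoints are documented elsewhere):

# 1. stop the following task (the follower name is illustrative throughout)
curl --request POST 'http://api.example.com/follower_index/_ccr/pause_follow' \
 --header "Authorization: $API_KEY"
# 2. close the follower index
curl --request POST 'http://api.example.com/follower_index/_close' \
 --header "Authorization: $API_KEY"
# 3. convert it to a regular index
curl --request POST 'http://api.example.com/follower_index/_ccr/unfollow' \
 --header "Authorization: $API_KEY"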

Delete data streams Added in 7.9.0

DELETE /_data_stream/{name}

Deletes one or more data streams and their backing indices.

Path parameters

  • name string | array[string] Required

    Comma-separated list of data streams to delete. Wildcard (*) expressions are supported.

Query parameters

  • Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_data_stream/{name}
curl \
 --request DELETE 'http://api.example.com/_data_stream/{name}' \
 --header "Authorization: $API_KEY"

Get data stream stats Added in 7.9.0

GET /_data_stream/_stats

Get statistics for one or more data streams.

Query parameters

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

Responses

GET /_data_stream/_stats
curl \
 --request GET 'http://api.example.com/_data_stream/_stats' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response for retrieving statistics for a data stream.
{
  "_shards": {
    "total": 10,
    "successful": 5,
    "failed": 0
  },
  "data_stream_count": 2,
  "backing_indices": 5,
  "total_store_size": "7kb",
  "total_store_size_bytes": 7268,
  "data_streams": [
    {
      "data_stream": "my-data-stream",
      "backing_indices": 3,
      "store_size": "3.7kb",
      "store_size_bytes": 3772,
      "maximum_timestamp": 1607512028000
    },
    {
      "data_stream": "my-data-stream-two",
      "backing_indices": 2,
      "store_size": "3.4kb",
      "store_size_bytes": 3496,
      "maximum_timestamp": 1607425567000
    }
  ]
}

Update data stream lifecycles Added in 8.11.0

PUT /_data_stream/{name}/_lifecycle

Update the data stream lifecycle of the specified data streams.

Path parameters

  • name string | array[string] Required

    Comma-separated list of data streams used to limit the request. Supports wildcards (*). To target all data streams use * or _all.

Query parameters

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden. Valid values are: all, hidden, open, closed, none.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

application/json

Body

  • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • Hide downsampling attribute Show downsampling attribute object
    • rounds array[object] Required

      The list of downsampling rounds to execute as part of this downsampling configuration

      Hide rounds attributes Show rounds attributes object
      • after string Required

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • config object Required
        Hide config attribute Show config attribute object
        • fixed_interval string Required

          A date histogram interval. Similar to Duration with additional units: w (week), M (month), q (quarter) and y (year)

  • enabled boolean

    If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /_data_stream/{name}/_lifecycle
curl \
 --request PUT 'http://api.example.com/_data_stream/{name}/_lifecycle' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"data_retention\": \"7d\"\n}"'
Request examples
This example sets `data_retention` to 7 days.
{
  "data_retention": "7d"
}
This example configures two downsampling rounds.
{
    "downsampling": [
      {
        "after": "1d",
        "fixed_interval": "10m"
      },
      {
        "after": "7d",
        "fixed_interval": "1d"
      }
    ]
}
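The `enabled` flag described in the body attributes can also be sent to switch the lifecycle off without discarding its configuration. A minimal sketch of such a body:
{
  "data_retention": "7d",
  "enabled": false
}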
Response examples (200)
A successful response for configuring a data stream lifecycle.
{
  "acknowledged": true
}

Downsample an index Technical preview

POST /{index}/_downsample/{target_index}

Aggregate a time series (TSDS) index and store pre-computed statistical summaries (min, max, sum, value_count and avg) for each metric field grouped by a configured time interval. For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index. All documents within an hour interval are summarized and stored as a single document in the downsample index.

NOTE: Only indices in a time series data stream are supported. Neither field nor document level security can be defined on the source index. The source index must be read only (index.blocks.write: true).

Path parameters

  • index string Required

    Name of the time series index to downsample.

  • target_index string Required

    Name of the index to create.

application/json

Body Required

  • fixed_interval string Required

    A date histogram interval. Similar to Duration with additional units: w (week), M (month), q (quarter) and y (year)

Responses

POST /{index}/_downsample/{target_index}
curl \
 --request POST 'http://api.example.com/{index}/_downsample/{target_index}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"fixed_interval\": \"1d\"\n}"'
Request example
{
  "fixed_interval": "1d"
}
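As the note above says, the source index must be read only before downsampling. A minimal sketch, assuming a hypothetical TSDS index named `my-tsds-index` and using the add-index-block endpoint:

# make the source index read only (sets index.blocks.write: true)
curl \
 --request PUT 'http://api.example.com/my-tsds-index/_block/write' \
 --header "Authorization: $API_KEY"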

Get the status for a data stream lifecycle Added in 8.11.0

GET /{index}/_lifecycle/explain

Get information about an index or data stream's current data stream lifecycle status, such as time since index creation, time since rollover, the lifecycle configuration managing the index, or any errors encountered during lifecycle execution.

Path parameters

  • index string | array[string] Required

    The name of the index to explain

Query parameters

  • Indicates whether the API should return the default values the system uses for the index's lifecycle.

  • Period to wait for a connection to the master node.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • indices object Required
      Hide indices attribute Show indices attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • index string Required
        • managed_by_lifecycle boolean Required
        • Time unit for milliseconds

        • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • Time unit for milliseconds

        • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • Hide lifecycle attributes Show lifecycle attributes object
        • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • error string
GET /{index}/_lifecycle/explain
curl \
 --request GET 'http://api.example.com/{index}/_lifecycle/explain' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET .ds-metrics-2023.03.22-000001/_lifecycle/explain`, which retrieves the lifecycle status for a data stream backing index. If the index is managed by a data stream lifecycle, the API will show the `managed_by_lifecycle` field set to `true` and the rest of the response will contain information about the lifecycle execution status for this index.
{
  "indices": {
    ".ds-metrics-2023.03.22-000001": {
      "index" : ".ds-metrics-2023.03.22-000001",
      "managed_by_lifecycle" : true,
      "index_creation_date_millis" : 1679475563571,
      "time_since_index_creation" : "843ms",
      "rollover_date_millis" : 1679475564293,
      "time_since_rollover" : "121ms",
      "lifecycle" : { },
      "generation_time" : "121ms"
    }
  }
}
The API reports any errors related to the lifecycle execution for the target index.
{
  "indices": {
    ".ds-metrics-2023.03.22-000001": {
      "index" : ".ds-metrics-2023.03.22-000001",
      "managed_by_lifecycle" : true,
      "index_creation_date_millis" : 1679475563571,
      "time_since_index_creation" : "843ms",
      "lifecycle" : {
        "enabled": true
      },
      "error": "{\"type\":\"validation_exception\",\"reason\":\"Validation Failed: 1: this action would add [2] shards, but this cluster
currently has [4]/[3] maximum normal shards open;\"}"
    }
  }
}

Get data stream lifecycle stats Added in 8.12.0

GET /_lifecycle/stats

Get statistics about the data streams that are managed by a data stream lifecycle.

Responses

GET /_lifecycle/stats
curl \
 --request GET 'http://api.example.com/_lifecycle/stats' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response for `GET _lifecycle/stats?human&pretty`
{
  "last_run_duration_in_millis": 2,
  "last_run_duration": "2ms",
  "time_between_starts_in_millis": 9998,
  "time_between_starts": "9.99s",
  "data_streams_count": 2,
  "data_streams": [
    {
      "name": "my-data-stream",
      "backing_indices_in_total": 2,
      "backing_indices_in_error": 0
    },
    {
      "name": "my-other-stream",
      "backing_indices_in_total": 2,
      "backing_indices_in_error": 1
    }
  ]
}

Get data streams Added in 7.9.0

GET /_data_stream

Get information about one or more data streams.

Query parameters

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • If true, returns all relevant default configurations for the index template.

  • Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • verbose boolean

    Whether the maximum timestamp for each data stream should be calculated and returned.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • data_streams array[object] Required
      Hide data_streams attributes Show data_streams attributes object
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • If true, the data stream allows custom routing on write requests.

      • Hide failure_store attributes Show failure_store attributes object
      • generation number Required

        Current generation for the data stream. This number acts as a cumulative count of the stream’s rollovers, starting at 1.

      • hidden boolean Required

        If true, the data stream is hidden.

      • Values are Index Lifecycle Management, Data stream lifecycle, or Unmanaged.

      • prefer_ilm boolean Required

        Indicates whether ILM should take precedence over DSL in case both are configured to manage this data stream.

      • indices array[object] Required

        Array of objects containing information about the data stream’s backing indices. The last item in this array contains information about the stream’s current write index.

        Hide indices attributes Show indices attributes object
      • Hide lifecycle attributes Show lifecycle attributes object
      • name string Required
      • replicated boolean

        If true, the data stream is created and managed by cross-cluster replication and the local cluster cannot write into this data stream or change its mappings.

      • rollover_on_write boolean Required

        If true, the next write to this data stream will trigger a rollover first and the document will be indexed in the new backing index. If the rollover fails the indexing request will fail too.

      • status string Required

        Values are green, GREEN, yellow, YELLOW, red, or RED.

      • system boolean

        If true, the data stream is created and managed by an Elastic stack component and cannot be modified through normal user interaction.

      • template string Required
      • timestamp_field object Required
        Hide timestamp_field attribute Show timestamp_field attribute object
        • name string Required

          Path to a field or an array of paths. Some APIs support wildcards in the path to select multiple fields.

GET /_data_stream
curl \
 --request GET 'http://api.example.com/_data_stream' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response for retrieving information about a data stream.
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-2099.03.07-000001",
          "index_uuid": "xCEhwsp8Tey0-FLNFYVwSg",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        },
        {
          "index_name": ".ds-my-data-stream-2099.03.08-000002",
          "index_uuid": "PA_JquKGSiKcAKBA8DJ5gw",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        }
      ],
      "generation": 2,
      "_meta": {
        "my-meta-field": "foo"
      },
      "status": "GREEN",
      "next_generation_managed_by": "Index Lifecycle Management",
      "prefer_ilm": true,
      "template": "my-index-template",
      "ilm_policy": "my-lifecycle-policy",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false,
      "rollover_on_write": false
    },
    {
      "name": "my-data-stream-two",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-two-2099.03.08-000001",
          "index_uuid": "3liBu2SYS5axasRt6fUIpA",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        }
      ],
      "generation": 1,
      "_meta": {
        "my-meta-field": "foo"
      },
      "status": "YELLOW",
      "next_generation_managed_by": "Index Lifecycle Management",
      "prefer_ilm": true,
      "template": "my-index-template",
      "ilm_policy": "my-lifecycle-policy",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false,
      "rollover_on_write": false
    }
  ]
}

Get a document's source

GET /{index}/_source/{id}

Get the source of a document. For example:

GET my-index-000001/_source/1

You can use the source filtering parameters to control which parts of the _source are returned:

GET my-index-000001/_source/1/?_source_includes=*.id&_source_excludes=entities
External documentation

Path parameters

  • index string Required

    The name of the index that contains the document.

  • id string Required

    A unique document identifier.

Query parameters

  • The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or lists the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude in the response.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response.

  • stored_fields string | array[string]

    A comma-separated list of stored fields to return as part of a hit.

  • version number

    The version number for concurrency control. It must match the current version of the document for the request to succeed.

  • The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

Responses

GET /{index}/_source/{id}
curl \
 --request GET 'http://api.example.com/{index}/_source/{id}' \
 --header "Authorization: $API_KEY"

Check for a document source Added in 5.4.0

HEAD /{index}/_source/{id}

Check whether a document source exists in an index. For example:

HEAD my-index-000001/_source/1

A document's source is not available if it is disabled in the mapping.

External documentation

Path parameters

  • index string Required

    A comma-separated list of data streams, indices, and aliases. It supports wildcards (*).

  • id string Required

    A unique identifier for the document.

Query parameters

  • The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or lists the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude in the response.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response.

  • version number

    The version number for concurrency control. It must match the current version of the document for the request to succeed.

  • The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

Responses

HEAD /{index}/_source/{id}
curl \
 --request HEAD 'http://api.example.com/{index}/_source/{id}' \
 --header "Authorization: $API_KEY"


Get multiple documents Added in 1.3.0

POST /_mget

Get multiple JSON documents by ID from one or more indices. If you specify an index in the request URI, you only need to specify the document IDs in the request body. To ensure fast responses, this multi get (mget) API responds with partial results if one or more shards fail.

Filter source fields

By default, the _source field is returned for every document (if stored). Use the _source and _source_include or source_exclude attributes to filter what fields are returned for a particular document. You can include the _source, _source_includes, and _source_excludes query parameters in the request URI to specify the defaults to use when there are no per-document instructions.

Get stored fields

Use the stored_fields attribute to specify the set of stored fields you want to retrieve. Any requested fields that are not stored are ignored. You can include the stored_fields query parameter in the request URI to specify the defaults to use when there are no per-document instructions.

Query parameters

  • Specifies the node or shard the operation should be performed on. Random by default.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes relevant shards before retrieving documents.

  • routing string

    Custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    True or false to return the _source field or not, or a list of fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    If true, retrieves the document fields stored in the index rather than the document _source.

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • docs array[object] Required

      The response includes a docs array that contains the documents in the order specified in the request. The structure of the returned documents is similar to that returned by the get API. If there is a failure getting a particular document, the error is included in place of the document.

      One of:
      Hide attributes Show attributes
      • _index string Required
      • fields object

        If the stored_fields parameter is set to true and found is true, it contains the document fields stored in the index.

        Hide fields attribute Show fields attribute object
        • * object Additional properties
      • _ignored array[string]
      • found boolean Required

        Indicates whether the document exists.

      • _id string Required
      • The primary term assigned to the document for the indexing operation.

      • _routing string

        The explicit routing, if set.

      • _seq_no number
      • _source object

        If found is true, it contains the document data formatted in JSON. If the _source parameter is set to false or the stored_fields parameter is set to true, it is excluded.

      • _version number
POST /_mget
curl \
 --request POST 'http://api.example.com/_mget' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"docs\": [\n    {\n      \"_id\": \"1\"\n    },\n    {\n      \"_id\": \"2\"\n    }\n  ]\n}"'
Run `GET /my-index-000001/_mget`. When you specify an index in the request URI, only the document IDs are required in the request body.
{
  "docs": [
    {
      "_id": "1"
    },
    {
      "_id": "2"
    }
  ]
}
Run `GET /_mget`. This request sets `_source` to `false` for document 1 to exclude the source entirely. It retrieves `field3` and `field4` from document 2. It retrieves the `user` field from document 3 but filters out the `user.location` field.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "_source": false
    },
    {
      "_index": "test",
      "_id": "2",
      "_source": [ "field3", "field4" ]
    },
    {
      "_index": "test",
      "_id": "3",
      "_source": {
        "include": [ "user" ],
        "exclude": [ "user.location" ]
      }
    }
  ]
}
Run `GET /_mget`. This request retrieves `field1` and `field2` from document 1 and `field3` and `field4` from document 2.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "stored_fields": [ "field1", "field2" ]
    },
    {
      "_index": "test",
      "_id": "2",
      "stored_fields": [ "field3", "field4" ]
    }
  ]
}
Run `GET /_mget?routing=key1`. If routing is used during indexing, you need to specify the routing value to retrieve documents. This request fetches `test/_doc/2` from the shard corresponding to routing key `key1`. It fetches `test/_doc/1` from the shard corresponding to routing key `key2`.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "routing": "key2"
    },
    {
      "_index": "test",
      "_id": "2"
    }
  ]
}

Get multiple documents Added in 1.3.0

GET /{index}/_mget

Get multiple JSON documents by ID from one or more indices. If you specify an index in the request URI, you only need to specify the document IDs in the request body. To ensure fast responses, this multi get (mget) API responds with partial results if one or more shards fail.

Filter source fields

By default, the _source field is returned for every document (if stored). Use the _source and _source_include or source_exclude attributes to filter what fields are returned for a particular document. You can include the _source, _source_includes, and _source_excludes query parameters in the request URI to specify the defaults to use when there are no per-document instructions.

Get stored fields

Use the stored_fields attribute to specify the set of stored fields you want to retrieve. Any requested fields that are not stored are ignored. You can include the stored_fields query parameter in the request URI to specify the defaults to use when there are no per-document instructions.

Path parameters

  • index string Required

    Name of the index to retrieve documents from when ids are specified, or when a document in the docs array does not specify an index.

Query parameters

  • Specifies the node or shard the operation should be performed on. Random by default.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes relevant shards before retrieving documents.

  • routing string

    Custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    True or false to return the _source field or not, or a list of fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    If true, retrieves the document fields stored in the index rather than the document _source.

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • docs array[object] Required

      The response includes a docs array that contains the documents in the order specified in the request. The structure of the returned documents is similar to that returned by the get API. If there is a failure getting a particular document, the error is included in place of the document.

      One of:
      Hide attributes Show attributes
      • _index string Required
      • fields object

        If the stored_fields parameter is set to true and found is true, it contains the document fields stored in the index.

        Hide fields attribute Show fields attribute object
        • * object Additional properties
      • _ignored array[string]
      • found boolean Required

        Indicates whether the document exists.

      • _id string Required
      • The primary term assigned to the document for the indexing operation.

      • _routing string

        The explicit routing, if set.

      • _seq_no number
      • _source object

        If found is true, it contains the document data formatted in JSON. If the _source parameter is set to false or the stored_fields parameter is set to true, it is excluded.

      • _version number
GET /{index}/_mget
curl \
 --request GET 'http://api.example.com/{index}/_mget' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"docs\": [\n    {\n      \"_id\": \"1\"\n    },\n    {\n      \"_id\": \"2\"\n    }\n  ]\n}"'
Run `GET /my-index-000001/_mget`. When you specify an index in the request URI, only the document IDs are required in the request body.
{
  "docs": [
    {
      "_id": "1"
    },
    {
      "_id": "2"
    }
  ]
}
Run `GET /_mget`. This request sets `_source` to `false` for document 1 to exclude the source entirely. It retrieves `field3` and `field4` from document 2. It retrieves the `user` field from document 3 but filters out the `user.location` field.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "_source": false
    },
    {
      "_index": "test",
      "_id": "2",
      "_source": [ "field3", "field4" ]
    },
    {
      "_index": "test",
      "_id": "3",
      "_source": {
        "include": [ "user" ],
        "exclude": [ "user.location" ]
      }
    }
  ]
}
Run `GET /_mget`. This request retrieves `field1` and `field2` from document 1 and `field3` and `field4` from document 2.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "stored_fields": [ "field1", "field2" ]
    },
    {
      "_index": "test",
      "_id": "2",
      "stored_fields": [ "field3", "field4" ]
    }
  ]
}
Run `GET /_mget?routing=key1`. If routing is used during indexing, you need to specify the routing value to retrieve documents. This request fetches `test/_doc/2` from the shard corresponding to routing key `key1`. It fetches `test/_doc/1` from the shard corresponding to routing key `key2`.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "routing": "key2"
    },
    {
      "_index": "test",
      "_id": "2"
    }
  ]
}


Update a document

POST /{index}/_update/{id}

Update a document by running a script or passing a partial document.

If the Elasticsearch security features are enabled, you must have the index or write index privilege for the target index or index alias.

The script can update, delete, or skip modifying the document. The API also supports passing a partial document, which is merged into the existing document. To fully replace an existing document, use the index API. This operation:

  • Gets the document (collocated with the shard) from the index.
  • Runs the specified script.
  • Indexes the result.

The document must still be reindexed, but using this API removes some network roundtrips and reduces chances of version conflicts between the GET and the index operation.

The _source field must be enabled to use this API. In addition to _source, you can access the following variables through the ctx map: _index, _type, _id, _version, _routing, and _now (the current timestamp).

Path parameters

  • index string Required

    The name of the target index. By default, the index is created automatically if it doesn't exist.

  • id string Required

    A unique identifier for the document to be updated.

Query parameters

  • Only perform the operation if the document has this primary term.

  • Only perform the operation if the document has this sequence number.

  • Indicates whether to include the document source in the error message in case of parsing errors.

  • lang string

    The script language.

  • refresh string

    If 'true', Elasticsearch refreshes the affected shards to make this operation visible to search. If 'wait_for', it waits for a refresh to make this operation visible to search. If 'false', it does nothing with refreshes.

    Values are true, false, or wait_for.

  • If true, the destination must be an index alias.

  • The number of times the operation should be retried when a conflict occurs.

  • routing string

    A custom value used to route operations to a specific shard.

  • timeout string

    The period to wait for the following operations: dynamic mapping updates and waiting for active shards. Elasticsearch waits for at least the timeout period before failing. The actual wait time could be longer, particularly when multiple waits occur.

  • wait_for_active_shards number | string

    The number of copies of each shard that must be active before proceeding with the operation. Set to 'all' or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active.

  • _source boolean | string | array[string]

    If false, source retrieval is turned off. You can also specify a comma-separated list of the fields you want to retrieve.

  • _source_excludes string | array[string]

    The source fields you want to exclude.

  • _source_includes string | array[string]

    The source fields you want to retrieve.

application/json

Body Required

  • If true, the result in the response is set to noop (no operation) when there are no changes to the document.

  • doc object

    A partial update to an existing document. If both doc and script are specified, doc is ignored.

  • If true, use the contents of 'doc' as the value of 'upsert'. NOTE: Using ingest pipelines with doc_as_upsert is not supported.

  • script object
    Hide script attributes Show script attributes object
    • source string | object

      One of:
    • id string
    • params object

      Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

      Hide params attribute Show params attribute object
      • * object Additional properties
    • lang string

      Any of:

      Values are painless, expression, mustache, or java.

    • options object
      Hide options attribute Show options attribute object
      • * string Additional properties
  • If true, run the script whether or not the document exists.

  • _source boolean | object

    Defines how to fetch a source. Fetching can be disabled entirely, or the source can be filtered.

    One of:
  • upsert object

    If the document does not already exist, the contents of 'upsert' are inserted as a new document. If the document exists, the 'script' is run.

Responses

POST /{index}/_update/{id}
curl \
 --request POST 'http://api.example.com/{index}/_update/{id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"script\" : {\n    \"source\": \"ctx._source.counter += params.count\",\n    \"lang\": \"painless\",\n    \"params\" : {\n      \"count\" : 4\n    }\n  }\n}"'
Run `POST test/_update/1` to increment a counter by using a script.
{
  "script" : {
    "source": "ctx._source.counter += params.count",
    "lang": "painless",
    "params" : {
      "count" : 4
    }
  }
}
Run `POST test/_update/1` to perform a scripted upsert. When `scripted_upsert` is `true`, the script runs whether or not the document exists.
{
  "scripted_upsert": true,
  "script": {
    "source": """
      if ( ctx.op == 'create' ) {
        ctx._source.counter = params.count
      } else {
        ctx._source.counter += params.count
      }
    """,
    "params": {
      "count": 4
    }
  },
  "upsert": {}
}
Run `POST test/_update/1` to perform a doc as upsert. Instead of sending a partial `doc` plus an `upsert` doc, you can set `doc_as_upsert` to `true` to use the contents of `doc` as the `upsert` value.
{
  "doc": {
    "name": "new_name"
  },
  "doc_as_upsert": true
}
Run `POST test/_update/1` to use a script to add a tag to a list of tags. In this example, it is just a list, so the tag is added even if it already exists.
{
  "script": {
    "source": "ctx._source.tags.add(params.tag)",
    "lang": "painless",
    "params": {
      "tag": "blue"
    }
  }
}
Run `POST test/_update/1` to use a script to remove a tag from a list of tags. The Painless function to remove a tag takes the array index of the element you want to remove. To avoid a possible runtime error, you first need to make sure the tag exists. If the list contains duplicates of the tag, this script just removes one occurrence.
{
  "script": {
    "source": "if (ctx._source.tags.contains(params.tag)) { ctx._source.tags.remove(ctx._source.tags.indexOf(params.tag)) }",
    "lang": "painless",
    "params": {
      "tag": "blue"
    }
  }
}
Run `POST test/_update/1` to use a script to add a field `new_field` to the document.
{
  "script" : "ctx._source.new_field = 'value_of_new_field'"
}
Run `POST test/_update/1` to use a script to remove a field `new_field` from the document.
{
  "script" : "ctx._source.remove('new_field')"
}
Run `POST test/_update/1` to use a script to remove a subfield from an object field.
{
  "script": "ctx._source['my-object'].remove('my-subfield')"
}
Run `POST test/_update/1` to change the operation that runs from within the script. For example, this request deletes the document if the `tags` field contains `green`, otherwise it does nothing (`noop`).
{
  "script": {
    "source": "if (ctx._source.tags.contains(params.tag)) { ctx.op = 'delete' } else { ctx.op = 'noop' }",
    "lang": "painless",
    "params": {
      "tag": "green"
    }
  }
}
Run `POST test/_update/1` to do a partial update that adds a new field to the existing document.
{
  "doc": {
    "name": "new_name"
  }
}
Run `POST test/_update/1` to perform an upsert. If the document does not already exist, the contents of the upsert element are inserted as a new document. If the document exists, the script is run.
{
  "script": {
    "source": "ctx._source.counter += params.count",
    "lang": "painless",
    "params": {
      "count": 4
    }
  },
  "upsert": {
    "counter": 1
  }
}
Response examples (200)
By default, updates that don't change anything are detected as no-ops and return `"result": "noop"`.
{
   "_shards": {
        "total": 0,
        "successful": 0,
        "failed": 0
   },
   "_index": "test",
   "_id": "1",
   "_version": 2,
   "_primary_term": 1,
   "_seq_no": 1,
   "result": "noop"
}
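
If you do not want this noop detection, you can disable it per request with the `detect_noop` body parameter; a minimal sketch, reusing the `test` index from the examples above:

curl \
 --request POST "${ES_URL}/test/_update/1" \
 --header "Authorization: ApiKey ${API_KEY}" \
 --header "Content-Type: application/json" \
 --data '{"doc": {"name": "new_name"}, "detect_noop": false}'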









Get an enrich policy Added in 7.5.0

GET /_enrich/policy/{name}

Returns information about an enrich policy.

Path parameters

  • name string | array[string] Required

    Comma-separated list of enrich policy names used to limit the request. To return information for all enrich policies, omit this parameter.

Query parameters

Responses

  • 200 application/json
    • policies array[object] Required
GET /_enrich/policy/{name}
curl \
 --request GET 'http://api.example.com/_enrich/policy/{name}' \
 --header "Authorization: $API_KEY"
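
For example, a hedged request that fetches two policies by name in a single call (the policy names are hypothetical):

curl \
 --request GET "${ES_URL}/_enrich/policy/users-policy,orders-policy" \
 --header "Authorization: ApiKey ${API_KEY}"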












Get an enrich policy Added in 7.5.0

GET /_enrich/policy

Returns information about all enrich policies; this variant of the API omits the policy name, so every policy is returned.

Query parameters

Responses

  • 200 application/json
    • policies array[object] Required
GET /_enrich/policy
curl \
 --request GET 'http://api.example.com/_enrich/policy' \
 --header "Authorization: $API_KEY"




EQL

Event Query Language (EQL) is a query language for event-based time series data, such as logs, metrics, and traces.

Learn more about EQL search
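
As a brief sketch of the query style (the index, field, and value are hypothetical), an EQL search sends the query string in the request body:

curl \
 --request GET "${ES_URL}/my-logs/_eql/search" \
 --header "Authorization: ApiKey ${API_KEY}" \
 --header "Content-Type: application/json" \
 --data '{"query": "process where process.name == \"regsvr32.exe\""}'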




















ES|QL

The Elasticsearch Query Language (ES|QL) provides a powerful way to filter, transform, and analyze data stored in Elasticsearch, and in the future in other runtimes.

Learn more about ES|QL
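
As a brief sketch (the index and field names are hypothetical), an ES|QL query is submitted as a single query string:

curl \
 --request POST "${ES_URL}/_query" \
 --header "Authorization: ApiKey ${API_KEY}" \
 --header "Content-Type: application/json" \
 --data '{"query": "FROM my-index | WHERE status == 200 | STATS count = COUNT(*)"}'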












Stop async ES|QL query Added in 8.18.0

POST /_query/async/{id}/stop

This API interrupts the query execution and returns the results so far. If the Elasticsearch security features are enabled, only the user who first submitted the ES|QL query can stop it.

External documentation

Path parameters

  • id string Required

    The unique identifier of the query. A query ID is provided in the ES|QL async query API response for a query that does not complete in the designated time. A query ID is also provided when the request was submitted with the keep_on_completion parameter set to true.

Query parameters

  • drop_null_columns boolean

    Indicates whether columns that are entirely null will be removed from the columns and values portion of the results. If true, the response will include an extra section under the name all_columns which has the names of all the columns.

Responses

  • 200 application/json
    • took number

      Time the query took to run, in milliseconds.

    • is_partial boolean
    • all_columns array[object]
    • columns array[object] Required
    • values array[array] Required

      An array of result rows; each row is an array of field values.

    • _clusters object
    • profile object

      Profiling information. Present if profile was true in the request. The contents of this field are currently unstable.

POST /_query/async/{id}/stop
curl \
 --request POST 'http://api.example.com/_query/async/{id}/stop' \
 --header "Authorization: $API_KEY"
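
A typical flow, sketched with a hypothetical index and a placeholder `${QUERY_ID}`: submit an async query with `keep_on_completion=true`, take the `id` from its response, and stop it early to collect the results so far.

curl \
 --request POST "${ES_URL}/_query/async?keep_on_completion=true" \
 --header "Authorization: ApiKey ${API_KEY}" \
 --header "Content-Type: application/json" \
 --data '{"query": "FROM my-index | STATS count = COUNT(*)"}'

curl \
 --request POST "${ES_URL}/_query/async/${QUERY_ID}/stop" \
 --header "Authorization: ApiKey ${API_KEY}"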





Get the features Added in 7.12.0

GET /_features

Get a list of features that can be included in snapshots using the feature_states field when creating a snapshot. You can use this API to determine which feature states to include when taking a snapshot. By default, all feature states are included in a snapshot if that snapshot includes the global state, or none if it does not.

A feature state includes one or more system indices necessary for a given feature to function. In order to ensure data integrity, all system indices that comprise a feature state are snapshotted and restored together.

The features listed by this API are a combination of built-in features and features defined by plugins. In order for a feature state to be listed in this API and recognized as a valid feature state by the create snapshot API, the plugin that defines that feature must be installed on the master node.

External documentation

Query parameters

Responses

  • 200 application/json
    • features array[object] Required
GET /_features
curl \
 --request GET 'http://api.example.com/_features' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response for retrieving a list of feature states that can be included when taking a snapshot.
{
  "features": [
    {
      "name": "tasks",
      "description": "Manages task results"
    },
    {
      "name": "kibana",
      "description": "Manages Kibana configuration and reports"
    }
  ]
}
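
The `name` values returned here are what you pass in the `feature_states` field of the create snapshot API; a hedged sketch, assuming a snapshot repository named `my_repository` already exists:

curl \
 --request PUT "${ES_URL}/_snapshot/my_repository/snapshot_1" \
 --header "Authorization: ApiKey ${API_KEY}" \
 --header "Content-Type: application/json" \
 --data '{"feature_states": ["kibana", "tasks"], "include_global_state": false}'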




Get global checkpoints Added in 7.13.0

GET /{index}/_fleet/global_checkpoints

Get the current global checkpoints for an index. This API is designed for internal use by the Fleet server project.

Path parameters

  • index string Required

    A single index or index alias that resolves to a single index.

Query parameters

  • wait_for_advance boolean

    A boolean value which controls whether to wait (until the timeout) for the global checkpoints to advance past the provided checkpoints.

  • wait_for_index boolean

    A boolean value which controls whether to wait (until the timeout) for the target index to exist and all primary shards to be active. Can only be true when wait_for_advance is true.

  • checkpoints array[number]

    A comma-separated list of previous global checkpoints. When used in combination with wait_for_advance, the API will only return once the global checkpoints advance past the provided checkpoints. Providing an empty list will cause Elasticsearch to immediately return the current global checkpoints.

  • timeout string

    The period to wait for the global checkpoints to advance past the provided checkpoints.

Responses

GET /{index}/_fleet/global_checkpoints
curl \
 --request GET 'http://api.example.com/{index}/_fleet/global_checkpoints' \
 --header "Authorization: $API_KEY"
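
For example, a hedged polling request (the index name, checkpoint value, and timeout are hypothetical) that waits for the global checkpoints to advance past 98:

curl \
 --request GET "${ES_URL}/my-index/_fleet/global_checkpoints?wait_for_advance=true&wait_for_index=true&checkpoints=98&timeout=30s" \
 --header "Authorization: ApiKey ${API_KEY}"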