Authentication

The API accepts 3 different authentication methods:

API key auth (http_api_key)

Elasticsearch APIs support key-based authentication. You must create an API key and use the encoded value in the request header. For example:

curl -X GET "${ES_URL}/_cat/indices?v=true" \
  -H "Authorization: ApiKey ${API_KEY}"

To get API keys, use the /_security/api_key APIs.
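
For example, here is a minimal sketch that uses the official Python client to create a key and print its encoded value (the endpoint and credentials below are placeholders):

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", basic_auth=("elastic", "changeme"))  # placeholder endpoint and credentials

# Create an API key; the response includes an "encoded" value that can be
# used directly in an "Authorization: ApiKey ..." header.
key = client.security.create_api_key(name="my-api-key")
print(key["encoded"])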

Basic auth (http)

Basic auth tokens are constructed with the Basic keyword, followed by a space, followed by a base64-encoded string of your username:password (separated by a colon).

Example: send an Authorization: Basic aGVsbG86aGVsbG8= HTTP header with your requests to authenticate with the API.
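
To illustrate how that value is built, here is a small Python sketch (the username and password are placeholders):

import base64

# base64-encode "username:password" to produce the Basic credentials
credentials = base64.b64encode(b"hello:hello").decode()
print(f"Authorization: Basic {credentials}")
# -> Authorization: Basic aGVsbG86aGVsbG8=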

Bearer auth (http)

Elasticsearch APIs support the use of bearer tokens in the Authorization HTTP header to authenticate with the API. For examples, refer to Token-based authentication services.
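
As a rough sketch, the 8.x Python client can send such a token through its bearer_auth option (the endpoint and token below are placeholders):

from elasticsearch import Elasticsearch

# bearer_auth adds an "Authorization: Bearer <token>" header to every request
client = Elasticsearch("https://localhost:9200", bearer_auth="my-bearer-token")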






Create a behavioral analytics collection Technical preview; Added in 8.8.0

PUT /_application/analytics/{name}

Path parameters

  • name string Required

    The name of the analytics collection to be created or updated.

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

    • name string Required

      The name of the analytics collection created or updated

PUT /_application/analytics/{name}
PUT _application/analytics/my_analytics_collection
resp = client.search_application.put_behavioral_analytics(
    name="my_analytics_collection",
)
const response = await client.searchApplication.putBehavioralAnalytics({
  name: "my_analytics_collection",
});
response = client.search_application.put_behavioral_analytics(
  name: "my_analytics_collection"
)
$resp = $client->searchApplication()->putBehavioralAnalytics([
    "name" => "my_analytics_collection",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_application/analytics/my_analytics_collection"
client.searchApplication().putBehavioralAnalytics(p -> p
    .name("my_analytics_collection")
);




Create a behavioral analytics collection event Technical preview

POST /_application/analytics/{collection_name}/event/{event_type} External documentation

Path parameters

  • collection_name string Required

    The name of the behavioral analytics collection.

  • event_type string

    The analytics event type.

    Values are page_view, search, or search_click.

Query parameters

  • debug boolean

    Whether the response should include more details.

application/json

Body Required

object object

Responses

  • 200 application/json
    • accepted boolean Required
    • event object
POST /_application/analytics/{collection_name}/event/{event_type}
POST _application/analytics/my_analytics_collection/event/search_click
{
  "session": {
    "id": "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"
  },
  "user": {
    "id": "5f26f01a-bbee-4202-9298-81261067abbd"
  },
  "search":{
    "query": "search term",
    "results": {
      "items": [
        {
          "document": {
            "id": "123",
            "index": "products"
          }
        }
      ],
      "total_results": 10
    },
    "sort": {
      "name": "relevance"
    },
    "search_application": "website"
  },
  "document":{
    "id": "123",
    "index": "products"
  }
}
resp = client.search_application.post_behavioral_analytics_event(
    collection_name="my_analytics_collection",
    event_type="search_click",
    payload={
        "session": {
            "id": "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"
        },
        "user": {
            "id": "5f26f01a-bbee-4202-9298-81261067abbd"
        },
        "search": {
            "query": "search term",
            "results": {
                "items": [
                    {
                        "document": {
                            "id": "123",
                            "index": "products"
                        }
                    }
                ],
                "total_results": 10
            },
            "sort": {
                "name": "relevance"
            },
            "search_application": "website"
        },
        "document": {
            "id": "123",
            "index": "products"
        }
    },
)
const response = await client.searchApplication.postBehavioralAnalyticsEvent({
  collection_name: "my_analytics_collection",
  event_type: "search_click",
  payload: {
    session: {
      id: "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9",
    },
    user: {
      id: "5f26f01a-bbee-4202-9298-81261067abbd",
    },
    search: {
      query: "search term",
      results: {
        items: [
          {
            document: {
              id: "123",
              index: "products",
            },
          },
        ],
        total_results: 10,
      },
      sort: {
        name: "relevance",
      },
      search_application: "website",
    },
    document: {
      id: "123",
      index: "products",
    },
  },
});
response = client.search_application.post_behavioral_analytics_event(
  collection_name: "my_analytics_collection",
  event_type: "search_click",
  body: {
    "session": {
      "id": "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"
    },
    "user": {
      "id": "5f26f01a-bbee-4202-9298-81261067abbd"
    },
    "search": {
      "query": "search term",
      "results": {
        "items": [
          {
            "document": {
              "id": "123",
              "index": "products"
            }
          }
        ],
        "total_results": 10
      },
      "sort": {
        "name": "relevance"
      },
      "search_application": "website"
    },
    "document": {
      "id": "123",
      "index": "products"
    }
  }
)
$resp = $client->searchApplication()->postBehavioralAnalyticsEvent([
    "collection_name" => "my_analytics_collection",
    "event_type" => "search_click",
    "body" => [
        "session" => [
            "id" => "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9",
        ],
        "user" => [
            "id" => "5f26f01a-bbee-4202-9298-81261067abbd",
        ],
        "search" => [
            "query" => "search term",
            "results" => [
                "items" => array(
                    [
                        "document" => [
                            "id" => "123",
                            "index" => "products",
                        ],
                    ],
                ),
                "total_results" => 10,
            ],
            "sort" => [
                "name" => "relevance",
            ],
            "search_application" => "website",
        ],
        "document" => [
            "id" => "123",
            "index" => "products",
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"session":{"id":"1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"},"user":{"id":"5f26f01a-bbee-4202-9298-81261067abbd"},"search":{"query":"search term","results":{"items":[{"document":{"id":"123","index":"products"}}],"total_results":10},"sort":{"name":"relevance"},"search_application":"website"},"document":{"id":"123","index":"products"}}' "$ELASTICSEARCH_URL/_application/analytics/my_analytics_collection/event/search_click"
client.searchApplication().postBehavioralAnalyticsEvent(p -> p
    .collectionName("my_analytics_collection")
    .eventType(EventType.SearchClick)
    .payload(JsonData.fromJson("{\"session\":{\"id\":\"1797ca95-91c9-4e2e-b1bd-9c38e6f386a9\"},\"user\":{\"id\":\"5f26f01a-bbee-4202-9298-81261067abbd\"},\"search\":{\"query\":\"search term\",\"results\":{\"items\":[{\"document\":{\"id\":\"123\",\"index\":\"products\"}}],\"total_results\":10},\"sort\":{\"name\":\"relevance\"},\"search_application\":\"website\"},\"document\":{\"id\":\"123\",\"index\":\"products\"}}"))
);
Request example
Run `POST _application/analytics/my_analytics_collection/event/search_click` to send a `search_click` event to an analytics collection called `my_analytics_collection`.
{
  "session": {
    "id": "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"
  },
  "user": {
    "id": "5f26f01a-bbee-4202-9298-81261067abbd"
  },
  "search":{
    "query": "search term",
    "results": {
      "items": [
        {
          "document": {
            "id": "123",
            "index": "products"
          }
        }
      ],
      "total_results": 10
    },
    "sort": {
      "name": "relevance"
    },
    "search_application": "website"
  },
  "document":{
    "id": "123",
    "index": "products"
  }
}





Get shard allocation information Generally available

GET /_cat/allocation/{node_id}

All methods and paths for this operation:

GET /_cat/allocation

GET /_cat/allocation/{node_id}

Get a snapshot of the number of shards allocated to each data node and their disk space.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.

Required authorization

  • Cluster privileges: monitor

Path parameters

  • node_id string | array[string]

    A comma-separated list of node identifiers or names used to limit the returned information.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • shards (or s): The number of shards on the node.
    • shards.undesired: The number of shards scheduled to be moved elsewhere in the cluster.
    • write_load.forecast (or wlf, writeLoadForecast): The sum of index write load forecasts.
    • disk.indices.forecast (or dif, diskIndicesForecast): The sum of shard size forecasts.
    • disk.indices (or di, diskIndices): The disk space used by Elasticsearch indices.
    • disk.used (or du, diskUsed): The total disk space used on the node.
    • disk.avail (or da, diskAvail): The available disk space on the node.
    • disk.total (or dt, diskTotal): The total disk capacity of all volumes on the node.
    • disk.percent (or dp, diskPercent): The percentage of disk space used on the node.
    • host (or h): The host of the node.
    • ip: The IP address of the node.
    • node (or n): The name of the node.
    • node.role (or r, role, nodeRole): The roles assigned to the node.

    Values are shards, s, shards.undesired, write_load.forecast, wlf, writeLoadForecast, disk.indices.forecast, dif, diskIndicesForecast, disk.indices, di, diskIndices, disk.used, du, diskUsed, disk.avail, da, diskAvail, disk.total, dt, diskTotal, disk.percent, dp, diskPercent, host, h, ip, node, n, node.role, r, role, or nodeRole.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases, the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

GET /_cat/allocation/{node_id}
GET /_cat/allocation?v=true&format=json
resp = client.cat.allocation(
    v=True,
    format="json",
)
const response = await client.cat.allocation({
  v: "true",
  format: "json",
});
response = client.cat.allocation(
  v: "true",
  format: "json"
)
$resp = $client->cat()->allocation([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/allocation?v=true&format=json"
client.cat().allocation();
Response examples (200)
A successful response from `GET /_cat/allocation?v=true&format=json`. It shows a single shard is allocated to the one node available.
[
  {
    "shards": "1",
    "shards.undesired": "0",
    "write_load.forecast": "0.0",
    "disk.indices.forecast": "260b",
    "disk.indices": "260b",
    "disk.used": "47.3gb",
    "disk.avail": "43.4gb",
    "disk.total": "100.7gb",
    "disk.percent": "46",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "CSUXak2",
    "node.role": "himrst"
  }
]



Get segment information Generally available

GET /_cat/segments/{index}

All methods and paths for this operation:

GET /_cat/segments

GET /_cat/segments/{index}

Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index segments API.

Required authorization

  • Index privileges: monitor
  • Cluster privileges: monitor

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • index (or i, idx): The name of the index.
    • shard (or s, sh): The name of the shard.
    • prirep (or p, pr, primaryOrReplica): The shard type. Returned values are 'primary' or 'replica'.
    • ip: IP address of the segment’s shard, such as '127.0.1.1'.
    • segment: The name of the segment, such as '_0'. The segment name is derived from the segment generation and used internally to create file names in the directory of the shard.
    • generation: Generation number, such as '0'. Elasticsearch increments this generation number for each segment written. Elasticsearch then uses this number to derive the segment name.
    • docs.count: The number of documents as reported by Lucene. This excludes deleted documents and counts any nested documents separately from their parents. It also excludes documents which were indexed recently and do not yet belong to a segment.
    • docs.deleted: The number of deleted documents as reported by Lucene, which may be higher or lower than the number of delete operations you have performed. This number excludes deletes that were performed recently and do not yet belong to a segment. Deleted documents are cleaned up by the automatic merge process if it makes sense to do so. Also, Elasticsearch creates extra deleted documents to internally track the recent history of operations on a shard.
    • size: The disk space used by the segment, such as '50kb'.
    • size.memory: The bytes of segment data stored in memory for efficient search, such as '1264'. A value of '-1' indicates Elasticsearch was unable to compute this number.
    • committed: If 'true', the segment is synced to disk. Segments that are synced can survive a hard reboot. If 'false', the data from uncommitted segments is also stored in the transaction log so that Elasticsearch is able to replay changes on the next start.
    • searchable: If 'true', the segment is searchable. If 'false', the segment has most likely been written to disk but needs a refresh to be searchable.
    • version: The version of Lucene used to write the segment.
    • compound: If 'true', the segment is stored in a compound file. This means Lucene merged all files from the segment in a single file to save file descriptors.
    • id: The ID of the node, such as 'k0zy'.

    Values are index, i, idx, shard, s, sh, prirep, p, pr, primaryOrReplica, ip, segment, generation, docs.count, docs.deleted, size, size.memory, committed, searchable, version, compound, or id.

  • s string | array[string]

    A comma-separated list of column names or aliases that determines the sort order. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases, the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    • index string

      The index name.

    • shard string

      The shard name.

    • prirep string

      The shard type: primary or replica.

    • ip string

      The IP address of the node where it lives.

    • id string

      The unique identifier of the node where it lives.

    • segment string

      The segment name, which is derived from the segment generation and used internally to create file names in the directory of the shard.

    • generation string

      The segment generation number. Elasticsearch increments this generation number for each segment written then uses this number to derive the segment name.

    • docs.count string

      The number of documents in the segment. This excludes deleted documents and counts any nested documents separately from their parents. It also excludes documents which were indexed recently and do not yet belong to a segment.

    • docs.deleted string

      The number of deleted documents in the segment, which might be higher or lower than the number of delete operations you have performed. This number excludes deletes that were performed recently and do not yet belong to a segment. Deleted documents are cleaned up by the automatic merge process if it makes sense to do so. Also, Elasticsearch creates extra deleted documents to internally track the recent history of operations on a shard.

    • size number | string

      The segment size in bytes.

    • size.memory number | string

      The segment memory in bytes. A value of -1 indicates Elasticsearch was unable to compute this number.

    • committed string

      If true, the segment is synced to disk. Segments that are synced can survive a hard reboot. If false, the data from uncommitted segments is also stored in the transaction log so that Elasticsearch is able to replay changes on the next start.

    • searchable string

      If true, the segment is searchable. If false, the segment has most likely been written to disk but needs a refresh to be searchable.

    • version string

      The version of Lucene used to write the segment.

    • compound string

      If true, the segment is stored in a compound file. This means Lucene merged all files from the segment in a single file to save file descriptors.

GET /_cat/segments/{index}
GET /_cat/segments?v=true&format=json
resp = client.cat.segments(
    v=True,
    format="json",
)
const response = await client.cat.segments({
  v: "true",
  format: "json",
});
response = client.cat.segments(
  v: "true",
  format: "json"
)
$resp = $client->cat()->segments([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/segments?v=true&format=json"
client.cat().segments();
Response examples (200)
A successful response from `GET /_cat/segments?v=true&format=json`.
[
  {
    "index": "test",
    "shard": "0",
    "prirep": "p",
    "ip": "127.0.0.1",
    "segment": "_0",
    "generation": "0",
    "docs.count": "1",
    "docs.deleted": "0",
    "size": "3kb",
    "size.memory": "0",
    "committed": "false",
    "searchable": "true",
    "version": "9.12.0",
    "compound": "true"
  },
  {
    "index": "test1",
    "shard": "0",
    "prirep": "p",
    "ip": "127.0.0.1",
    "segment": "_0",
    "generation": "0",
    "docs.count": "1",
    "docs.deleted": "0",
    "size": "3kb",
    "size.memory": "0",
    "committed": "false",
    "searchable": "true",
    "version": "9.12.0",
    "compound": "true"
  }
]



Get index template information Generally available; Added in 5.2.0

GET /_cat/templates/{name}

All methods and paths for this operation:

GET /_cat/templates

GET /_cat/templates/{name}

Get information about the index templates in a cluster. You can use index templates to apply index settings and field mappings to new indices at creation. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get index template API.

Required authorization

  • Cluster privileges: monitor

Path parameters

  • name string Required

    The name of the template to return. Accepts wildcard expressions. If omitted, all templates are returned.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases, the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    • name string

      The template name.

    • index_patterns string

      The template index patterns.

    • order string

      The template application order or priority number.

    • version string | null

      The template version.

    • composed_of string

      The component templates that comprise the index template.

GET /_cat/templates/{name}
GET _cat/templates/my-template-*?v=true&s=name&format=json
resp = client.cat.templates(
    name="my-template-*",
    v=True,
    s="name",
    format="json",
)
const response = await client.cat.templates({
  name: "my-template-*",
  v: "true",
  s: "name",
  format: "json",
});
response = client.cat.templates(
  name: "my-template-*",
  v: "true",
  s: "name",
  format: "json"
)
$resp = $client->cat()->templates([
    "name" => "my-template-*",
    "v" => "true",
    "s" => "name",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/templates/my-template-*?v=true&s=name&format=json"
client.cat().templates();
Response examples (200)
A successful response from `GET _cat/templates/my-template-*?v=true&s=name&format=json`.
[
  {
    "name": "my-template-0",
    "index_patterns": "[te*]",
    "order": "500",
    "version": null,
    "composed_of": "[]"
  },
  {
    "name": "my-template-1",
    "index_patterns": "[tea*]",
    "order": "501",
    "version": null,
    "composed_of": "[]"
  },
  {
    "name": "my-template-2",
    "index_patterns": "[teak*]",
    "order": "502",
    "version": "7",
    "composed_of": "[]"
  }
]



Update the cluster settings Generally available

PUT /_cluster/settings

Configure and update dynamic settings on a running cluster. You can also configure dynamic settings locally on an unstarted or shut down node in elasticsearch.yml.

Updates made with this API can be persistent, which apply across cluster restarts, or transient, which reset after a cluster restart. You can also reset transient or persistent settings by assigning them a null value.
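
For example, here is a minimal sketch with the Python client that resets a transient setting by assigning it a null value (the endpoint, key, and setting name below are only illustrative):

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder endpoint and key

# Passing None serializes to null and removes the transient override,
# so the persistent value or the elasticsearch.yml default applies again.
client.cluster.put_settings(
    transient={"indices.recovery.max_bytes_per_sec": None},
)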

If you configure the same setting using multiple methods, Elasticsearch applies the settings in the following order of precedence: 1) Transient setting; 2) Persistent setting; 3) elasticsearch.yml setting; 4) Default setting value. For example, you can apply a transient setting to override a persistent setting or elasticsearch.yml setting. However, a change to an elasticsearch.yml setting will not override a defined transient or persistent setting.

TIP: In Elastic Cloud, use the user settings feature to configure all cluster settings. This method automatically rejects unsafe settings that could break your cluster. If you run Elasticsearch on your own hardware, use this API to configure dynamic cluster settings. Only use elasticsearch.yml for static cluster settings and node settings. The API doesn’t require a restart and ensures a setting’s value is the same on all nodes.

WARNING: Transient cluster settings are no longer recommended. Use persistent cluster settings instead. If a cluster becomes unstable, transient settings can clear unexpectedly, resulting in a potentially undesired cluster configuration.

External documentation

Query parameters

  • flat_settings boolean

    Return settings in flat format (default: false)

  • master_timeout string

    Explicit operation timeout for connection to master node

    Values are -1 or 0.

  • timeout string

    Explicit operation timeout

    Values are -1 or 0.

application/json

Body Required

  • persistent object

    The settings that persist after the cluster restarts.

    • * object Additional properties
  • transient object

    The settings that do not persist after the cluster restarts.

    • * object Additional properties

Responses

  • 200 application/json
    • acknowledged boolean Required
    • persistent object Required
      • * object Additional properties
    • transient object Required
      • * object Additional properties
PUT /_cluster/settings
PUT /_cluster/settings
{
  "persistent" : {
    "indices.recovery.max_bytes_per_sec" : "50mb"
  }
}
resp = client.cluster.put_settings(
    persistent={
        "indices.recovery.max_bytes_per_sec": "50mb"
    },
)
const response = await client.cluster.putSettings({
  persistent: {
    "indices.recovery.max_bytes_per_sec": "50mb",
  },
});
response = client.cluster.put_settings(
  body: {
    "persistent": {
      "indices.recovery.max_bytes_per_sec": "50mb"
    }
  }
)
$resp = $client->cluster()->putSettings([
    "body" => [
        "persistent" => [
            "indices.recovery.max_bytes_per_sec" => "50mb",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"persistent":{"indices.recovery.max_bytes_per_sec":"50mb"}}' "$ELASTICSEARCH_URL/_cluster/settings"
client.cluster().putSettings(p -> p
    .persistent("indices.recovery.max_bytes_per_sec", JsonData.fromJson("\"50mb\""))
);
Request examples
An example of a persistent update.
{
  "persistent" : {
    "indices.recovery.max_bytes_per_sec" : "50mb"
  }
}
Run `PUT /_cluster/settings` to update the `action.auto_create_index` setting. The setting accepts a comma-separated list of patterns that you want to allow, or you can prefix each pattern with `+` or `-` to indicate whether it should be allowed or blocked. In this example, the auto-creation of indices called `my-index-000001` or `index10` is allowed, the creation of indices that match the pattern `index1*` is blocked, and the creation of any other indices that match the `ind*` pattern is allowed. Patterns are matched in the order specified.
{
  "persistent": {
    "action.auto_create_index": "my-index-000001,index10,-index1*,+ind*" 
  }
}



Get the hot threads for nodes Generally available

GET /_nodes/{node_id}/hot_threads

All methods and paths for this operation:

GET /_nodes/hot_threads

GET /_nodes/{node_id}/hot_threads

Get a breakdown of the hot threads on each selected node in the cluster. The output is plain text with a breakdown of the top hot threads for each node.

Required authorization

  • Cluster privileges: monitor,manage

Path parameters

  • node_id string | array[string] Required

    List of node IDs or names used to limit returned information.

Query parameters

  • ignore_idle_threads boolean

    If true, known idle threads (e.g. waiting in a socket select, or to get a task from an empty queue) are filtered out.

  • interval string

    The interval to do the second sampling of threads.

    Values are -1 or 0.

  • snapshots number

    Number of samples of thread stacktrace.

  • threads number

    Specifies the number of hot threads to provide information for.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • type string

    The type to sample.

    Values are cpu, wait, block, gpu, or mem.

  • sort string

    The sort order for 'cpu' type (default: total)

    Values are cpu, wait, block, gpu, or mem.

Responses

  • 200 application/json
GET /_nodes/{node_id}/hot_threads
GET /_nodes/hot_threads
resp = client.nodes.hot_threads()
const response = await client.nodes.hotThreads();
response = client.nodes.hot_threads
$resp = $client->nodes()->hotThreads();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_nodes/hot_threads"
client.nodes().hotThreads(h -> h);




Reload the keystore on nodes in the cluster Generally available; Added in 6.5.0

POST /_nodes/{node_id}/reload_secure_settings

All methods and paths for this operation:

POST /_nodes/reload_secure_settings

POST /_nodes/{node_id}/reload_secure_settings

Secure settings are stored in an on-disk keystore. Certain of these settings are reloadable. That is, you can change them on disk and reload them without restarting any nodes in the cluster. When you have updated reloadable secure settings in your keystore, you can use this API to reload those settings on each node.

When the Elasticsearch keystore is password protected and not simply obfuscated, you must provide the password for the keystore when you reload the secure settings. Reloading the settings for the whole cluster assumes that the keystores for all nodes are protected with the same password; this method is allowed only when inter-node communications are encrypted. Alternatively, you can reload the secure settings on each node by locally accessing the API and passing the node-specific Elasticsearch keystore password.

Path parameters

  • node_id string | array[string] Required

    The names of particular nodes in the cluster to target.

Query parameters

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

application/json

Body

  • secure_settings_password string

    The password for the Elasticsearch keystore.

Responses

  • 200 application/json
    • _nodes object

      Contains statistics about the number of nodes selected by the request’s node filters.

      • failures array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.
        • type string Required

          The type of error

        • reason
        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by
        • root_cause array[object]
        • suppressed array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      • * object Additional properties
        • name string Required
        • reload_exception object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.
          • type string Required

            The type of error

          • reason
          • stack_trace string

            The server stack trace. Present only if the error_trace=true parameter was sent with the request.

          • root_cause array[object]
          • suppressed array[object]
POST /_nodes/{node_id}/reload_secure_settings
POST _nodes/reload_secure_settings
{
  "secure_settings_password": "keystore-password"
}
resp = client.nodes.reload_secure_settings(
    secure_settings_password="keystore-password",
)
const response = await client.nodes.reloadSecureSettings({
  secure_settings_password: "keystore-password",
});
response = client.nodes.reload_secure_settings(
  body: {
    "secure_settings_password": "keystore-password"
  }
)
$resp = $client->nodes()->reloadSecureSettings([
    "body" => [
        "secure_settings_password" => "keystore-password",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"secure_settings_password":"keystore-password"}' "$ELASTICSEARCH_URL/_nodes/reload_secure_settings"
client.nodes().reloadSecureSettings(r -> r
    .secureSettingsPassword("keystore-password")
);
Request example
Run `POST _nodes/reload_secure_settings` to reload the keystore on nodes in the cluster.
{
  "secure_settings_password": "keystore-password"
}
Response examples (200)
A successful response when reloading keystore on nodes in your cluster.
{
  "_nodes": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "cluster_name": "my_cluster",
  "nodes": {
    "pQHNt5rXTTWNvUgOrdynKg": {
      "name": "node-0"
    }
  }
}



Get a connector Beta; Added in 8.12.0

GET /_connector/{connector_id}

Get the details about a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector

Responses

  • 200 application/json
    • api_key_id string
    • api_key_secret_id string
    • configuration object Required
      • * object Additional properties
        • category string
        • depends_on array[object] Required
          • field string Required
          • value
        • display string Required

          Values are textbox, textarea, numeric, toggle, or dropdown.

        • label string Required
        • options array[object] Required
          • label string Required
          • value
        • order number
        • placeholder string
        • required boolean Required
        • sensitive boolean Required
        • type string

          Values are str, int, list, or bool.

        • ui_restrictions array[string]
        • validations array[object]
        • value object Required
    • custom_scheduling object Required
      • * object Additional properties
        • configuration_overrides object Required
          • max_crawl_depth number
          • sitemap_discovery_disabled boolean
          • domain_allowlist array[string]
          • sitemap_urls array[string]
          • seed_urls array[string]
        • enabled boolean Required
        • interval string Required
        • last_synced string
        • name string Required
    • description string
    • features object
      • document_level_security object

        Indicates whether document-level security is enabled.

        • enabled boolean Required
      • incremental_sync object

        Indicates whether incremental syncs are enabled.

        • enabled boolean Required
      • native_connector_api_keys object

        Indicates whether managed connector API keys are enabled.

        • enabled boolean Required
      • sync_rules object
        • advanced object

          Indicates whether advanced sync rules are enabled.

        • basic object

          Indicates whether basic sync rules are enabled.

    • filtering array[object] Required
      • active object Required
        • advanced_snippet object Required
        • rules array[object] Required
        • validation object Required
      • domain string
      • draft object Required
        • advanced_snippet object Required
        • rules array[object] Required
        • validation object Required
    • id string
    • index_name string | null

    • is_native boolean Required
    • language string
    • last_access_control_sync_error string
    • last_access_control_sync_scheduled_at string | number
    • last_access_control_sync_status string

      Values are canceling, canceled, completed, error, in_progress, pending, or suspended.

    • last_deleted_document_count number
    • last_incremental_sync_scheduled_at string | number
    • last_indexed_document_count number
    • last_seen string | number
    • last_sync_error string
    • last_sync_scheduled_at string | number
    • last_sync_status string

      Values are canceling, canceled, completed, error, in_progress, pending, or suspended.

    • last_synced string | number
    • name string
    • pipeline object
      • extract_binary_content boolean Required
      • name string Required
      • reduce_whitespace boolean Required
      • run_ml_inference boolean Required
    • scheduling object Required
      • access_control object
        • enabled boolean Required
        • interval string Required

          The interval is expressed using the crontab syntax

      • full object
        • enabled boolean Required
        • interval string Required

          The interval is expressed using the crontab syntax

      • incremental object
        • enabled boolean Required
        • interval string Required

          The interval is expressed using the crontab syntax

    • service_type string
    • status string Required

      Values are created, needs_configuration, configured, connected, or error.

    • sync_cursor object
    • sync_now boolean Required
GET /_connector/{connector_id}
GET _connector/my-connector-id
resp = client.connector.get(
    connector_id="my-connector-id",
)
const response = await client.connector.get({
  connector_id: "my-connector-id",
});
response = client.connector.get(
  connector_id: "my-connector-id"
)
$resp = $client->connector()->get([
    "connector_id" => "my-connector-id",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/my-connector-id"
client.connector().get(g -> g
    .connectorId("my-connector-id")
);




Delete a connector Beta; Added in 8.12.0

DELETE /_connector/{connector_id}

Removes a connector and associated sync jobs. This is a destructive action that is not recoverable. NOTE: This action doesn’t delete any API keys, ingest pipelines, or data indices associated with the connector. These need to be removed manually.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be deleted

Query parameters

  • delete_sync_jobs boolean

    A flag indicating if the associated sync jobs should also be removed. Defaults to false.

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_connector/{connector_id}
DELETE _connector/my-connector-id?delete_sync_jobs=true
resp = client.connector.delete(
    connector_id="my-connector-id",
    delete_sync_jobs=True,
)
const response = await client.connector.delete({
  connector_id: "my-connector-id",
  delete_sync_jobs: true,
});
response = client.connector.delete(
  connector_id: "my-connector-id",
  delete_sync_jobs: true
)
$resp = $client->connector()->delete([
    "connector_id" => "my-connector-id",
    "delete_sync_jobs" => true,
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/my-connector-id?delete_sync_jobs=true"
client.connector().delete(d -> d
    .connectorId("my-connector-id")
    .deleteSyncJobs(true)
);
Response examples (200)
{
    "acknowledged": true
}








Cancel a connector sync job Beta; Added in 8.12.0

PUT /_connector/_sync_job/{connector_sync_job_id}/_cancel

Cancel a connector sync job, which sets the status to cancelling and updates cancellation_requested_at to the current time. The connector service is then responsible for setting the status of connector sync jobs to cancelled.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/_sync_job/{connector_sync_job_id}/_cancel
PUT _connector/_sync_job/my-connector-sync-job-id/_cancel
resp = client.connector.sync_job_cancel(
    connector_sync_job_id="my-connector-sync-job-id",
)
const response = await client.connector.syncJobCancel({
  connector_sync_job_id: "my-connector-sync-job-id",
});
response = client.connector.sync_job_cancel(
  connector_sync_job_id: "my-connector-sync-job-id"
)
$resp = $client->connector()->syncJobCancel([
    "connector_sync_job_id" => "my-connector-sync-job-id",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job-id/_cancel"
client.connector().syncJobCancel(s -> s
    .connectorSyncJobId("my-connector-sync-job-id")
);



Update the connector error field Technical preview; Added in 8.12.0

PUT /_connector/{connector_id}/_error

Set the error field for the connector. If the error provided in the request body is non-null, the connector’s status is updated to error. Otherwise, if the error is reset to null, the connector status is updated to connected.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • error string | null Required

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_error
PUT _connector/my-connector/_error
{
    "error": "Houston, we have a problem!"
}
resp = client.connector.update_error(
    connector_id="my-connector",
    error="Houston, we have a problem!",
)
const response = await client.connector.updateError({
  connector_id: "my-connector",
  error: "Houston, we have a problem!",
});
response = client.connector.update_error(
  connector_id: "my-connector",
  body: {
    "error": "Houston, we have a problem!"
  }
)
$resp = $client->connector()->updateError([
    "connector_id" => "my-connector",
    "body" => [
        "error" => "Houston, we have a problem!",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"error":"Houston, we have a problem!"}' "$ELASTICSEARCH_URL/_connector/my-connector/_error"
client.connector().updateError(u -> u
    .connectorId("my-connector")
    .error("Houston, we have a problem!")
);
Request example
{
    "error": "Houston, we have a problem!"
}
Response examples (200)
{
  "result": "updated"
}



Update the connector service type Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_service_type

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • service_type string Required

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_service_type
PUT _connector/my-connector/_service_type
{
    "service_type": "sharepoint_online"
}
resp = client.connector.update_service_type(
    connector_id="my-connector",
    service_type="sharepoint_online",
)
const response = await client.connector.updateServiceType({
  connector_id: "my-connector",
  service_type: "sharepoint_online",
});
response = client.connector.update_service_type(
  connector_id: "my-connector",
  body: {
    "service_type": "sharepoint_online"
  }
)
$resp = $client->connector()->updateServiceType([
    "connector_id" => "my-connector",
    "body" => [
        "service_type" => "sharepoint_online",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service_type":"sharepoint_online"}' "$ELASTICSEARCH_URL/_connector/my-connector/_service_type"
client.connector().updateServiceType(u -> u
    .connectorId("my-connector")
    .serviceType("sharepoint_online")
);
Request example
{
    "service_type": "sharepoint_online"
}
Response examples (200)
{
  "result": "updated"
}



Get follower information Generally available; Added in 6.7.0

GET /{index}/_ccr/info

Get information about all cross-cluster replication follower indices. For example, the results include follower index names, leader index names, replication options, and whether the follower indices are active or paused.

Required authorization

  • Cluster privileges: monitor
External documentation

Path parameters

  • index string | array[string] Required

    A comma-delimited list of follower index patterns.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.

    Values are -1 or 0.

Responses

  • 200 application/json
    • follower_indices array[object] Required
      • follower_index string Required

        The name of the follower index.

      • leader_index string Required

        The name of the index in the leader cluster that is followed.

      • parameters object

        An object that encapsulates cross-cluster replication parameters. If the follower index's status is paused, this object is omitted.

        • max_outstanding_read_requests number

          The maximum number of outstanding read requests from the remote cluster.

        • max_outstanding_write_requests number

          The maximum number of outstanding write requests on the follower.

        • max_read_request_operation_count number

          The maximum number of operations to pull per read from the remote cluster.

        • max_read_request_size
        • max_retry_delay string

          The maximum time to wait before retrying an operation that failed exceptionally. An exponential backoff strategy is employed when retrying.

        • max_write_buffer_count number

          The maximum number of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the number of queued operations goes below the limit.

        • max_write_buffer_size
        • max_write_request_operation_count number

          The maximum number of operations per bulk write request executed on the follower.

        • max_write_request_size
        • read_poll_timeout string

          The maximum time to wait for new operations on the remote cluster when the follower index is synchronized with the leader index. When the timeout has elapsed, the poll for operations will return to the follower so that it can update some statistics. Then the follower will immediately attempt to read from the leader again.

      • remote_cluster string Required

        The remote cluster that contains the leader index.

      • status string Required

        The status of the index following: active or paused.

        Values are active or paused.

GET /{index}/_ccr/info
GET /follower_index/_ccr/info
resp = client.ccr.follow_info(
    index="follower_index",
)
const response = await client.ccr.followInfo({
  index: "follower_index",
});
response = client.ccr.follow_info(
  index: "follower_index"
)
$resp = $client->ccr()->followInfo([
    "index" => "follower_index",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/follower_index/_ccr/info"
client.ccr().followInfo(f -> f
    .index("follower_index")
);
Response examples (200)
A successful response from `GET /follower_index/_ccr/info` when the follower index is active.
{
  "follower_indices": [
    {
      "follower_index": "follower_index",
      "remote_cluster": "remote_cluster",
      "leader_index": "leader_index",
      "status": "active",
      "parameters": {
        "max_read_request_operation_count": 5120,
        "max_read_request_size": "32mb",
        "max_outstanding_read_requests": 12,
        "max_write_request_operation_count": 5120,
        "max_write_request_size": "9223372036854775807b",
        "max_outstanding_write_requests": 9,
        "max_write_buffer_count": 2147483647,
        "max_write_buffer_size": "512mb",
        "max_retry_delay": "500ms",
        "read_poll_timeout": "1m"
      }
    }
  ]
}
A successful response from `GET /follower_index/_ccr/info` when the follower index is paused.
{
  "follower_indices": [
    {
      "follower_index": "follower_index",
      "remote_cluster": "remote_cluster",
      "leader_index": "leader_index",
      "status": "paused"
    }
  ]
}



Update data stream lifecycles Generally available; Added in 8.11.0

PUT /_data_stream/{name}/_lifecycle

Update the data stream lifecycle of the specified data streams.

External documentation

Path parameters

  • name string | array[string] Required

    Comma-separated list of data streams used to limit the request. Supports wildcards (*). To target all data streams use * or _all.

Query parameters

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

application/json

Body

  • data_retention string

    If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely.

  • downsampling object

    The downsampling configuration to execute for the managed backing index after rollover.

    • rounds array[object] Required

      The list of downsampling rounds to execute as part of this downsampling configuration

      • after string Required

        The duration since rollover when this downsampling round should execute

      • config object Required

        The downsample configuration to execute.

  • enabled boolean

    If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

    Default value is true.

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /_data_stream/{name}/_lifecycle
PUT _data_stream/my-data-stream/_lifecycle
{
  "data_retention": "7d"
}
resp = client.indices.put_data_lifecycle(
    name="my-data-stream",
    data_retention="7d",
)
const response = await client.indices.putDataLifecycle({
  name: "my-data-stream",
  data_retention: "7d",
});
response = client.indices.put_data_lifecycle(
  name: "my-data-stream",
  body: {
    "data_retention": "7d"
  }
)
$resp = $client->indices()->putDataLifecycle([
    "name" => "my-data-stream",
    "body" => [
        "data_retention" => "7d",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"data_retention":"7d"}' "$ELASTICSEARCH_URL/_data_stream/my-data-stream/_lifecycle"
client.indices().putDataLifecycle(p -> p
    .dataRetention(d -> d
        .time("7d")
    )
    .name("my-data-stream")
);
Request examples
{
  "data_retention": "7d"
}
This example configures two downsampling rounds.
{
    "downsampling": [
      {
        "after": "1d",
        "fixed_interval": "10m"
      },
      {
        "after": "7d",
        "fixed_interval": "1d"
      }
    ]
}
Response examples (200)
A successful response for configuring a data stream lifecycle.
{
  "acknowledged": true
}

Downsample an index Technical preview; Added in 8.5.0

POST /{index}/_downsample/{target_index}

Aggregate a time series (TSDS) index and store pre-computed statistical summaries (min, max, sum, value_count and avg) for each metric field grouped by a configured time interval. For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index. All documents within an hour interval are summarized and stored as a single document in the downsample index.

NOTE: Only indices in a time series data stream are supported. Neither field nor document level security can be defined on the source index. The source index must be read only (index.blocks.write: true).

Path parameters

  • index string Required

    Name of the time series index to downsample.

  • target_index string Required

    Name of the index to create.

application/json

Body Required

  • fixed_interval string Required

    The interval at which to aggregate the original time series index.

Responses

  • 200 application/json
POST /{index}/_downsample/{target_index}
POST /my-time-series-index/_downsample/my-downsampled-time-series-index
{
  "fixed_interval": "1d"
}
resp = client.indices.downsample(
    index="my-time-series-index",
    target_index="my-downsampled-time-series-index",
    config={
        "fixed_interval": "1d"
    },
)
const response = await client.indices.downsample({
  index: "my-time-series-index",
  target_index: "my-downsampled-time-series-index",
  config: {
    fixed_interval: "1d",
  },
});
response = client.indices.downsample(
  index: "my-time-series-index",
  target_index: "my-downsampled-time-series-index",
  body: {
    "fixed_interval": "1d"
  }
)
$resp = $client->indices()->downsample([
    "index" => "my-time-series-index",
    "target_index" => "my-downsampled-time-series-index",
    "body" => [
        "fixed_interval" => "1d",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"fixed_interval":"1d"}' "$ELASTICSEARCH_URL/my-time-series-index/_downsample/my-downsampled-time-series-index"
client.indices().downsample(d -> d
    .index("my-time-series-index")
    .targetIndex("my-downsampled-time-series-index")
    .config(c -> c
        .fixedInterval(f -> f
            .time("1d")
        )
    )
);
Request example
{
  "fixed_interval": "1d"
}
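Because the source index must be read only before downsampling (see the NOTE above), a typical workflow adds a write block first. A minimal sketch, assuming a plain TSDS backing index name; data stream rollover and ILM handling are omitted.

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder credentials

# 1. Make the source time series index read only (index.blocks.write: true).
client.indices.add_block(index="my-time-series-index", block="write")

# 2. Downsample it into a new index with one summary document per day per time series.
resp = client.indices.downsample(
    index="my-time-series-index",
    target_index="my-downsampled-time-series-index",
    config={"fixed_interval": "1d"},
)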

Delete documents Generally available; Added in 5.0.0

POST /{index}/_delete_by_query

Deletes documents that match the specified query.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias:

  • read
  • delete or write

You can specify the query criteria in the request URI or the request body using the same syntax as the search API. When you submit a delete by query request, Elasticsearch gets a snapshot of the data stream or index when it begins processing the request and deletes matching documents using internal versioning. If a document changes between the time that the snapshot is taken and the delete operation is processed, it results in a version conflict and the delete operation fails.

NOTE: Documents with a version equal to 0 cannot be deleted using delete by query because internal versioning does not support 0 as a valid version number.

While processing a delete by query request, Elasticsearch performs multiple search requests sequentially to find all of the matching documents to delete. A bulk delete request is performed for each batch of matching documents. If a search or bulk request is rejected, the requests are retried up to 10 times, with exponential backoff. If the maximum retry limit is reached, processing halts and all failed requests are returned in the response. Any delete requests that completed successfully still stick; they are not rolled back.

You can opt to count version conflicts instead of halting and returning by setting conflicts to proceed. Note that if you opt to count version conflicts, the operation could attempt to delete more documents from the source than max_docs until it has successfully deleted max_docs documents or it has gone through every document in the source query.
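To illustrate, a hedged Python sketch that counts version conflicts instead of aborting and caps the number of deletions; the index and query are illustrative, and the parameter names follow this API's query and body parameters.

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder credentials

# Count version conflicts instead of aborting, and stop after roughly
# 1000 successful deletions.
resp = client.delete_by_query(
    index="my-index-000001",
    query={"term": {"user.id": "kimchy"}},
    conflicts="proceed",
    max_docs=1000,
)
print(resp["deleted"], resp["version_conflicts"])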

Throttling delete requests

To control the rate at which delete by query issues batches of delete operations, you can set requests_per_second to any positive decimal number. This pads each batch with a wait time to throttle the rate. Set requests_per_second to -1 to disable throttling.

Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account. The padding time is the difference between the batch size divided by the requests_per_second and the time spent writing. By default the batch size is 1000, so if requests_per_second is set to 500:

target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds

Since the batch is issued as a single _bulk request, large batch sizes cause Elasticsearch to create many requests and wait before starting the next set. This is "bursty" instead of "smooth".
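A short sketch of the arithmetic above and of passing the throttle to the API; the numbers simply reproduce the worked example (batch size 1000, requests_per_second set to 500), and the index and query are illustrative.

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder credentials

scroll_size = 1000           # default batch size
requests_per_second = 500
target_time = scroll_size / requests_per_second   # 2.0 seconds per batch
# wait_time = target_time - write_time, computed by Elasticsearch for each batch

resp = client.delete_by_query(
    index="my-index-000001",
    query={"match_all": {}},
    requests_per_second=requests_per_second,
    scroll_size=scroll_size,
)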

Slicing

Delete by query supports sliced scroll to parallelize the delete process. This can improve efficiency and provide a convenient way to break the request down into smaller parts.

Setting slices to auto lets Elasticsearch choose the number of slices to use. This setting will use one slice per shard, up to a certain limit. If there are multiple source data streams or indices, it will choose the number of slices based on the index or backing index with the smallest number of shards. Adding slices to the delete by query operation creates sub-requests which means it has some quirks:

  • You can see these requests in the tasks APIs. These sub-requests are "child" tasks of the task for the request with slices.
  • Fetching the status of the task for the request with slices only contains the status of completed slices.
  • These sub-requests are individually addressable for things like cancellation and rethrottling.
  • Rethrottling the request with slices will rethrottle the unfinished sub-request proportionally.
  • Canceling the request with slices will cancel each sub-request.
  • Due to the nature of slices each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
  • Parameters like requests_per_second and max_docs on a request with slices are distributed proportionally to each sub-request. Combine that with the earlier point about distribution being uneven and you should conclude that using max_docs with slices might not result in exactly max_docs documents being deleted.
  • Each sub-request gets a slightly different snapshot of the source data stream or index though these are all taken at approximately the same time.

If you're slicing manually or otherwise tuning automatic slicing, keep in mind that:

  • Query performance is most efficient when the number of slices is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number as too many slices hurts performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.
  • Delete performance scales linearly across available resources with the number of slices.

Whether query or delete performance dominates the runtime depends on the documents being deleted and the cluster's resources.
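To tie the slicing and task behaviour together, a minimal sketch that launches an automatically sliced, asynchronous delete by query and then polls the parent task; cleaning up the .tasks document afterwards (see wait_for_completion below) is left out.

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder credentials

# Launch the operation without waiting; Elasticsearch returns a task ID.
resp = client.delete_by_query(
    index="my-index-000001",
    query={"range": {"http.response.bytes": {"lt": 2000000}}},
    slices="auto",
    wait_for_completion=False,
)
task_id = resp["task"]

# Poll the parent task; sub-request (slice) status appears as slices complete.
status = client.tasks.get(task_id=task_id)
print(status["completed"], status["task"]["status"])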

Cancel a delete by query operation

Any delete by query can be canceled using the task cancel API. For example:

POST _tasks/r1A2WoRbTwKZ516z6NEs5A:36619/_cancel

The task ID can be found by using the get tasks API.

Cancellation should happen quickly but might take a few seconds. The get task status API will continue to list the delete by query task until this task checks that it has been cancelled and terminates itself.
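The same cancellation can be issued from the Python client; a sketch, assuming the task ID was found with the tasks API as described above.

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder credentials

# List running delete by query tasks to find the task ID.
tasks = client.tasks.list(actions="*/delete/byquery", detailed=True)

# Cancel a specific task; cancellation may take a few seconds to complete.
client.tasks.cancel(task_id="r1A2WoRbTwKZ516z6NEs5A:36619")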

Required authorization

  • Index privileges: read,delete

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • analyzer string

    Analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.

  • analyze_wildcard boolean

    If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.

  • conflicts string

    What to do if delete by query hits version conflicts: abort or proceed.

    Supported values include:

    • abort: Stop the delete by query operation if there are conflicts.
    • proceed: Continue the delete by query operation even if there are conflicts.

    Values are abort or proceed.

  • default_operator string

    The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.

    Values are and, AND, or, or OR.

  • df string

    The field to use as default where no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • from number

    Skips the specified number of documents.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.

  • max_docs number

    The maximum number of documents to process. Defaults to all documents. When set to a value less than or equal to scroll_size, a scroll will not be used to retrieve the results for the operation.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • refresh boolean

    If true, Elasticsearch refreshes all shards involved in the delete by query after the request completes. This is different than the delete API's refresh parameter, which causes just the shard that received the delete request to be refreshed. Unlike the delete API, it does not support wait_for.

  • request_cache boolean

    If true, the request cache is used for this request. Defaults to the index-level setting.

  • requests_per_second number

    The throttle for this request in sub-requests per second.

  • routing string

    A custom value used to route operations to a specific shard.

  • q string

    A query in the Lucene query string syntax.

  • scroll string

    The period to retain the search context for scrolling.

    Values are -1 or 0.

  • scroll_size number

    The size of the scroll request that powers the operation.

  • search_timeout string

    The explicit timeout for each search request. It defaults to no timeout.

    Values are -1 or 0.

  • search_type string

    The type of the search operation. Available options include query_then_fetch and dfs_query_then_fetch.

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • slices number | string

    The number of slices this task should be divided into.

    Value is auto.

  • sort array[string]

    A comma-separated list of <field>:<direction> pairs.

  • stats array[string]

    The specific tag of the request for logging and statistical purposes.

  • terminate_after number

    The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.

    Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.

  • timeout string

    The period each deletion request waits for active shards.

    Values are -1 or 0.

  • version boolean

    If true, returns the document version as part of a hit.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The timeout value controls how long each write request waits for unavailable shards to become available.

    Values are all or index-setting.

  • wait_for_completion boolean

    If true, the request blocks until the operation is complete. If false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at .tasks/task/${taskId}. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.

application/json

Body Required

  • max_docs number

    The maximum number of documents to delete.

  • query object

    The documents to delete specified with Query DSL.

    External documentation
  • slice object

    Slice the request manually using the provided slice ID and total number of slices.

    Hide slice attributes Show slice attributes object
    • field string

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • id string Required
    • max number Required
  • sort string | object | array[string | object]

    A sort object that specifies the order of deleted documents.

    One of:

    A sort object that specifies the order of deleted documents.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • batches number

      The number of scroll responses pulled back by the delete by query.

    • deleted number

      The number of documents that were successfully deleted.

    • failures array[object]

      An array of failures if there were any unrecoverable errors during the process. If this array is not empty, the request ended abnormally because of those failures. Delete by query is implemented using batches and any failures cause the entire process to end but all failures in the current batch are collected into the array. You can use the conflicts option to prevent the operation from ending on version conflicts.

      Hide failures attributes Show failures attributes object
      • cause object Required

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide cause attributes Show cause attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • id string Required
      • index string Required
      • status number Required
    • noops number

      This field is always equal to zero for delete by query. It exists only so that delete by query, update by query, and reindex APIs return responses with the same structure.

    • requests_per_second number

      The number of requests per second effectively run during the delete by query.

    • retries object

      The number of retries attempted by delete by query. bulk is the number of bulk actions retried. search is the number of search actions retried.

      Hide retries attributes Show retries attributes object
      • bulk number Required

        The number of bulk actions retried.

    • slice_id number
    • task string | number

    • throttled string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • throttled_millis number

      The number of milliseconds the request slept to conform to requests_per_second.

    • throttled_until string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • throttled_until_millis number

      This field should always be equal to zero in a delete by query response. It has meaning only when using the task API, where it indicates the next time (in milliseconds since epoch) a throttled request will be run again.

    • timed_out boolean

      If true, some requests run during the delete by query operation timed out.

    • took number

      The number of milliseconds from the start to the end of the whole operation.

    • total number

      The number of documents that were successfully processed.

    • version_conflicts number

      The number of version conflicts that the delete by query hit.

POST /{index}/_delete_by_query
POST /my-index-000001,my-index-000002/_delete_by_query
{
  "query": {
    "match_all": {}
  }
}
resp = client.delete_by_query(
    index="my-index-000001,my-index-000002",
    query={
        "match_all": {}
    },
)
const response = await client.deleteByQuery({
  index: "my-index-000001,my-index-000002",
  query: {
    match_all: {},
  },
});
response = client.delete_by_query(
  index: "my-index-000001,my-index-000002",
  body: {
    "query": {
      "match_all": {}
    }
  }
)
$resp = $client->deleteByQuery([
    "index" => "my-index-000001,my-index-000002",
    "body" => [
        "query" => [
            "match_all" => new ArrayObject([]),
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":{"match_all":{}}}' "$ELASTICSEARCH_URL/my-index-000001,my-index-000002/_delete_by_query"
client.deleteByQuery(d -> d
    .index(List.of("my-index-000001","my-index-000002"))
    .query(q -> q
        .matchAll(m -> m)
    )
);
Request examples
Run `POST /my-index-000001,my-index-000002/_delete_by_query` to delete all documents from multiple data streams or indices.
{
  "query": {
    "match_all": {}
  }
}
Run `POST my-index-000001/_delete_by_query` to delete a document by using a unique attribute.
{
  "query": {
    "term": {
      "user.id": "kimchy"
    }
  },
  "max_docs": 1
}
Run `POST my-index-000001/_delete_by_query` to slice a delete by query manually. Provide a slice ID and total number of slices.
{
  "slice": {
    "id": 0,
    "max": 2
  },
  "query": {
    "range": {
      "http.response.bytes": {
        "lt": 2000000
      }
    }
  }
}
Run `POST my-index-000001/_delete_by_query?refresh&slices=5` to let delete by query automatically parallelize using sliced scroll to slice on `_id`. The `slices` query parameter value specifies the number of slices to use.
{
  "query": {
    "range": {
      "http.response.bytes": {
        "lt": 2000000
      }
    }
  }
}
Response examples (200)
A successful response from `POST /my-index-000001/_delete_by_query`.
{
  "took" : 147,
  "timed_out": false,
  "total": 119,
  "deleted": 119,
  "batches": 1,
  "version_conflicts": 0,
  "noops": 0,
  "retries": {
    "bulk": 0,
    "search": 0
  },
  "throttled_millis": 0,
  "requests_per_second": -1.0,
  "throttled_until_millis": 0,
  "failures" : [ ]
}

Throttle a delete by query operation Generally available; Added in 6.5.0

POST /_delete_by_query/{task_id}/_rethrottle

Change the number of requests per second for a particular delete by query operation. Rethrottling that speeds up the query takes effect immediately, but rethrottling that slows down the query takes effect after completing the current batch to prevent scroll timeouts.

Path parameters

  • task_id string | number Required

    The ID for the task.

Query parameters

  • requests_per_second number

    The throttle for this request in sub-requests per second. To disable throttling, set it to -1.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • node_failures array[object]

      Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      Hide node_failures attributes Show node_failures attributes object
      • type string Required

        The type of error

      • reason string | null

        A human-readable explanation of the error, in English.

      • stack_trace string

        The server stack trace. Present only if the error_trace=true parameter was sent with the request.

      • caused_by object

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • root_cause array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • suppressed array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

    • task_failures array[object]
      Hide task_failures attributes Show task_failures attributes object
      • task_id number Required
      • node_id string Required
      • status string Required
      • reason object Required

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide reason attributes Show reason attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

    • nodes object

      Task information grouped by node, if group_by was set to node (the default).

      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • name string
        • transport_address string
        • host string
        • ip string
        • roles array[string]
        • attributes object
          Hide attributes attribute Show attributes attribute object
          • * string Additional properties
        • tasks object Required
          Hide tasks attribute Show tasks attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • action string Required
            • cancelled boolean
            • cancellable boolean Required
            • description string

              Human readable text that identifies the particular request that the task is performing. For example, it might identify the search request being performed by a search task. Other kinds of tasks have different descriptions, like _reindex which has the source and the destination, or _bulk which just has the number of requests and the destination indices. Many requests will have only an empty description because more detailed information about the request is not easily available or particularly helpful in identifying the request.

            • headers object Required
              Hide headers attribute Show headers attribute object
              • * string Additional properties
            • id number Required
            • node string Required
            • running_time string

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

            • running_time_in_nanos
            • start_time_in_millis
            • status object

              The internal status of the task, which varies from task to task. The format also varies. While the goal is to keep the status for a particular task consistent from version to version, this is not always possible because sometimes the implementation changes. Fields might be removed from the status for a particular request so any parsing you do of the status might break in minor releases.

            • type string Required
            • parent_task_id
    • tasks array[object] | object

      Either a flat list of tasks if group_by was set to none, or grouped by parents if group_by was set to parents.

      One of:

      Either a flat list of tasks if group_by was set to none, or grouped by parents if group_by was set to parents.

POST /_delete_by_query/{task_id}/_rethrottle
POST _delete_by_query/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1
resp = client.delete_by_query_rethrottle(
    task_id="r1A2WoRbTwKZ516z6NEs5A:36619",
    requests_per_second="-1",
)
const response = await client.deleteByQueryRethrottle({
  task_id: "r1A2WoRbTwKZ516z6NEs5A:36619",
  requests_per_second: "-1",
});
response = client.delete_by_query_rethrottle(
  task_id: "r1A2WoRbTwKZ516z6NEs5A:36619",
  requests_per_second: "-1"
)
$resp = $client->deleteByQueryRethrottle([
    "task_id" => "r1A2WoRbTwKZ516z6NEs5A:36619",
    "requests_per_second" => "-1",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_delete_by_query/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1"
client.deleteByQueryRethrottle(d -> d
    .requestsPerSecond(-1.0F)
    .taskId("r1A2WoRbTwKZ516z6NEs5A:36619")
);

ES|QL

The Elasticsearch Query Language (ES|QL) provides a powerful way to filter, transform, and analyze data stored in Elasticsearch, and in the future in other runtimes.

Learn more about ES|QL
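As a quick orientation, a hedged Python sketch that runs an ES|QL query through the client's esql.query helper; the index name, filter, and fields are illustrative only.

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder credentials

# Filter and limit with an ES|QL query (index and field names are illustrative).
resp = client.esql.query(
    query="FROM my-index-000001 | WHERE http.response.bytes < 2000000 | LIMIT 10",
)
print(resp["columns"], resp["values"])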

Get global checkpoints Generally available; Added in 7.13.0

GET /{index}/_fleet/global_checkpoints

Get the current global checkpoints for an index. This API is designed for internal use by the Fleet server project.

Path parameters

  • index string Required

    A single index or index alias that resolves to a single index.

Query parameters

  • wait_for_advance boolean

    A boolean value which controls whether to wait (until the timeout) for the global checkpoints to advance past the provided checkpoints.

  • wait_for_index boolean

    A boolean value which controls whether to wait (until the timeout) for the target index to exist and all primary shards to be active. It can only be true when wait_for_advance is true.

  • checkpoints array[number]

    A comma-separated list of previous global checkpoints. When used in combination with wait_for_advance, the API will only return once the global checkpoints advance past the checkpoints. Providing an empty list will cause Elasticsearch to immediately return the current global checkpoints.

  • timeout string

    The period to wait for the global checkpoints to advance past the provided checkpoints.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • global_checkpoints array[number] Required
    • timed_out boolean Required
GET /{index}/_fleet/global_checkpoints
curl \
 --request GET 'http://api.example.com/{index}/_fleet/global_checkpoints' \
 --header "Authorization: $API_KEY"
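A hedged Python sketch of the wait-for-advance behaviour described above, using the client's fleet namespace; the index name and checkpoint value are illustrative.

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder credentials

# Block (up to the timeout) until the global checkpoints advance past 5.
resp = client.fleet.global_checkpoints(
    index="my-index-000001",
    wait_for_advance=True,
    wait_for_index=True,
    checkpoints=[5],
    timeout="30s",
)
print(resp["global_checkpoints"], resp["timed_out"])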

Index

Index APIs enable you to manage individual indices, index settings, aliases, mappings, and index templates.

Add an index block Generally available; Added in 7.9.0

PUT /{index}/_block/{block}

Add an index block to an index. Index blocks limit the operations allowed on an index by blocking specific operation types.

Path parameters

  • index string Required

    A comma-separated list or wildcard expression of index names used to limit the request. By default, you must explicitly name the indices you are adding blocks to. To allow the adding of blocks to indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. You can update this setting in the elasticsearch.yml file or by using the cluster update settings API.

  • block string

    The block type to add to the index.

    Supported values include:

    • metadata: Disable metadata changes, such as closing the index.
    • read: Disable read operations.
    • read_only: Disable write operations and metadata changes.
    • write: Disable write operations. However, metadata changes are still allowed.

    Values are metadata, read, read_only, or write.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.

    Values are -1 or 0.

  • timeout string

    The period to wait for a response from all relevant nodes in the cluster after updating the cluster metadata. If no response is received before the timeout expires, the cluster metadata update still applies but the response will indicate that it was not completely acknowledged. It can also be set to -1 to indicate that the request should never timeout.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • shards_acknowledged boolean Required
    • indices array[object] Required
      Hide indices attributes Show indices attributes object
      • name string Required
      • blocked boolean Required
PUT /{index}/_block/{block}
PUT /my-index-000001/_block/write
resp = client.indices.add_block(
    index="my-index-000001",
    block="write",
)
const response = await client.indices.addBlock({
  index: "my-index-000001",
  block: "write",
});
response = client.indices.add_block(
  index: "my-index-000001",
  block: "write"
)
$resp = $client->indices()->addBlock([
    "index" => "my-index-000001",
    "block" => "write",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_block/write"
client.indices().addBlock(a -> a
    .block(IndicesBlockOptions.Write)
    .index("my-index-000001")
);
Response examples (200)
A successful response from `PUT /my-index-000001/_block/write`, which adds an index block to an index.
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "indices" : [ {
    "name" : "my-index-000001",
    "blocked" : true
  } ]
}

Create or update an index template Generally available; Added in 7.9.0

POST /_index_template/{name}

All methods and paths for this operation:

PUT /_index_template/{name}

POST /_index_template/{name}

Index templates define settings, mappings, and aliases that can be applied automatically to new indices.

Elasticsearch applies templates to new indices based on a wildcard pattern that matches the index name. Index templates are applied during data stream or index creation. For data streams, these settings and mappings are applied when the stream's backing indices are created. Settings and mappings specified in a create index API request override any settings or mappings specified in an index template. Changes to index templates do not affect existing indices, including the existing backing indices of a data stream.

You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.

Multiple matching templates

If multiple index templates match the name of a new index or data stream, the template with the highest priority is used.

Multiple templates with overlapping index patterns at the same priority are not allowed and an error will be thrown when attempting to create a template matching an existing index template at identical priorities.

Composing aliases, mappings, and settings

When multiple component templates are specified in the composed_of field for an index template, they are merged in the order specified, meaning that later component templates override earlier component templates. Any mappings, settings, or aliases from the parent index template are merged in next. Finally, any configuration on the index request itself is merged. Mapping definitions are merged recursively, which means that later mapping components can introduce new field mappings and update the mapping configuration. If a field mapping is already contained in an earlier component, its definition will be completely overwritten by the later one. This recursive merging strategy applies not only to field mappings, but also root options like dynamic_templates and meta. If an earlier component contains a dynamic_templates block, then by default new dynamic_templates entries are appended onto the end. If an entry already exists with the same key, then it is overwritten by the new definition.
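A minimal sketch of the composition order described above: two component templates merged via composed_of, with the later one taking precedence; the template names and settings are illustrative.

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder credentials

# Two component templates; the one listed later in composed_of wins on conflicts.
client.cluster.put_component_template(
    name="base-settings",
    template={"settings": {"number_of_shards": 1}},
)
client.cluster.put_component_template(
    name="override-settings",
    template={"settings": {"number_of_shards": 2}},
)

# Indices matching template* get number_of_shards: 2 unless the
# create-index request itself overrides it.
client.indices.put_index_template(
    name="template_1",
    index_patterns=["template*"],
    composed_of=["base-settings", "override-settings"],
    priority=1,
)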

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

    Index or template name

Query parameters

  • create boolean

    If true, this request cannot replace or update existing index templates.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • cause string

    User defined reason for creating/updating the index template

application/json

Body Required

  • index_patterns string | array[string]

    Wildcard (*) expressions used to match the names of data streams and indices during creation.

  • composed_of array[string]

    An ordered list of component template names. Component templates are merged in the order specified, meaning that the last component template specified has the highest precedence.

  • template object

    Template to be applied. It may optionally include an aliases, mappings, or settings configuration.

    Hide template attributes Show template attributes object
    • aliases object

      Aliases to add. If the index template includes a data_stream object, these are data stream aliases. Otherwise, these are index aliases. Data stream aliases ignore the index_routing, routing, and search_routing options.

      Hide aliases attribute Show aliases attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • filter object

          Query used to limit documents the alias can access.

          External documentation
        • index_routing string

          Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

        • is_hidden boolean

          If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

          Default value is false.

        • is_write_index boolean

          If true, the index is the write index for the alias.

          Default value is false.

        • routing string

          Value used to route indexing and search operations to a specific shard.

        • search_routing string

          Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

    • mappings object

      Mapping for fields in the index. If specified, this mapping can include field names, field data types, and mapping parameters.

      Hide mappings attributes Show mappings attributes object
      • all_field object
        Hide all_field attributes Show all_field attributes object
        • analyzer string Required
        • enabled boolean Required
        • omit_norms boolean Required
        • search_analyzer string Required
        • similarity string Required
        • store boolean Required
        • store_term_vector_offsets boolean Required
        • store_term_vector_payloads boolean Required
        • store_term_vector_positions boolean Required
        • store_term_vectors boolean Required
      • date_detection boolean
      • dynamic string

        Values are strict, runtime, true, or false.

      • dynamic_date_formats array[string]
      • dynamic_templates array[object]
      • _field_names object
        Hide _field_names attribute Show _field_names attribute object
        • enabled boolean Required
      • index_field object
        Hide index_field attribute Show index_field attribute object
        • enabled boolean Required
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • numeric_detection boolean
      • properties object
      • _routing object
        Hide _routing attribute Show _routing attribute object
        • required boolean Required
      • _size object
        Hide _size attribute Show _size attribute object
        • enabled boolean Required
      • _source object
        Hide _source attributes Show _source attributes object
        • compress boolean
        • compress_threshold string
        • enabled boolean
        • excludes array[string]
        • includes array[string]
      • runtime object
        Hide runtime attribute Show runtime attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • fields object

            For type composite

          • fetch_fields array[object]

            For type lookup

          • format string

            A custom format for date type runtime fields.

      • enabled boolean
      • subobjects string

        Values are true or false.

      • _data_stream_timestamp object
        Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
        • enabled boolean Required
    • settings object

      Configuration options for the index.

      Index settings
    • lifecycle object

      Data stream lifecycle denotes that a data stream is managed by the data stream lifecycle and contains the configuration.

      Hide lifecycle attributes Show lifecycle attributes object
      • data_retention string

        If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely.

      • downsampling object

        The downsampling configuration to execute for the managed backing index after rollover.

        Hide downsampling attribute Show downsampling attribute object
        • rounds array[object] Required

          The list of downsampling rounds to execute as part of this downsampling configuration

      • enabled boolean

        If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

        Default value is true.

  • data_stream object

    If this object is included, the template is used to create data streams and their backing indices. Supports an empty object. Data streams require a matching index template with a data_stream object.

    Hide data_stream attributes Show data_stream attributes object
    • hidden boolean
    • allow_custom_routing boolean
  • priority number

    Priority to determine index template precedence when a new data stream or index is created. The index template with the highest priority is chosen. If no priority is specified the template is treated as though it is of priority 0 (lowest priority). This number is not automatically generated by Elasticsearch.

  • version number

    Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch. External systems can use these version numbers to simplify template management. To unset a version, replace the template without specifying one.

  • _meta object

    Optional user metadata about the index template. It may have any contents. It is not automatically generated or used by Elasticsearch. This user-defined object is stored in the cluster state, so keeping it short is preferable. To unset the metadata, replace the template without specifying it.

    Hide _meta attribute Show _meta attribute object
    • * object Additional properties
  • allow_auto_create boolean

    This setting overrides the value of the action.auto_create_index cluster setting. If set to true in a template, then indices can be automatically created using that template even if auto-creation of indices is disabled via actions.auto_create_index. If set to false, then indices or data streams matching the template must always be explicitly created, and may never be automatically created.

  • ignore_missing_component_templates array[string]

    The configuration option ignore_missing_component_templates can be used when an index template references a component template that might not exist.

  • deprecated boolean

    Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_index_template/{name}
PUT /_index_template/template_1
{
  "index_patterns" : ["template*"],
  "priority" : 1,
  "template": {
    "settings" : {
      "number_of_shards" : 2
    }
  }
}
resp = client.indices.put_index_template(
    name="template_1",
    index_patterns=[
        "template*"
    ],
    priority=1,
    template={
        "settings": {
            "number_of_shards": 2
        }
    },
)
const response = await client.indices.putIndexTemplate({
  name: "template_1",
  index_patterns: ["template*"],
  priority: 1,
  template: {
    settings: {
      number_of_shards: 2,
    },
  },
});
response = client.indices.put_index_template(
  name: "template_1",
  body: {
    "index_patterns": [
      "template*"
    ],
    "priority": 1,
    "template": {
      "settings": {
        "number_of_shards": 2
      }
    }
  }
)
$resp = $client->indices()->putIndexTemplate([
    "name" => "template_1",
    "body" => [
        "index_patterns" => array(
            "template*",
        ),
        "priority" => 1,
        "template" => [
            "settings" => [
                "number_of_shards" => 2,
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index_patterns":["template*"],"priority":1,"template":{"settings":{"number_of_shards":2}}}' "$ELASTICSEARCH_URL/_index_template/template_1"
client.indices().putIndexTemplate(p -> p
    .indexPatterns("template*")
    .name("template_1")
    .priority(1L)
    .template(t -> t
        .settings(s -> s
            .numberOfShards("2")
        )
    )
);
Request examples
{
  "index_patterns" : ["template*"],
  "priority" : 1,
  "template": {
    "settings" : {
      "number_of_shards" : 2
    }
  }
}
You can include index aliases in an index template. During index creation, the `{index}` placeholder in the alias name will be replaced with the actual index name that the template gets applied to.
{
  "index_patterns": [
    "template*"
  ],
  "template": {
    "settings": {
      "number_of_shards": 1
    },
    "aliases": {
      "alias1": {},
      "alias2": {
        "filter": {
          "term": {
            "user.id": "kimchy"
          }
        },
        "routing": "shard-1"
      },
      "{index}-alias": {}
    }
  }
}

Check existence of index templates Generally available

HEAD /_template/{name}

Get information about whether index templates exist. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.

IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

Required authorization

  • Cluster privileges: manage_index_templates
External documentation

Path parameters

  • name string | array[string] Required

    A comma-separated list of index template names used to limit the request. Wildcard (*) expressions are supported.

Query parameters

  • flat_settings boolean

    Indicates whether to use a flat format for the response.

  • local boolean

    Indicates whether to get information from the local node only.

  • master_timeout string

    The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1.

    Values are -1 or 0.

Responses

  • 200 application/json
HEAD /_template/{name}
HEAD /_template/template_1
resp = client.indices.exists_template(
    name="template_1",
)
const response = await client.indices.existsTemplate({
  name: "template_1",
});
response = client.indices.exists_template(
  name: "template_1"
)
$resp = $client->indices()->existsTemplate([
    "name" => "template_1",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_template/template_1"
client.indices().existsTemplate(e -> e
    .name("template_1")
);

Get index settings Generally available

GET /{index}/_settings/{name}

All methods and paths for this operation:

GET /_settings

GET /_settings/{name}
GET /{index}/_settings
GET /{index}/_settings/{name}

Get setting information for one or more indices. For data streams, it returns setting information for the stream's backing indices.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

  • name string | array[string] Required

    Comma-separated list or wildcard expression of settings to retrieve.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • flat_settings boolean

    If true, returns settings in flat format.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • include_defaults boolean

    If true, return all default settings in the response.

  • local boolean

    If true, the request retrieves information from the local node only. If false, information is retrieved from the master node.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object
      Hide * attributes Show * attributes object
      • aliases object
        Hide aliases attribute Show aliases attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • filter object

            Query used to limit documents the alias can access.

            External documentation
          • index_routing string

            Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

          • is_hidden boolean

            If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

            Default value is false.

          • is_write_index boolean

            If true, the index is the write index for the alias.

            Default value is false.

          • routing string

            Value used to route indexing and search operations to a specific shard.

          • search_routing string

            Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

      • mappings object
        Hide mappings attributes Show mappings attributes object
        • all_field object
          Hide all_field attributes Show all_field attributes object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          Hide _field_names attribute Show _field_names attribute object
          • enabled boolean Required
        • index_field object
          Hide index_field attribute Show index_field attribute object
          • enabled boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          Hide _routing attribute Show _routing attribute object
          • required boolean Required
        • _size object
          Hide _size attribute Show _size attribute object
          • enabled boolean Required
        • _source object
          Hide _source attributes Show _source attributes object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
          • enabled boolean Required
      • settings object
        Index settings
      • defaults object

        Default settings, included when the request's include_defaults parameter is true.

        Index settings
      • data_stream string
      • lifecycle object

        Data stream lifecycle applicable if this is a data stream.

        Hide lifecycle attributes Show lifecycle attributes object
        • data_retention string

          If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely.

        • downsampling object

          The downsampling configuration to execute for the managed backing index after rollover.

          Hide downsampling attribute Show downsampling attribute object
          • rounds array[object] Required

            The list of downsampling rounds to execute as part of this downsampling configuration

        • enabled boolean

          If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

          Default value is true.

GET /{index}/_settings/{name}
GET _all/_settings?expand_wildcards=all&filter_path=*.settings.index.*.slowlog
resp = client.indices.get_settings(
    index="_all",
    expand_wildcards="all",
    filter_path="*.settings.index.*.slowlog",
)
const response = await client.indices.getSettings({
  index: "_all",
  expand_wildcards: "all",
  filter_path: "*.settings.index.*.slowlog",
});
response = client.indices.get_settings(
  index: "_all",
  expand_wildcards: "all",
  filter_path: "*.settings.index.*.slowlog"
)
$resp = $client->indices()->getSettings([
    "index" => "_all",
    "expand_wildcards" => "all",
    "filter_path" => "*.settings.index.*.slowlog",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_all/_settings?expand_wildcards=all&filter_path=*.settings.index.*.slowlog"




Update index settings Generally available

PUT /{index}/_settings

All methods and paths for this operation:

PUT /_settings

PUT /{index}/_settings

Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default.

To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in index settings documentation. To preserve existing settings from being updated, set the preserve_existing parameter to true.

There are multiple valid ways to represent index settings in the request body. You can specify only the setting, for example:

{
  "number_of_replicas": 1
}

Or you can use an index setting object:

{
  "index": {
    "number_of_replicas": 1
  }
}

Or you can use dot annotation:

{
  "index.number_of_replicas": 1
}

Or you can embed any of the aforementioned options in a settings object. For example:

{
  "settings": {
    "index": {
      "number_of_replicas": 1
    }
  }
}
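
For instance, a minimal Python sketch (using the same elasticsearch-py client as the examples below; the index name is illustrative) that applies a replica setting without overwriting values already set explicitly on the index:

# `preserve_existing=true` leaves any setting that is already present on the
# index untouched and only applies the values that are missing.
resp = client.indices.put_settings(
    index="my-index-000001",
    settings={"index": {"number_of_replicas": 1}},
    preserve_existing=True,
)
print(resp["acknowledged"])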

NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.

Required authorization

  • Index privileges: manage
External documentation

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • flat_settings boolean

    If true, returns settings in flat format.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • preserve_existing boolean

    If true, existing index settings remain unchanged.

  • reopen boolean

    Whether to close and reopen the index to apply non-dynamic settings. If set to true, the indices to which the settings are being applied are closed temporarily and then reopened in order to apply the changes.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

application/json

Body Required

object object
Index settings

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /{index}/_settings
PUT /my-index-000001/_settings
{
  "index" : {
    "number_of_replicas" : 2
  }
}
resp = client.indices.put_settings(
    index="my-index-000001",
    settings={
        "index": {
            "number_of_replicas": 2
        }
    },
)
const response = await client.indices.putSettings({
  index: "my-index-000001",
  settings: {
    index: {
      number_of_replicas: 2,
    },
  },
});
response = client.indices.put_settings(
  index: "my-index-000001",
  body: {
    "index": {
      "number_of_replicas": 2
    }
  }
)
$resp = $client->indices()->putSettings([
    "index" => "my-index-000001",
    "body" => [
        "index" => [
            "number_of_replicas" => 2,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index":{"number_of_replicas":2}}' "$ELASTICSEARCH_URL/my-index-000001/_settings"
client.indices().putSettings(p -> p
    .index("my-index-000001")
    .settings(s -> s
        .index(i -> i
            .numberOfReplicas("2")
        )
    )
);
Request examples
{
  "index" : {
    "number_of_replicas" : 2
  }
}
To revert a setting to the default value, use `null`.
{
  "index" : {
    "refresh_interval" : null
  }
}
To add an analyzer, you must close the index (`POST /my-index-000001/_close`), define the analyzer, then reopen the index (`POST /my-index-000001/_open`).
{
  "analysis": {
    "analyzer": {
      "content": {
        "type": "custom",
        "tokenizer": "whitespace"
      }
    }
  }
}
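
As a rough end-to-end sketch of that close, update, reopen sequence (Python, elasticsearch-py; the index and analyzer names are illustrative):

# Analyzer definitions are static settings, so the index must be closed first.
client.indices.close(index="my-index-000001")

# Define the new analyzer while the index is closed.
client.indices.put_settings(
    index="my-index-000001",
    settings={
        "analysis": {
            "analyzer": {
                "content": {"type": "custom", "tokenizer": "whitespace"}
            }
        }
    },
)

# Reopen the index so it can serve searches and writes again.
client.indices.open(index="my-index-000001")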




























Get index shard stores Generally available

GET /{index}/_shard_stores

All methods and paths for this operation:

GET /_shard_stores

GET /{index}/_shard_stores

Get store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream's backing indices.

The index shard stores API returns the following information:

  • The node on which each replica shard exists.
  • The allocation ID for each replica shard.
  • A unique ID for each replica shard.
  • Any errors encountered while opening the shard index or from an earlier failure.

By default, the API returns store information only for primary shards that are unassigned or have one or more unassigned replica shards.

Required authorization

  • Index privileges: monitor

Path parameters

  • index string | array[string] Required

    List of data streams, indices, and aliases used to limit the request.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If true, missing or closed indices are not included in the response.

  • status string | array[string]

    List of shard health statuses used to limit the request.

    Supported values include:

    • green: The primary shard and all replica shards are assigned.
    • yellow: One or more replica shards are unassigned.
    • red: The primary shard is unassigned.
    • all: Return all shards, regardless of health status.

    Values are green, yellow, red, or all.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • indices object Required
      Hide indices attribute Show indices attribute object
      • * object Additional properties
        Hide * attribute Show * attribute object
        • shards object Required
          Hide shards attribute Show shards attribute object
          • * object Additional properties
            Hide * attribute Show * attribute object
            • stores array[object] Required
GET /{index}/_shard_stores
GET /_shard_stores?status=green
resp = client.indices.shard_stores(
    status="green",
)
const response = await client.indices.shardStores({
  status: "green",
});
response = client.indices.shard_stores(
  status: "green"
)
$resp = $client->indices()->shardStores([
    "status" => "green",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_shard_stores?status=green"
Response examples (200)
An abbreviated response from `GET /_shard_stores?status=green`.
{
  "indices": {
    "my-index-000001": {
      "shards": {
        "0": {
          "stores": [
            {
              "sPa3OgxLSYGvQ4oPs-Tajw": {
                "name": "node_t0",
                "ephemeral_id": "9NlXRFGCT1m8tkvYCMK-8A",
                "transport_address": "local[1]",
                "external_id": "node_t0",
                "attributes": {},
                "roles": [],
                "version": "8.10.0",
                "min_index_version": 7000099,
                "max_index_version": 8100099
              },
              "allocation_id": "2iNySv_OQVePRX-yaRH_lQ",
              "allocation": "primary",
              "store_exception": {}
            }
          ]
        }
      }
    }
  }
}
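
To show how that response is shaped, here is a small Python sketch (the index, node, and allocation IDs come from the abbreviated example above; real responses contain your own values) that lists which node holds each reported shard copy:

resp = client.indices.shard_stores(status="green")

for index_name, index_info in resp["indices"].items():
    for shard_number, shard_info in index_info["shards"].items():
        for store in shard_info["stores"]:
            # Each store object mixes node entries (keyed by node ID) with
            # metadata fields such as `allocation_id` and `allocation`.
            allocation = store.get("allocation")
            node_ids = [
                key
                for key in store
                if key not in ("allocation_id", "allocation", "store_exception")
            ]
            for node_id in node_ids:
                node_name = store[node_id]["name"]
                print(f"{index_name}[{shard_number}]: {allocation} copy on {node_name} ({node_id})")
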

Shrink an index Generally available; Added in 5.0.0

POST /{index}/_shrink/{target}

All methods and paths for this operation:

PUT /{index}/_shrink/{target}

POST /{index}/_shrink/{target}

Shrink an index into a new index with fewer primary shards.

Before you can shrink an index:

  • The index must be read-only.
  • A copy of every shard in the index must reside on the same node.
  • The index must have a green health status.

To make shard allocation easier, we recommend you also remove the index's replica shards. You can later re-add replica shards as part of the shrink operation.

The requested number of primary shards in the target index must be a factor of the number of shards in the source index. For example, an index with 8 primary shards can be shrunk into 4, 2, or 1 primary shards, and an index with 15 primary shards can be shrunk into 5, 3, or 1. If the number of shards in the index is a prime number, it can only be shrunk into a single primary shard. Before shrinking, a (primary or replica) copy of every shard in the index must be present on the same node.

The current write index on a data stream cannot be shrunk. In order to shrink the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be shrunk.

A shrink operation:

  • Creates a new target index with the same definition as the source index, but with a smaller number of primary shards.
  • Hard-links segments from the source index into the target index. If the file system does not support hard-linking, all segments are copied into the new index instead, which is a much more time-consuming process. Also, if multiple data paths are used, shards on different data paths require a full copy of segment files if they are not on the same disk, since hard links do not work across disks.
  • Recovers the target index as though it were a closed index which had just been re-opened. Recovers shards to the .routing.allocation.initial_recovery._id index setting.

IMPORTANT: Indices can only be shrunk if they satisfy the following requirements:

  • The target index must not exist.
  • The source index must have more primary shards than the target index.
  • The number of primary shards in the target index must be a factor of the number of primary shards in the source index.
  • The index must not contain more than 2,147,483,519 documents in total across all shards that will be shrunk into a single shard on the target index as this is the maximum number of docs that can fit into a single shard.
  • The node handling the shrink process must have sufficient free disk space to accommodate a second copy of the existing index.

Required authorization

  • Index privileges: manage

Path parameters

  • index string Required

    Name of the source index to shrink.

  • target string Required

    Name of the target index to create.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

    Values are all or index-setting.

application/json

Body

  • aliases object

    The key is the alias name. Index alias names support date math.

    Hide aliases attribute Show aliases attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • filter object

        Query used to limit documents the alias can access.

        External documentation
      • index_routing string

        Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

      • is_hidden boolean

        If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

        Default value is false.

      • is_write_index boolean

        If true, the index is the write index for the alias.

        Default value is false.

      • routing string

        Value used to route indexing and search operations to a specific shard.

      • search_routing string

        Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

  • settings object

    Configuration options for the target index.

    Hide settings attribute Show settings attribute object
    • * object Additional properties

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • shards_acknowledged boolean Required
    • index string Required
POST /{index}/_shrink/{target}
POST /my_source_index/_shrink/my_target_index
{
  "settings": {
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}
resp = client.indices.shrink(
    index="my_source_index",
    target="my_target_index",
    settings={
        "index.routing.allocation.require._name": None,
        "index.blocks.write": None
    },
)
const response = await client.indices.shrink({
  index: "my_source_index",
  target: "my_target_index",
  settings: {
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null,
  },
});
response = client.indices.shrink(
  index: "my_source_index",
  target: "my_target_index",
  body: {
    "settings": {
      "index.routing.allocation.require._name": nil,
      "index.blocks.write": nil
    }
  }
)
$resp = $client->indices()->shrink([
    "index" => "my_source_index",
    "target" => "my_target_index",
    "body" => [
        "settings" => [
            "index.routing.allocation.require._name" => null,
            "index.blocks.write" => null,
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"settings":{"index.routing.allocation.require._name":null,"index.blocks.write":null}}' "$ELASTICSEARCH_URL/my_source_index/_shrink/my_target_index"
client.indices().shrink(s -> s
    .index("my_source_index")
    .settings(Map.of("index.blocks.write", JsonData.fromJson("null"),"index.routing.allocation.require._name", JsonData.fromJson("null")))
    .target("my_target_index")
);
Request example
{
  "settings": {
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}
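
A rough Python sketch of the full prepare-then-shrink sequence described above (elasticsearch-py; the index names and the shrink_node_name value are illustrative assumptions):

# 1. Require a copy of every shard on a single node and block writes.
client.indices.put_settings(
    index="my_source_index",
    settings={
        "index.routing.allocation.require._name": "shrink_node_name",
        "index.blocks.write": True,
    },
)

# 2. Wait until shard relocation has finished.
client.cluster.health(
    index="my_source_index",
    wait_for_no_relocating_shards=True,
    timeout="2m",
)

# 3. Shrink, clearing the temporary allocation requirement and the write
#    block on the target index.
client.indices.shrink(
    index="my_source_index",
    target="my_target_index",
    settings={
        "index.routing.allocation.require._name": None,
        "index.blocks.write": None,
    },
)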
















Unfreeze an index Deprecated Generally available; Added in 6.6.0

POST /{index}/_unfreeze

When a frozen index is unfrozen, the index goes through the normal recovery process and becomes writeable again.

Required authorization

  • Index privileges: manage

Path parameters

  • index string Required

    Identifier for the index.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • wait_for_active_shards string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • shards_acknowledged boolean Required
POST /{index}/_unfreeze
curl \
 --request POST 'http://api.example.com/{index}/_unfreeze' \
 --header "Authorization: $API_KEY"
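
For completeness, an equivalent Python sketch (the index name is illustrative; because the freeze and unfreeze APIs are deprecated, indices.unfreeze may not be exposed by every client version):

# Unfreeze a previously frozen index so it becomes writeable again.
resp = client.indices.unfreeze(index="my-index-000001")
print(resp["acknowledged"], resp["shards_acknowledged"])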




Validate a query Generally available; Added in 1.3.0

POST /{index}/_validate/query

All methods and paths for this operation:

GET /_validate/query

POST /_validate/query
GET /{index}/_validate/query
POST /{index}/_validate/query

Validates a query without running it.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases to search. Supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • all_shards boolean

    If true, the validation is executed on all shards instead of one random shard per index.

  • analyzer string

    Analyzer to use for the query string. This parameter can only be used when the q query string parameter is specified.

  • analyze_wildcard boolean

    If true, wildcard and prefix queries are analyzed.

  • default_operator string

    The default operator for query string query: AND or OR.

    Values are and, AND, or, or OR.

  • df string

    Field to use as default where no field prefix is given in the query string. This parameter can only be used when the q query string parameter is specified.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • explain boolean

    If true, the response returns detailed information if an error has occurred.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored.

  • rewrite boolean

    If true, returns a more detailed explanation showing the actual Lucene query that will be executed.

  • q string

    Query in the Lucene query string syntax.

application/json

Body

  • query object

    The query definition to validate, specified using the Query DSL.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • explanations array[object]
      Hide explanations attributes Show explanations attributes object
      • error string
      • explanation string
      • index string Required
      • valid boolean Required
    • _shards object
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • valid boolean Required
    • error string
POST /{index}/_validate/query
GET my-index-000001/_validate/query?q=user.id:kimchy
resp = client.indices.validate_query(
    index="my-index-000001",
    q="user.id:kimchy",
)
const response = await client.indices.validateQuery({
  index: "my-index-000001",
  q: "user.id:kimchy",
});
response = client.indices.validate_query(
  index: "my-index-000001",
  q: "user.id:kimchy"
)
$resp = $client->indices()->validateQuery([
    "index" => "my-index-000001",
    "q" => "user.id:kimchy",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_validate/query?q=user.id:kimchy"
client.indices().validateQuery(v -> v
    .index("my-index-000001")
    .q("user.id:kimchy")
);
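
Beyond the q query string shown above, the query can also be sent in the request body. A small Python sketch (the index name and the query itself are illustrative) that asks for the rewritten Lucene query and per-index explanations:

resp = client.indices.validate_query(
    index="my-index-000001",
    query={
        "query_string": {
            "query": "user.id:kimchy AND message:search",
            "lenient": False,
        }
    },
    rewrite=True,
    explain=True,
)

print(resp["valid"])
for explanation in resp.get("explanations", []):
    print(explanation["index"], explanation.get("explanation") or explanation.get("error"))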

























Move to a lifecycle step Generally available; Added in 6.6.0

POST /_ilm/move/{index}

Manually move an index into a specific step in the lifecycle policy and run that step.

WARNING: This operation can result in the loss of data. Manually moving an index into a specific step runs that step even if it has already been performed. This is a potentially destructive action and this should be considered an expert level API.

You must specify both the current step and the step to be executed in the body of the request. The request will fail if the current step does not match the step currently running for the index. This is to prevent the index from being moved from an unexpected step into the next step.

When specifying the target (next_step) to which the index will be moved, the name field, or both the action and name fields, can be omitted. If only the phase is specified, the index will move to the first step of the first action in the target phase. If the phase and action are specified, the index will move to the first step of the specified action in the specified phase. Only actions specified in the ILM policy are considered valid. An index cannot move to a step that is not part of its policy.

Required authorization

  • Index privileges: manage_ilm

Path parameters

  • index string Required

    The name of the index whose lifecycle step is to change

application/json

Body

  • current_step object Required

    The step that the index is expected to be in.

    Hide current_step attributes Show current_step attributes object
    • action string

      The optional action to which the index will be moved.

    • name string

      The optional step name to which the index will be moved.

    • phase string Required
  • next_step object Required

    The step that you want to run.

    Hide next_step attributes Show next_step attributes object
    • action string

      The optional action to which the index will be moved.

    • name string

      The optional step name to which the index will be moved.

    • phase string Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_ilm/move/{index}
POST _ilm/move/my-index-000001
{
  "current_step": {
    "phase": "new",
    "action": "complete",
    "name": "complete"
  },
  "next_step": {
    "phase": "warm",
    "action": "forcemerge",
    "name": "forcemerge"
  }
}
resp = client.ilm.move_to_step(
    index="my-index-000001",
    current_step={
        "phase": "new",
        "action": "complete",
        "name": "complete"
    },
    next_step={
        "phase": "warm",
        "action": "forcemerge",
        "name": "forcemerge"
    },
)
const response = await client.ilm.moveToStep({
  index: "my-index-000001",
  current_step: {
    phase: "new",
    action: "complete",
    name: "complete",
  },
  next_step: {
    phase: "warm",
    action: "forcemerge",
    name: "forcemerge",
  },
});
response = client.ilm.move_to_step(
  index: "my-index-000001",
  body: {
    "current_step": {
      "phase": "new",
      "action": "complete",
      "name": "complete"
    },
    "next_step": {
      "phase": "warm",
      "action": "forcemerge",
      "name": "forcemerge"
    }
  }
)
$resp = $client->ilm()->moveToStep([
    "index" => "my-index-000001",
    "body" => [
        "current_step" => [
            "phase" => "new",
            "action" => "complete",
            "name" => "complete",
        ],
        "next_step" => [
            "phase" => "warm",
            "action" => "forcemerge",
            "name" => "forcemerge",
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"current_step":{"phase":"new","action":"complete","name":"complete"},"next_step":{"phase":"warm","action":"forcemerge","name":"forcemerge"}}' "$ELASTICSEARCH_URL/_ilm/move/my-index-000001"
client.ilm().moveToStep(m -> m
    .currentStep(c -> c
        .action("complete")
        .name("complete")
        .phase("new")
    )
    .index("my-index-000001")
    .nextStep(n -> n
        .action("forcemerge")
        .name("forcemerge")
        .phase("warm")
    )
);
Request examples
Run `POST _ilm/move/my-index-000001` to move `my-index-000001` from the initial step to the `forcemerge` step.
{
  "current_step": {
    "phase": "new",
    "action": "complete",
    "name": "complete"
  },
  "next_step": {
    "phase": "warm",
    "action": "forcemerge",
    "name": "forcemerge"
  }
}
Run `POST _ilm/move/my-index-000001` to move `my-index-000001` from the end of the hot phase into the start of the warm phase.
{
  "current_step": {
    "phase": "hot",
    "action": "complete",
    "name": "complete"
  },
  "next_step": {
    "phase": "warm"
  }
}
Response examples (200)
A successful response when running a specific step in a lifecycle policy.
{
  "acknowledged": true
}
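
Because the request is rejected when current_step does not match the step the index is actually on, it can help to read the index's current phase, action, and step first. A hedged Python sketch (the index name is illustrative and assumes the index is managed by ILM), using the ILM explain API:

# Look up the step the index is currently on.
explain = client.ilm.explain_lifecycle(index="my-index-000001")
current = explain["indices"]["my-index-000001"]

# Move to the forcemerge step in the warm phase, echoing the observed
# current step so the request is not rejected for a mismatch.
client.ilm.move_to_step(
    index="my-index-000001",
    current_step={
        "phase": current["phase"],
        "action": current["action"],
        "name": current["step"],
    },
    next_step={
        "phase": "warm",
        "action": "forcemerge",
        "name": "forcemerge",
    },
)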





























Create an inference endpoint Generally available; Added in 8.11.0

PUT /_inference/{task_type}/{inference_id}

All methods and paths for this operation:

PUT /_inference/{inference_id}

PUT /_inference/{task_type}/{inference_id}

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

The following integrations are available through the inference API. You can find the available task types next to the integration name:

  • AlibabaCloud AI Search (completion, rerank, sparse_embedding, text_embedding)
  • Amazon Bedrock (completion, text_embedding)
  • Amazon SageMaker (chat_completion, completion, rerank, sparse_embedding, text_embedding)
  • Anthropic (completion)
  • Azure AI Studio (completion, text_embedding)
  • Azure OpenAI (completion, text_embedding)
  • Cohere (completion, rerank, text_embedding)
  • DeepSeek (chat_completion, completion)
  • Elasticsearch (rerank, sparse_embedding, text_embedding - this service is for built-in models and models uploaded through Eland)
  • ELSER (sparse_embedding)
  • Google AI Studio (completion, text_embedding)
  • Google Vertex AI (chat_completion, completion, rerank, text_embedding)
  • Hugging Face (chat_completion, completion, rerank, text_embedding)
  • JinaAI (rerank, text_embedding)
  • Llama (chat_completion, completion, text_embedding)
  • Mistral (chat_completion, completion, text_embedding)
  • OpenAI (chat_completion, completion, text_embedding)
  • VoyageAI (rerank, text_embedding)
  • Watsonx inference integration (text_embedding)

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string Required

    The task type. Refer to the integration list in the API description for the available task types.

    Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

  • inference_id string Required

    The inference Id

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body Required

  • chunking_settings object

    Chunking configuration object

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The service type

  • service_settings object Required

    Settings specific to the service

  • task_settings object

    Task settings specific to the service and task type

Responses

  • 200 application/json
    Hide response attributes Show response attributes object

    Represents an inference endpoint as returned by the GET API

    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

PUT /_inference/{task_type}/{inference_id}
PUT _inference/rerank/my-rerank-model
{
 "service": "cohere",
 "service_settings": {
   "model_id": "rerank-english-v3.0",
   "api_key": "{{COHERE_API_KEY}}"
 }
}
resp = client.inference.put(
    task_type="rerank",
    inference_id="my-rerank-model",
    inference_config={
        "service": "cohere",
        "service_settings": {
            "model_id": "rerank-english-v3.0",
            "api_key": "{{COHERE_API_KEY}}"
        }
    },
)
const response = await client.inference.put({
  task_type: "rerank",
  inference_id: "my-rerank-model",
  inference_config: {
    service: "cohere",
    service_settings: {
      model_id: "rerank-english-v3.0",
      api_key: "{{COHERE_API_KEY}}",
    },
  },
});
response = client.inference.put(
  task_type: "rerank",
  inference_id: "my-rerank-model",
  body: {
    "service": "cohere",
    "service_settings": {
      "model_id": "rerank-english-v3.0",
      "api_key": "{{COHERE_API_KEY}}"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "rerank",
    "inference_id" => "my-rerank-model",
    "body" => [
        "service" => "cohere",
        "service_settings" => [
            "model_id" => "rerank-english-v3.0",
            "api_key" => "{{COHERE_API_KEY}}",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"cohere","service_settings":{"model_id":"rerank-english-v3.0","api_key":"{{COHERE_API_KEY}}"}}' "$ELASTICSEARCH_URL/_inference/rerank/my-rerank-model"
client.inference().put(p -> p
    .inferenceId("my-rerank-model")
    .taskType(TaskType.Rerank)
    .inferenceConfig(i -> i
        .service("cohere")
        .serviceSettings(JsonData.fromJson("{\"model_id\":\"rerank-english-v3.0\",\"api_key\":\"{{COHERE_API_KEY}}\"}"))
    )
);
Request example
An example body for a `PUT _inference/rerank/my-rerank-model` request.
{
 "service": "cohere",
 "service_settings": {
   "model_id": "rerank-english-v3.0",
   "api_key": "{{COHERE_API_KEY}}"
 }
}
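
As a further illustration of the chunking_settings object described above, a hedged Python sketch that creates a text_embedding endpoint backed by the elasticsearch service (the endpoint name, model ID, and allocation sizes are illustrative assumptions):

resp = client.inference.put(
    task_type="text_embedding",
    inference_id="my-e5-endpoint",
    inference_config={
        "service": "elasticsearch",
        "service_settings": {
            "model_id": ".multilingual-e5-small",
            "num_allocations": 1,
            "num_threads": 1,
        },
        # Chunk long inputs sentence by sentence, with one sentence of overlap.
        "chunking_settings": {
            "strategy": "sentence",
            "max_chunk_size": 250,
            "sentence_overlap": 1,
        },
    },
)
print(resp["inference_id"], resp["task_type"])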


Get machine learning memory usage info Generally available; Added in 8.2.0

GET /_ml/memory/{node_id}/_stats

All methods and paths for this operation:

GET /_ml/memory/_stats

GET /_ml/memory/{node_id}/_stats

Get information about how machine learning jobs and trained models are using memory on each node, both within the JVM heap and natively, outside of the JVM.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • node_id string Required

    The names of particular nodes in the cluster to target. For example, nodeId1,nodeId2 or ml:true

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object Required

      Contains statistics about the number of nodes selected by the request.

      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide failures attributes Show failures attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • attributes object Required
          Hide attributes attribute Show attributes attribute object
          • * string Additional properties
        • jvm object Required

          Contains Java Virtual Machine (JVM) statistics for the node.

          Hide jvm attributes Show jvm attributes object
          • heap_max
          • heap_max_in_bytes number Required

            Maximum amount of memory, in bytes, available for use by the heap.

          • java_inference
          • java_inference_in_bytes number Required

            Amount of Java heap, in bytes, currently being used for caching inference models.

          • java_inference_max
          • java_inference_max_in_bytes number Required

            Maximum amount of Java heap, in bytes, to be used for caching inference models.

        • mem object Required

          Contains statistics about memory usage for the node.

          Hide mem attributes Show mem attributes object
          • adjusted_total
          • adjusted_total_in_bytes number Required

            If the amount of physical memory has been overridden using the es.total_memory_bytes system property, this reports the overridden value in bytes. Otherwise, it reports the same value as total_in_bytes.

          • total
          • total_in_bytes number Required

            Total amount of physical memory in bytes.

          • ml object Required

            Contains statistics about machine learning use of native memory on the node.

        • name string Required

          Human-readable identifier for the node. Based on the node name setting.

        • roles array[string] Required

          Roles assigned to the node.

        • transport_address string Required

          The host and port where transport HTTP connections are accepted.

        • ephemeral_id string Required
GET /_ml/memory/{node_id}/_stats
GET _ml/memory/_stats?human
resp = client.ml.get_memory_stats(
    human=True,
)
const response = await client.ml.getMemoryStats({
  human: "true",
});
response = client.ml.get_memory_stats(
  human: "true"
)
$resp = $client->ml()->getMemoryStats([
    "human" => "true",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/memory/_stats?human"

Get machine learning information Generally available; Added in 6.3.0

GET /_ml/info

Get defaults and limits used by machine learning. This endpoint is designed to be used by a user interface that needs to fully understand machine learning configurations where some options are not specified, meaning that the defaults should be used. This endpoint may be used to find out what those defaults are. It also provides information about the maximum size of machine learning jobs that could run in the current cluster configuration.

Required authorization

  • Cluster privileges: monitor_ml

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • defaults object Required
      Hide defaults attributes Show defaults attributes object
      • anomaly_detectors object Required
        Hide anomaly_detectors attributes Show anomaly_detectors attributes object
        • categorization_analyzer
        • categorization_examples_limit number Required
        • model_memory_limit string Required
        • model_snapshot_retention_days number Required
        • daily_model_snapshot_retention_after_days number Required
      • datafeeds object Required
        Hide datafeeds attribute Show datafeeds attribute object
        • scroll_size number Required
    • limits object Required
      Hide limits attributes Show limits attributes object
    • upgrade_mode boolean Required
    • native_code object Required
      Hide native_code attributes Show native_code attributes object
      • build_hash string Required
      • version string Required
GET /_ml/info
GET _ml/info
resp = client.ml.info()
const response = await client.ml.info();
response = client.ml.info
$resp = $client->ml()->info();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/info"
client.ml().info();




Machine learning anomaly detection

























Delete anomaly jobs from a calendar Generally available; Added in 6.2.0

DELETE /_ml/calendars/{calendar_id}/jobs/{job_id}

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • calendar_id string Required

    A string that uniquely identifies a calendar.

  • job_id string | array[string] Required

    An identifier for the anomaly detection jobs. It can be a job identifier, a group name, or a comma-separated list of jobs or groups.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • calendar_id string Required

      A string that uniquely identifies a calendar.

    • description string

      A description of the calendar.

    • job_ids string | array[string]

      A list of anomaly detection job identifiers or group names.

      One of:

      A list of anomaly detection job identifiers or group names.

DELETE /_ml/calendars/{calendar_id}/jobs/{job_id}
DELETE _ml/calendars/planned-outages/jobs/total-requests
resp = client.ml.delete_calendar_job(
    calendar_id="planned-outages",
    job_id="total-requests",
)
const response = await client.ml.deleteCalendarJob({
  calendar_id: "planned-outages",
  job_id: "total-requests",
});
response = client.ml.delete_calendar_job(
  calendar_id: "planned-outages",
  job_id: "total-requests"
)
$resp = $client->ml()->deleteCalendarJob([
    "calendar_id" => "planned-outages",
    "job_id" => "total-requests",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/calendars/planned-outages/jobs/total-requests"
client.ml().deleteCalendarJob(d -> d
    .calendarId("planned-outages")
    .jobId("total-requests")
);
Response examples (200)
A successful response when deleting an anomaly detection job from a calendar.
{
  "calendar_id": "planned-outages",
  "job_ids": []
}




























Delete forecasts from a job Generally available; Added in 6.5.0

DELETE /_ml/anomaly_detectors/{job_id}/_forecast/{forecast_id}

All methods and paths for this operation:

DELETE /_ml/anomaly_detectors/{job_id}/_forecast

DELETE /_ml/anomaly_detectors/{job_id}/_forecast/{forecast_id}

By default, forecasts are retained for 14 days. You can specify a different retention period with the expires_in parameter in the forecast jobs API. The delete forecast API enables you to delete one or more forecasts before they expire.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job.

  • forecast_id string Required

    A comma-separated list of forecast identifiers. If you do not specify this optional parameter or if you specify _all or * the API deletes all forecasts from the job.

Query parameters

  • allow_no_forecasts boolean

    Specifies whether an error occurs when there are no forecasts. In particular, if this parameter is set to false and there are no forecasts associated with the job, attempts to delete all forecasts return an error.

  • timeout string

    Specifies the period of time to wait for the completion of the delete operation. When this period of time elapses, the API fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ml/anomaly_detectors/{job_id}/_forecast/{forecast_id}
DELETE _ml/anomaly_detectors/total-requests/_forecast/_all
resp = client.ml.delete_forecast(
    job_id="total-requests",
    forecast_id="_all",
)
const response = await client.ml.deleteForecast({
  job_id: "total-requests",
  forecast_id: "_all",
});
response = client.ml.delete_forecast(
  job_id: "total-requests",
  forecast_id: "_all"
)
$resp = $client->ml()->deleteForecast([
    "job_id" => "total-requests",
    "forecast_id" => "_all",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/anomaly_detectors/total-requests/_forecast/_all"
client.ml().deleteForecast(d -> d
    .forecastId("_all")
    .jobId("total-requests")
);
Response examples (200)
A successful response when deleting a forecast from an anomaly detection job.
{
  "acknowledged": true
}




































Get info about events in calendars Generally available; Added in 6.2.0

GET /_ml/calendars/{calendar_id}/events

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • calendar_id string Required

    A string that uniquely identifies a calendar. You can get information for multiple calendars by using a comma-separated list of ids or a wildcard expression. You can get information for all calendars by using _all or * or by omitting the calendar identifier.

Query parameters

  • end string | number

    Specifies to get events with timestamps earlier than this time.

  • from number

    Skips the specified number of events.

  • job_id string

    Specifies to get events for a specific anomaly detection job identifier or job group. It must be used with a calendar identifier of _all or *.

  • size number

    Specifies the maximum number of events to obtain.

  • start string | number

    Specifies to get events with timestamps after this time.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • events array[object] Required
      Hide events attributes Show events attributes object
      • calendar_id string

        A string that uniquely identifies a calendar.

      • event_id string
      • description string Required

        A description of the scheduled event.

      • end_time string | number

        The timestamp for the end of the scheduled event in milliseconds since the epoch or ISO 8601 format.

        One of:

        The timestamp for the end of the scheduled event in milliseconds since the epoch or ISO 8601 format.

      • start_time string | number

        The timestamp for the beginning of the scheduled event in milliseconds since the epoch or ISO 8601 format.

        One of:

        The timestamp for the beginning of the scheduled event in milliseconds since the epoch or ISO 8601 format.

GET /_ml/calendars/{calendar_id}/events
GET _ml/calendars/planned-outages/events
resp = client.ml.get_calendar_events(
    calendar_id="planned-outages",
)
const response = await client.ml.getCalendarEvents({
  calendar_id: "planned-outages",
});
response = client.ml.get_calendar_events(
  calendar_id: "planned-outages"
)
$resp = $client->ml()->getCalendarEvents([
    "calendar_id" => "planned-outages",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/calendars/planned-outages/events"
client.ml().getCalendarEvents(g -> g
    .calendarId("planned-outages")
);
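
The start, end, and size parameters can be combined to page through events in a time window. A small Python sketch (the calendar identifier and the timestamps are illustrative):

resp = client.ml.get_calendar_events(
    calendar_id="planned-outages",
    start="2025-01-01T00:00:00Z",
    end="2025-02-01T00:00:00Z",
    size=50,
)
for event in resp["events"]:
    print(event["description"], event["start_time"], event["end_time"])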








Get datafeed stats Generally available; Added in 5.5.0

GET /_ml/datafeeds/{datafeed_id}/_stats

All methods and paths for this operation:

GET /_ml/datafeeds/_stats

GET /_ml/datafeeds/{datafeed_id}/_stats

You can get statistics for multiple datafeeds in a single API request by using a comma-separated list of datafeeds or a wildcard expression. You can get statistics for all datafeeds by using _all, by specifying * as the <feed_id>, or by omitting the <feed_id>. If the datafeed is stopped, the only information you receive is the datafeed_id and the state. This API returns a maximum of 10,000 datafeeds.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • datafeed_id string | array[string] Required

    Identifier for the datafeed. It can be a datafeed identifier or a wildcard expression. If you do not specify one of these options, the API returns information about all datafeeds.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    1. Contains wildcard expressions and there are no datafeeds that match.
    2. Contains the _all string or no identifiers and there are no matches.
    3. Contains wildcard expressions and there are only partial matches.

    The default value is true, which returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • datafeeds array[object] Required
      Hide datafeeds attributes Show datafeeds attributes object
      • assignment_explanation string

        For started datafeeds only, contains messages relating to the selection of a node.

      • datafeed_id string Required

        A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.

      • node object

        For started datafeeds only, this information pertains to the node upon which the datafeed is started.

        Hide node attributes Show node attributes object
        • name string Required
        • ephemeral_id string Required
        • id string Required
        • transport_address string Required
        • attributes object Required
          Hide attributes attribute Show attributes attribute object
          • * string Additional properties
      • state string Required

        The status of the datafeed, which can be one of the following values: starting, started, stopping, stopped.

        Values are started, stopped, starting, or stopping.

      • timing_stats object

        An object that provides statistical information about timing aspect of this datafeed.

        Hide timing_stats attributes Show timing_stats attributes object
        • bucket_count number Required

          The number of buckets processed.

        • exponential_average_calculation_context object
        • job_id string Required

          Identifier for the anomaly detection job.

        • search_count number Required

          The number of searches run by the datafeed.

      • running_state object

        An object containing the running state for this datafeed. It is only provided if the datafeed is started.

        Hide running_state attributes Show running_state attributes object
        • real_time_configured boolean Required

          Indicates if the datafeed is "real-time"; meaning that the datafeed has no configured end time.

        • real_time_running boolean Required

          Indicates whether the datafeed has finished running on the available past data. For datafeeds without a configured end time, this means that the datafeed is now running on "real-time" data.

        • search_interval object

          Provides the latest time interval the datafeed has searched.

GET /_ml/datafeeds/{datafeed_id}/_stats
GET _ml/datafeeds/datafeed-high_sum_total_sales/_stats
resp = client.ml.get_datafeed_stats(
    datafeed_id="datafeed-high_sum_total_sales",
)
const response = await client.ml.getDatafeedStats({
  datafeed_id: "datafeed-high_sum_total_sales",
});
response = client.ml.get_datafeed_stats(
  datafeed_id: "datafeed-high_sum_total_sales"
)
$resp = $client->ml()->getDatafeedStats([
    "datafeed_id" => "datafeed-high_sum_total_sales",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/datafeeds/datafeed-high_sum_total_sales/_stats"
client.ml().getDatafeedStats(g -> g
    .datafeedId("datafeed-high_sum_total_sales")
);
































Reset an anomaly detection job Generally available; Added in 7.14.0

POST /_ml/anomaly_detectors/{job_id}/_reset

All model state and results are deleted. The job is ready to start over as if it had just been created. It is not currently possible to reset multiple jobs using wildcards or a comma separated list.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • job_id string Required

    The ID of the job to reset.

Query parameters

  • wait_for_completion boolean

    Specifies whether the request waits until the operation has completed before returning.

  • delete_user_annotations boolean

    Specifies whether annotations that have been added by the user should be deleted along with any auto-generated annotations when the job is reset.
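
As a hedged sketch with the Python client (assuming a client version that exposes delete_user_annotations, and reusing the total-requests job from the example below), a reset that returns immediately and preserves user annotations might look like this:

resp = client.ml.reset_job(
    job_id="total-requests",
    wait_for_completion=False,
    delete_user_annotations=False,
)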

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_ml/anomaly_detectors/{job_id}/_reset
POST _ml/anomaly_detectors/total-requests/_reset
resp = client.ml.reset_job(
    job_id="total-requests",
)
const response = await client.ml.resetJob({
  job_id: "total-requests",
});
response = client.ml.reset_job(
  job_id: "total-requests"
)
$resp = $client->ml()->resetJob([
    "job_id" => "total-requests",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/anomaly_detectors/total-requests/_reset"
client.ml().resetJob(r -> r
    .jobId("total-requests")
);








Stop datafeeds Generally available; Added in 5.4.0

POST /_ml/datafeeds/{datafeed_id}/_stop

A datafeed that is stopped ceases to retrieve data from Elasticsearch. A datafeed can be started and stopped multiple times throughout its lifecycle.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • datafeed_id string Required

    Identifier for the datafeed. You can stop multiple datafeeds in a single API request by using a comma-separated list of datafeeds or a wildcard expression. You can stop all datafeeds by using _all or by specifying * as the identifier.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no datafeeds that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • force boolean

    If true, the datafeed is stopped forcefully.

  • timeout string

    Specifies the amount of time to wait until a datafeed stops.

    Values are -1 or 0.

application/json

Body

  • allow_no_match boolean

    Refer to the description for the allow_no_match query parameter.

    Default value is true.

  • force boolean

    Refer to the description for the force query parameter.

    Default value is false.

  • timeout string

    Refer to the description for the timeout query parameter.
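
For instance, a minimal sketch with the Python client that force-stops every datafeed via the _all identifier described above:

resp = client.ml.stop_datafeed(
    datafeed_id="_all",
    force=True,
    timeout="30s",
)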

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • stopped boolean Required
POST /_ml/datafeeds/{datafeed_id}/_stop
POST _ml/datafeeds/datafeed-low_request_rate/_stop
{
  "timeout": "30s"
}
resp = client.ml.stop_datafeed(
    datafeed_id="datafeed-low_request_rate",
    timeout="30s",
)
const response = await client.ml.stopDatafeed({
  datafeed_id: "datafeed-low_request_rate",
  timeout: "30s",
});
response = client.ml.stop_datafeed(
  datafeed_id: "datafeed-low_request_rate",
  body: {
    "timeout": "30s"
  }
)
$resp = $client->ml()->stopDatafeed([
    "datafeed_id" => "datafeed-low_request_rate",
    "body" => [
        "timeout" => "30s",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"timeout":"30s"}' "$ELASTICSEARCH_URL/_ml/datafeeds/datafeed-low_request_rate/_stop"
client.ml().stopDatafeed(s -> s
    .datafeedId("datafeed-low_request_rate")
    .timeout(t -> t
        .time("30s")
    )
);
Request example
An example body for a `POST _ml/datafeeds/datafeed-low_request_rate/_stop` request.
{
  "timeout": "30s"
}





















Get data frame analytics job configuration info Generally available; Added in 7.3.0

GET /_ml/data_frame/analytics/{id}

All methods and paths for this operation:

GET /_ml/data_frame/analytics

GET /_ml/data_frame/analytics/{id}

You can get information for multiple data frame analytics jobs in a single API request by using a comma-separated list of data frame analytics jobs or a wildcard expression.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • id string Required

    Identifier for the data frame analytics job. If you do not specify this option, the API returns information for the first hundred data frame analytics jobs.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    1. Contains wildcard expressions and there are no data frame analytics jobs that match.
    2. Contains the _all string or no identifiers and there are no matches.
    3. Contains wildcard expressions and there are only partial matches.

    The default value is true, which returns an empty data_frame_analytics array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.

  • from number

    Skips the specified number of data frame analytics jobs.

  • size number

    Specifies the maximum number of data frame analytics jobs to obtain.

  • exclude_generated boolean

    Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.
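
A hedged sketch with the Python client that pages through configurations and strips generated fields so they can be re-created on another cluster (the loganalytics* wildcard is hypothetical):

resp = client.ml.get_data_frame_analytics(
    id="loganalytics*",
    from_=0,
    size=50,
    exclude_generated=True,
)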

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • data_frame_analytics array[object] Required

      An array of data frame analytics job resources, which are sorted by the id value in ascending order.

      Hide data_frame_analytics attributes Show data_frame_analytics attributes object
      • allow_lazy_start boolean
      • analysis object Required
        Hide analysis attribute Show analysis attribute object
        • outlier_detection object

          The configuration information necessary to perform outlier detection. NOTE: Advanced parameters are for fine-tuning classification analysis. They are set automatically by hyperparameter optimization to give the minimum validation error. It is highly recommended to use the default values unless you fully understand the function of these parameters.

      • analyzed_fields object
        Hide analyzed_fields attributes Show analyzed_fields attributes object
        • includes array[string] Required

          An array of strings that defines the fields that will be included in the analysis.

        • excludes array[string] Required

          An array of strings that defines the fields that will be excluded from the analysis. You do not need to add fields with unsupported data types to excludes; these fields are excluded from the analysis automatically.

      • authorization object

        The security privileges that the job uses to run its queries. If Elastic Stack security features were disabled at the time of the most recent update to the job, this property is omitted.

        Hide authorization attributes Show authorization attributes object
        • api_key object

          If an API key was used for the most recent update to the job, its name and identifier are listed in the response.

        • roles array[string]

          If a user ID was used for the most recent update to the job, its roles at the time of the update are listed in the response.

        • service_account string

          If a service account was used for the most recent update to the job, the account name is listed in the response.

      • create_time number

        Time unit for milliseconds

      • description string
      • dest object Required
        Hide dest attributes Show dest attributes object
        • index string Required

          Defines the destination index to store the results of the data frame analytics job.

        • results_field string

          Defines the name of the field in which to store the results of the analysis. Defaults to ml.

      • id string Required
      • max_num_threads number
      • model_memory_limit string
      • source object Required
        Hide source attributes Show source attributes object
        • index string | array[string] Required

          Index or indices on which to perform the analysis. It can be a single index or index pattern as well as an array of indices or patterns. NOTE: If your source indices contain documents with the same IDs, only the document that is indexed last appears in the destination index.

        • runtime_mappings object

          Definitions of runtime fields that will become part of the mapping of the destination index.

        • _source object

          Specify includes and/or excludes patterns to select which fields will be present in the destination. Fields that are excluded cannot be included in the analysis.

        • query object

          The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. By default, this property has the following value: {"match_all": {}}.

          Query DSL
      • version string
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
GET /_ml/data_frame/analytics/{id}
GET _ml/data_frame/analytics/loganalytics
resp = client.ml.get_data_frame_analytics(
    id="loganalytics",
)
const response = await client.ml.getDataFrameAnalytics({
  id: "loganalytics",
});
response = client.ml.get_data_frame_analytics(
  id: "loganalytics"
)
$resp = $client->ml()->getDataFrameAnalytics([
    "id" => "loganalytics",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/data_frame/analytics/loganalytics"
client.ml().getDataFrameAnalytics(g -> g
    .id("loganalytics")
);






































































































Reindex legacy backing indices Technical preview; Added in 8.18.0

POST /_migration/reindex

Reindex all legacy backing indices for a data stream. This operation occurs in a persistent task. The persistent task ID is returned immediately and the reindexing work is completed in that task.

application/json

Body Required

  • mode string Required

    Reindex mode. Currently only 'upgrade' is supported.

    Value is upgrade.

  • source object Required

    The source index or data stream (only data streams are currently supported).

    Hide source attribute Show source attribute object
    • index string Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_migration/reindex
POST _migration/reindex
{
    "source": {
        "index": "my-data-stream"
    },
    "mode": "upgrade"
}
resp = client.perform_request(
    "POST",
    "/_migration/reindex",
    headers={"Content-Type": "application/json"},
    body={
        "source": {
            "index": "my-data-stream"
        },
        "mode": "upgrade"
    },
)
const response = await client.transport.request({
  method: "POST",
  path: "/_migration/reindex",
  body: {
    source: {
      index: "my-data-stream",
    },
    mode: "upgrade",
  },
});
response = client.perform_request(
  "POST",
  "/_migration/reindex",
  {},
  {
    "source": {
      "index": "my-data-stream"
    },
    "mode": "upgrade"
  },
  { "Content-Type": "application/json" },
)
$requestFactory = Psr17FactoryDiscovery::findRequestFactory();
$streamFactory = Psr17FactoryDiscovery::findStreamFactory();
$request = $requestFactory->createRequest(
    "POST",
    "/_migration/reindex",
);
$request = $request->withHeader("Content-Type", "application/json");
$request = $request->withBody($streamFactory->createStream(
    json_encode([
        "source" => [
            "index" => "my-data-stream",
        ],
        "mode" => "upgrade",
    ]),
));
$resp = $client->sendRequest($request);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"source":{"index":"my-data-stream"},"mode":"upgrade"}' "$ELASTICSEARCH_URL/_migration/reindex"
client.indices().migrateReindex(m -> m
    .reindex(r -> r
        .mode(ModeEnum.Upgrade)
        .source(s -> s
            .index("my-data-stream")
        )
    )
);
Request example
An example body for a `POST _migration/reindex` request.
{
    "source": {
        "index": "my-data-stream"
    },
    "mode": "upgrade"
}
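
Because the reindexing work continues in a persistent task, a follow-up status check can be made against the same data stream. A hedged sketch using the same low-level transport call as the Python example above, assuming the GET _migration/reindex/{index}/_status endpoint:

status = client.perform_request(
    "GET",
    "/_migration/reindex/my-data-stream/_status",
)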






































































Start rollup jobs Deprecated Technical preview; Added in 6.3.0

POST /_rollup/job/{id}/_start

If you try to start a job that does not exist, an exception occurs. If you try to start a job that is already started, nothing happens.
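
A minimal sketch with the Python client, assuming elasticsearch-py surfaces the missing-job case as NotFoundError; starting an already started job simply leaves it running:

from elasticsearch import NotFoundError

try:
    resp = client.rollup.start_job(id="sensor")
except NotFoundError:
    print("rollup job 'sensor' does not exist")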

Required authorization

  • Cluster privileges: manage_rollup

Path parameters

  • id string Required

    Identifier for the rollup job.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • started boolean Required
POST /_rollup/job/{id}/_start
POST _rollup/job/sensor/_start
resp = client.rollup.start_job(
    id="sensor",
)
const response = await client.rollup.startJob({
  id: "sensor",
});
response = client.rollup.start_job(
  id: "sensor"
)
$resp = $client->rollup()->startJob([
    "id" => "sensor",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_rollup/job/sensor/_start"
client.rollup().startJob(s -> s
    .id("sensor")
);
Response examples (200)
A successful response from `POST _rollup/job/sensor/_start`.
{
  "started": true
}






























Get async search results Generally available; Added in 7.7.0

GET /_async_search/{id}

Retrieve the results of a previously submitted asynchronous search request. If the Elasticsearch security features are enabled, access to the results of a specific async search is restricted to the user or API key that submitted it.

Path parameters

  • id string Required

    A unique identifier for the async search.

Query parameters

  • keep_alive string

    The length of time that the async search should be available in the cluster. When not specified, the keep_alive set with the corresponding submit async request will be used. Otherwise, it is possible to override the value and extend the validity of the request. When this period expires, the search, if still running, is cancelled. If the search is completed, its saved results are deleted.

    Values are -1 or 0.

  • typed_keys boolean

    Specify whether aggregation and suggester names should be prefixed by their respective types in the response.

  • wait_for_completion_timeout string

    Specifies to wait for the search to be completed up until the provided timeout. Final results will be returned if available before the timeout expires, otherwise the currently available results will be returned once the timeout expires. By default no timeout is set meaning that the currently available results will be returned without any additional wait.

    Values are -1 or 0.
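
For example, a hedged sketch with the Python client that extends the retention of the results and briefly waits for completion (the search ID is the one used in the example below):

resp = client.async_search.get(
    id="FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
    keep_alive="5d",
    wait_for_completion_timeout="2s",
)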

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • is_partial boolean Required

      When the query is no longer running, this property indicates whether the search failed or was successfully completed on all shards. While the query is running, is_partial is always set to true.

    • is_running boolean Required

      Indicates whether the search is still running or has completed.


      If the search failed after some shards returned their results or the node that is coordinating the async search dies, results may be partial even though is_running is false.

    • expiration_time string | number

      Indicates when the async search will expire. One of: a date string or a time unit for milliseconds.

    • start_time string | number

      Indicates when the async search started. One of: a date string or a time unit for milliseconds.

    • completion_time string | number

      Indicates when the async search completed. It is present only when the search has completed. One of: a date string or a time unit for milliseconds.

    • response object Required
      Hide response attributes Show response attributes object
      • aggregations object

        Partial aggregations results, coming from the shards that have already completed running the query.

      • _clusters object
        Hide _clusters attributes Show _clusters attributes object
        • skipped number Required
        • successful number Required
        • total number Required
        • running number Required
        • partial number Required
        • failed number Required
        • details object
      • fields object
        Hide fields attribute Show fields attribute object
        • * object Additional properties
      • hits object Required
        Hide hits attributes Show hits attributes object
        • total
        • hits array[object] Required
        • max_score
      • max_score number
      • num_reduce_phases number

        Indicates how many reductions of the results have been performed. If this number increases compared to the last retrieved results for a get async search request, you can expect additional results included in the search response.

      • profile object
        Hide profile attribute Show profile attribute object
        • shards array[object] Required
      • pit_id string
      • _scroll_id string
      • _shards object Required

        Indicates how many shards have run the query. Note that in order for shard results to be included in the search response, they need to be reduced first.

        Hide _shards attribute Show _shards attribute object
        • failures array[object]
      • suggest object
        Hide suggest attribute Show suggest attribute object
        • * array[object] Additional properties
      • terminated_early boolean
      • timed_out boolean Required
      • took number Required
GET /_async_search/{id}
GET /_async_search/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=
resp = client.async_search.get(
    id="FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
)
const response = await client.asyncSearch.get({
  id: "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
});
response = client.async_search.get(
  id: "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc="
)
$resp = $client->asyncSearch()->get([
    "id" => "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_async_search/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc="
client.asyncSearch().get(g -> g
    .id("FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=")
);
Response examples (200)
A successful response from `GET /_async_search/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=`.
{
  "id" : "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
  "is_partial" : false, 
  "is_running" : false, 
  "start_time_in_millis" : 1583945890986,
  "expiration_time_in_millis" : 1584377890986, 
  "completion_time_in_millis" : 1583945903130, 
  "response" : {
    "took" : 12144,
    "timed_out" : false,
    "num_reduce_phases" : 46, 
    "_shards" : {
      "total" : 562,
      "successful" : 188, 
      "skipped" : 0,
      "failed" : 0
    },
    "hits" : {
      "total" : {
        "value" : 456433,
        "relation" : "eq"
      },
      "max_score" : null,
      "hits" : [ ]
    },
    "aggregations" : { 
      "sale_date" :  {
        "buckets" : []
      }
    }
  }
}




























Explain a document match result Generally available

POST /{index}/_explain/{id}

All methods and paths for this operation:

GET /{index}/_explain/{id}

POST /{index}/_explain/{id}

Get information about why a specific document matches, or doesn't match, a query. It computes a score explanation for a query and a specific document.

Required authorization

  • Index privileges: read

Path parameters

  • index string Required

    Index names that are used to limit the request. Only a single index name can be provided to this parameter.

  • id string Required

    The document identifier.

Query parameters

  • analyzer string

    The analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.

  • analyze_wildcard boolean

    If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.

  • default_operator string

    The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.

    Values are and, AND, or, or OR.

  • df string

    The field to use as default where no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Set to true or false to indicate whether the _source field should be returned, or provide a list of the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    A comma-separated list of stored fields to return in the response.

  • q string

    The query in the Lucene query string syntax.

application/json

Body

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _index string Required
    • _id string Required
    • matched boolean Required
    • explanation object
      Hide explanation attributes Show explanation attributes object
      • description string Required
      • details array[object]
      • value number Required
    • get object
      Hide get attributes Show get attributes object
      • fields object
        Hide fields attribute Show fields attribute object
        • * object Additional properties
      • found boolean Required
      • _seq_no number
      • _primary_term number
      • _routing string
      • _source object
POST /{index}/_explain/{id}
GET /my-index-000001/_explain/0
{
  "query" : {
    "match" : { "message" : "elasticsearch" }
  }
}
resp = client.explain(
    index="my-index-000001",
    id="0",
    query={
        "match": {
            "message": "elasticsearch"
        }
    },
)
const response = await client.explain({
  index: "my-index-000001",
  id: 0,
  query: {
    match: {
      message: "elasticsearch",
    },
  },
});
response = client.explain(
  index: "my-index-000001",
  id: "0",
  body: {
    "query": {
      "match": {
        "message": "elasticsearch"
      }
    }
  }
)
$resp = $client->explain([
    "index" => "my-index-000001",
    "id" => "0",
    "body" => [
        "query" => [
            "match" => [
                "message" => "elasticsearch",
            ],
        ],
    ],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":{"match":{"message":"elasticsearch"}}}' "$ELASTICSEARCH_URL/my-index-000001/_explain/0"
client.explain(e -> e
    .id("0")
    .index("my-index-000001")
    .query(q -> q
        .match(m -> m
            .field("message")
            .query(FieldValue.of("elasticsearch"))
        )
    )
);
Request example
Run `GET /my-index-000001/_explain/0` with the request body. Alternatively, run `GET /my-index-000001/_explain/0?q=message:elasticsearch`
{
  "query" : {
    "match" : { "message" : "elasticsearch" }
  }
}
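
The same explanation can also be requested with the q query string parameter instead of a request body; a minimal sketch with the Python client:

resp = client.explain(
    index="my-index-000001",
    id="0",
    q="message:elasticsearch",
)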
Response examples (200)
A successful response from `GET /my-index-000001/_explain/0`.
{
  "_index":"my-index-000001",
  "_id":"0",
  "matched":true,
  "explanation":{
      "value":1.6943598,
      "description":"weight(message:elasticsearch in 0) [PerFieldSimilarity], result of:",
      "details":[
        {
            "value":1.6943598,
            "description":"score(freq=1.0), computed as boost * idf * tf from:",
            "details":[
              {
                  "value":2.2,
                  "description":"boost",
                  "details":[]
              },
              {
                  "value":1.3862944,
                  "description":"idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:",
                  "details":[
                    {
                        "value":1,
                        "description":"n, number of documents containing term",
                        "details":[]
                    },
                    {
                        "value":5,
                        "description":"N, total number of documents with field",
                        "details":[]
                    }
                  ]
              },
              {
                  "value":0.5555556,
                  "description":"tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:",
                  "details":[
                    {
                        "value":1.0,
                        "description":"freq, occurrences of term within document",
                        "details":[]
                    },
                    {
                        "value":1.2,
                        "description":"k1, term saturation parameter",
                        "details":[]
                    },
                    {
                        "value":0.75,
                        "description":"b, length normalization parameter",
                        "details":[]
                    },
                    {
                        "value":3.0,
                        "description":"dl, length of field",
                        "details":[]
                    },
                    {
                        "value":5.4,
                        "description":"avgdl, average length of field",
                        "details":[]
                    }
                  ]
              }
            ]
        }
      ]
  }
}