Elasticsearch API

Base URL
http://api.example.com

Elasticsearch provides REST APIs that are used by the UI components and can be called directly to configure and access Elasticsearch features.

Documentation source and versions

This documentation is derived from the 8.19 branch of the elasticsearch-specification repository. It is provided under the Attribution-NonCommercial-NoDerivatives 4.0 International license. This documentation contains work-in-progress information for future Elastic Stack releases.

Last updated on Oct 23, 2025.

This API is provided under the Apache 2.0 license.




Compact and aligned text (CAT)

The compact and aligned text (CAT) APIs are intended only for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, it's recommended to use a corresponding JSON API. All the cat commands accept a query string parameter help to see all the headers and info they provide, and the /_cat command alone lists all the available commands.
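
For example, the following requests list the available cat commands and show the column help for the cat health command, using the same authentication conventions as the other examples in this reference:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat"
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/health?help"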




Get the cluster health status Generally available

GET /_cat/health

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the cluster health API.

This API is often used to check malfunctioning clusters. To help you track cluster health alongside log files and alerting systems, the API returns timestamps in two formats: HH:MM:SS, which is human-readable but includes no date information, and Unix epoch time, which is machine-sortable and includes date information. The latter format is useful for cluster recoveries that take multiple days.

You can use the cat health API to verify cluster health across multiple nodes. You can also use the API to track the recovery of a large cluster over a longer period of time.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • ts boolean

    If true, returns HH:MM:SS and Unix epoch timestamps.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
    • epoch number | string

      seconds since 1970-01-01 00:00:00


    • timestamp string

      time in HH:MM:SS

    • cluster string

      cluster name

    • status string

      health status

    • node.total string

      total number of nodes

    • node.data string

      number of nodes that can store data

    • shards string

      total number of shards

    • pri string

      number of primary shards

    • relo string

      number of relocating shards

    • init string

      number of initializing shards

    • unassign.pri string

      number of unassigned primary shards

    • unassign string

      number of unassigned shards

    • pending_tasks string

      number of pending tasks

    • max_task_wait_time string

      wait time of longest task pending

    • active_shards_percent string

      percentage of active shards

GET /_cat/health
GET /_cat/health?v=true&format=json
resp = client.cat.health(
    v=True,
    format="json",
)
const response = await client.cat.health({
  v: "true",
  format: "json",
});
response = client.cat.health(
  v: "true",
  format: "json"
)
$resp = $client->cat()->health([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/health?v=true&format=json"
client.cat().health();
Response examples (200)
A successful response from `GET /_cat/health?v=true&format=json`. By default, it returns `HH:MM:SS` and Unix epoch timestamps.
[
  {
    "epoch": "1475871424",
    "timestamp": "16:17:04",
    "cluster": "elasticsearch",
    "status": "green",
    "node.total": "1",
    "node.data": "1",
    "shards": "1",
    "pri": "1",
    "relo": "0",
    "init": "0",
    "unassign": "0",
    "unassign.pri": "0",
    "pending_tasks": "0",
    "max_task_wait_time": "-",
    "active_shards_percent": "100.0%"
  }
]
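
If the epoch and timestamp columns are not needed, the ts parameter described above can be set to false to omit them; for example:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/health?v=true&ts=false"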



Get plugin information Generally available

GET /_cat/plugins

Get a list of plugins running on each node of a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • include_bootstrap boolean

    Include bootstrap plugins in the response

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    External documentation

Responses

  • 200 application/json
    • id string

      The unique node identifier.

    • name string

      The node name.

    • component string

      The component name.

    • version string

      The component version.

    • description string

      The plugin details.

    • type string

      The plugin type.

GET /_cat/plugins
GET /_cat/plugins?v=true&s=component&h=name,component,version,description&format=json
resp = client.cat.plugins(
    v=True,
    s="component",
    h="name,component,version,description",
    format="json",
)
const response = await client.cat.plugins({
  v: "true",
  s: "component",
  h: "name,component,version,description",
  format: "json",
});
response = client.cat.plugins(
  v: "true",
  s: "component",
  h: "name,component,version,description",
  format: "json"
)
$resp = $client->cat()->plugins([
    "v" => "true",
    "s" => "component",
    "h" => "name,component,version,description",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/plugins?v=true&s=component&h=name,component,version,description&format=json"
client.cat().plugins();
Response examples (200)
A successful response from `GET /_cat/plugins?v=true&s=component&h=name,component,version,description&format=json`.
[
  { "name": "U7321H6", "component": "analysis-icu", "version": "8.17.0", "description": "The ICU Analysis plugin integrates the Lucene ICU module into Elasticsearch, adding ICU-related analysis components."},
  {"name": "U7321H6", "component": "analysis-kuromoji",   "verison":  "8.17.0", description: "The Japanese (kuromoji) Analysis plugin integrates Lucene kuromoji analysis module into elasticsearch."},
  {"name" "U7321H6", "component": "analysis-nori", "version":         "8.17.0", "description": "The Korean (nori) Analysis plugin integrates Lucene nori analysis module into elasticsearch."},
  {"name": "U7321H6", "component": "analysis-phonetic",   "verison":  "8.17.0", "description": "The Phonetic Analysis plugin integrates phonetic token filter analysis with elasticsearch."},
  {"name": "U7321H6", "component": "analysis-smartcn",   "verison":  "8.17.0", "description": "Smart Chinese Analysis plugin integrates Lucene Smart Chinese analysis module into elasticsearch."},
  {"name": "U7321H6", "component": "analysis-stempel",   "verison":  "8.17.0", "description": "The Stempel (Polish) Analysis plugin integrates Lucene stempel (polish) analysis module into elasticsearch."},
  {"name": "U7321H6", "component": "analysis-ukrainian",   "verison":  "8.17.0", "description": "The Ukrainian Analysis plugin integrates the Lucene UkrainianMorfologikAnalyzer into elasticsearch."},
  {"name": "U7321H6", "component": "discovery-azure-classic",   "verison":  "8.17.0", "description": "The Azure Classic Discovery plugin allows to use Azure Classic API for the unicast discovery mechanism"},
  {"name": "U7321H6", "component": "discovery-ec2",   "verison":  "8.17.0", "description": "The EC2 discovery plugin allows to use AWS API for the unicast discovery mechanism."},
  {"name": "U7321H6", "component": "discovery-gce",   "verison":  "8.17.0", "description": "The Google Compute Engine (GCE) Discovery plugin allows to use GCE API for the unicast discovery mechanism."},
  {"name": "U7321H6", "component": "mapper-annotated-text",   "verison":  "8.17.0", "description": "The Mapper Annotated_text plugin adds support for text fields with markup used to inject annotation tokens into the index."},
  {"name": "U7321H6", "component": "mapper-murmur3",   "verison":  "8.17.0", "description": "The Mapper Murmur3 plugin allows to compute hashes of a field's values at index-time and to store them in the index."},
  {"name": "U7321H6", "component": "mapper-size",   "verison":  "8.17.0", "description": "The Mapper Size plugin allows document to record their uncompressed size at index time."},
  {"name": "U7321H6", "component": "store-smb",   "verison":  "8.17.0", "description": "The Store SMB plugin adds support for SMB stores."}
]
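
As noted in the s parameter description, appending :desc to a column name reverses the sort order; for example, to sort the table by component name in descending order:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/plugins?v=true&s=component:desc&h=name,component,version"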



Update voting configuration exclusions Generally available; Added in 7.0.0

POST /_cluster/voting_config_exclusions

Update the cluster voting config exclusions by node IDs or node names. By default, if there are more than three master-eligible nodes in the cluster and you remove fewer than half of the master-eligible nodes in the cluster at once, the voting configuration automatically shrinks. If you want to shrink the voting configuration to contain fewer than three nodes or to remove half or more of the master-eligible nodes in the cluster at once, use this API to remove departing nodes from the voting configuration manually. The API adds an entry for each specified node to the cluster’s voting configuration exclusions list. It then waits until the cluster has reconfigured its voting configuration to exclude the specified nodes.

Clusters should have no voting configuration exclusions in normal operation. Once the excluded nodes have stopped, clear the voting configuration exclusions with DELETE /_cluster/voting_config_exclusions. This API waits for the nodes to be fully removed from the cluster before it returns. If your cluster has voting configuration exclusions for nodes that you no longer intend to remove, use DELETE /_cluster/voting_config_exclusions?wait_for_removal=false to clear the voting configuration exclusions without waiting for the nodes to leave the cluster.

A response to POST /_cluster/voting_config_exclusions with an HTTP status code of 200 OK guarantees that the node has been removed from the voting configuration and will not be reinstated until the voting configuration exclusions are cleared by calling DELETE /_cluster/voting_config_exclusions. If the call to POST /_cluster/voting_config_exclusions fails or returns a response with an HTTP status code other than 200 OK then the node may not have been removed from the voting configuration. In that case, you may safely retry the call.

NOTE: Voting exclusions are required only when you remove at least half of the master-eligible nodes from a cluster in a short time period. They are not required when removing master-ineligible nodes or when removing fewer than half of the master-eligible nodes.

External documentation

Query parameters

  • node_names string | array[string]

    A comma-separated list of the names of the nodes to exclude from the voting configuration. If specified, you may not also specify node_ids.

  • node_ids string | array[string]

    A comma-separated list of the persistent ids of the nodes to exclude from the voting configuration. If specified, you may not also specify node_names.

  • master_timeout string

    Period to wait for a connection to the master node.

    External documentation
  • timeout string

    When adding a voting configuration exclusion, the API waits for the specified nodes to be excluded from the voting configuration before returning. If the timeout expires before the appropriate condition is satisfied, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
POST /_cluster/voting_config_exclusions
curl \
 --request POST 'http://api.example.com/_cluster/voting_config_exclusions' \
 --header "Authorization: $API_KEY"
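
For example, to exclude two departing nodes by name and later clear the exclusions without waiting for the nodes to leave the cluster (node names are placeholders):

curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/voting_config_exclusions?node_names=node_name_1,node_name_2"
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/voting_config_exclusions?wait_for_removal=false"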




Get cluster-wide settings Generally available

GET /_cluster/settings

Get cluster-wide settings. By default, it returns only settings that have been explicitly defined.

Required authorization

  • Cluster privileges: monitor
External documentation

Query parameters

  • flat_settings boolean

    If true, returns settings in flat format.

  • include_defaults boolean

    If true, also returns default values for all other cluster settings, reflecting the values in the elasticsearch.yml file of one of the nodes in the cluster. If the nodes in your cluster do not all have the same values in their elasticsearch.yml config files then the values returned by this API may vary from invocation to invocation and may not reflect the values that Elasticsearch uses in all situations. Use the GET _nodes/settings API to fetch the settings for each individual node in your cluster.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    • persistent object Required

      The settings that persist after the cluster restarts.

      • * object Additional properties
    • transient object Required

      The settings that do not persist after the cluster restarts.

      • * object Additional properties
    • defaults object

      The default setting values.

      • * object Additional properties
GET /_cluster/settings
GET /_cluster/settings?filter_path=persistent.cluster.remote
resp = client.cluster.get_settings(
    filter_path="persistent.cluster.remote",
)
const response = await client.cluster.getSettings({
  filter_path: "persistent.cluster.remote",
});
response = client.cluster.get_settings(
  filter_path: "persistent.cluster.remote"
)
$resp = $client->cluster()->getSettings([
    "filter_path" => "persistent.cluster.remote",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/settings?filter_path=persistent.cluster.remote"
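
To inspect default values as well, the include_defaults and flat_settings parameters described above can be combined; for example:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/settings?include_defaults=true&flat_settings=true"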




Get the cluster health status Generally available; Added in 1.3.0

GET /_cluster/health/{index}

All methods and paths for this operation:

GET /_cluster/health

GET /_cluster/health/{index}

Get the health status of a cluster. You can also use the API to get the health status of only specified data streams and indices. For data streams, the API retrieves the health status of the stream’s backing indices.

The cluster health status is: green, yellow or red. On the shard level, a red status indicates that the specific shard is not allocated in the cluster. Yellow means that the primary shard is allocated but replicas are not. Green means that all shards are allocated. The index level status is controlled by the worst shard status.

One of the main benefits of the API is the ability to wait until the cluster reaches a certain high watermark health level. The cluster status is controlled by the worst index status.

Required authorization

  • Cluster privileges: monitor,manage

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and index aliases used to limit the request. Wildcard expressions (*) are supported. To target all data streams and indices in a cluster, omit this parameter or use _all or *.

Query parameters

  • expand_wildcards string | array[string]

    Whether to expand wildcard expressions to concrete indices that are open, closed, or both.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • level string

    Can be one of cluster, indices or shards. Controls the details level of the health information returned.

    Values are cluster, indices, or shards.

  • local boolean

    If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • wait_for_active_shards number | string

    The number of active shards to wait for, all to wait for all shards in the cluster to be active, or 0 to not wait.

    Values are all or index-setting.

  • wait_for_events string

    Can be one of immediate, urgent, high, normal, low, languid. Wait until all currently queued events with the given priority are processed.

    Values are immediate, urgent, high, normal, low, or languid.

  • wait_for_nodes string | number

    The request waits until the specified number N of nodes is available. It also accepts >=N, <=N, >N and <N. Alternatively, it is possible to use ge(N), le(N), gt(N) and lt(N) notation.

  • wait_for_no_initializing_shards boolean

    A boolean value which controls whether to wait (until the timeout provided) for the cluster to have no shard initializations. Defaults to false, which means it will not wait for initializing shards.

  • wait_for_no_relocating_shards boolean

    A boolean value which controls whether to wait (until the timeout provided) for the cluster to have no shard relocations. Defaults to false, which means it will not wait for relocating shards.

  • wait_for_status string

    One of green, yellow or red. Will wait (until the timeout provided) until the status of the cluster changes to the one provided or better, i.e. green > yellow > red. By default, will not wait for any status.

    Supported values include:

    • green (or GREEN): All shards are assigned.
    • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
    • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
    • unknown
    • unavailable

    Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

Responses

  • 200 application/json
    • active_primary_shards number Required

      The number of active primary shards.

    • active_shards number Required

      The total number of active primary and replica shards.

    • active_shards_percent string

      The ratio of active shards in the cluster expressed as a string formatted percentage.

    • active_shards_percent_as_number number Required

      The ratio of active shards in the cluster expressed as a percentage.

    • cluster_name string Required

      The name of the cluster.

    • delayed_unassigned_shards number Required

      The number of shards whose allocation has been delayed by the timeout settings.

    • indices object
      • * object Additional properties
        • active_primary_shards number Required
        • active_shards number Required
        • initializing_shards number Required
        • number_of_replicas number Required
        • number_of_shards number Required
        • relocating_shards number Required
        • shards object
          • * object Additional properties
            • active_shards number Required
            • initializing_shards number Required
            • primary_active boolean Required
            • relocating_shards number Required
            • status string Required

              Supported values include:

              • green (or GREEN): All shards are assigned.
              • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
              • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
              • unknown
              • unavailable

              Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

            • unassigned_shards number Required
            • unassigned_primary_shards number Required
        • status string Required

          Supported values include:

          • green (or GREEN): All shards are assigned.
          • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
          • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
          • unknown
          • unavailable

          Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

        • unassigned_shards number Required
        • unassigned_primary_shards number Required
    • initializing_shards number Required

      The number of shards that are under initialization.

    • number_of_data_nodes number Required

      The number of nodes that are dedicated data nodes.

    • number_of_in_flight_fetch number Required

      The number of unfinished fetches.

    • number_of_nodes number Required

      The number of nodes within the cluster.

    • number_of_pending_tasks number Required

      The number of cluster-level changes that have not yet been executed.

    • relocating_shards number Required

      The number of shards that are under relocation.

    • status string Required

      Supported values include:

      • green (or GREEN): All shards are assigned.
      • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
      • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
      • unknown
      • unavailable

      Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

    • task_max_waiting_in_queue string

      The time that the earliest-initiated task has been waiting to be performed.

      External documentation
    • task_max_waiting_in_queue_millis number

      The same wait time expressed in milliseconds.

    • timed_out boolean Required

      If false, the response was returned within the period of time that is specified by the timeout parameter (30s by default).

    • unassigned_primary_shards number Required

      The number of primary shards that are not allocated.

    • unassigned_shards number Required

      The number of shards that are not allocated.

GET /_cluster/health/{index}
GET _cluster/health
resp = client.cluster.health()
const response = await client.cluster.health();
response = client.cluster.health
$resp = $client->cluster()->health();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/health"
client.cluster().health(h -> h);
Response examples (200)
A successful response from `GET _cluster/health`. It is the health status of a quiet single node cluster with a single index with one shard and one replica.
{
  "cluster_name" : "testcluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 1,
  "active_shards" : 1,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 50.0
}
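
The wait_for_* parameters described above can be used to block until the cluster reaches a desired state; for example, to wait up to 50 seconds for the cluster to reach at least yellow status (values are illustrative):

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/health?wait_for_status=yellow&timeout=50s"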




Get the pending cluster tasks Generally available

GET /_cluster/pending_tasks

Get information about cluster-level changes (such as create index, update mapping, allocate or fail shard) that have not yet taken effect.

NOTE: This API returns a list of any pending updates to the cluster state. These are distinct from the tasks reported by the task management API, which include periodic tasks and tasks initiated by the user, such as node stats, search queries, or create index requests. However, if a user-initiated task such as a create index command causes a cluster state update, the activity of this task might be reported by both the task management API and the pending cluster tasks API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • local boolean

    If true, the request retrieves information from the local node only. If false, information is retrieved from the master node.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    • tasks array[object] Required
      • executing boolean Required

        Indicates whether the pending tasks are currently executing or not.

      • insert_order number Required

        The number that represents when the task has been inserted into the task queue.

      • priority string Required

        The priority of the pending task. The valid priorities in descending priority order are: IMMEDIATE > URGENT > HIGH > NORMAL > LOW > LANGUID.

      • source string Required

        A general description of the cluster task that may include a reason and origin.

      • time_in_queue string

        The time that the task has been waiting to be performed.

        External documentation
      • time_in_queue_millis number

        The same time expressed in milliseconds.

GET /_cluster/pending_tasks
GET /_cluster/pending_tasks
resp = client.cluster.pending_tasks()
const response = await client.cluster.pendingTasks();
response = client.cluster.pending_tasks
$resp = $client->cluster()->pendingTasks();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/pending_tasks"
client.cluster().pendingTasks(p -> p);



Get cluster statistics Generally available; Added in 1.3.0

GET /_cluster/stats/nodes/{node_id}

All methods and paths for this operation:

GET /_cluster/stats

GET /_cluster/stats/nodes/{node_id}

Get basic index metrics (shard numbers, store size, memory usage) and information about the current nodes that form the cluster (number, roles, os, jvm versions, memory usage, cpu and installed plugins).

Required authorization

  • Cluster privileges: monitor

Path parameters

  • node_id string | array[string] Required

    Comma-separated list of node filters used to limit returned information. Defaults to all nodes in the cluster.

Query parameters

  • include_remotes boolean

    Include remote cluster data in the response.

  • timeout string

    Period to wait for each node to respond. If a node does not respond before its timeout expires, the response does not include its stats. However, timed out nodes are included in the response’s _nodes.failed property. Defaults to no timeout.

    External documentation

Responses

  • 200 application/json
    • _nodes object

      Contains statistics about the number of nodes selected by the request’s node filters.

      • failures array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        • type string Required

          The type of error

        • reason
        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by
        • root_cause array[object]
        • suppressed array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required

      Name of the cluster, based on the cluster name setting.

    • cluster_uuid string Required

      Unique identifier for the cluster.

    • indices object Required

      Contains statistics about indices with shards assigned to selected nodes.

      • analysis object

        Contains statistics about analyzers and analyzer components used in selected nodes.

        • analyzer_types array[object] Required

          Contains statistics about analyzer types used in selected nodes.

        • built_in_analyzers array[object] Required

          Contains statistics about built-in analyzers used in selected nodes.

        • built_in_char_filters array[object] Required

          Contains statistics about built-in character filters used in selected nodes.

        • built_in_filters array[object] Required

          Contains statistics about built-in token filters used in selected nodes.

        • built_in_tokenizers array[object] Required

          Contains statistics about built-in tokenizers used in selected nodes.

        • char_filter_types array[object] Required

          Contains statistics about character filter types used in selected nodes.

        • filter_types array[object] Required

          Contains statistics about token filter types used in selected nodes.

        • tokenizer_types array[object] Required

          Contains statistics about tokenizer types used in selected nodes.

        • synonyms object Required

          Contains statistics about synonyms types used in selected nodes.

      • completion object Required

        Contains statistics about memory used for completion in selected nodes.

        • size_in_bytes number Required

          Total amount, in bytes, of memory used for completion across all shards assigned to selected nodes.

        • fields object
      • count number Required

        Total number of indices with shards assigned to selected nodes.

      • docs object Required

        Contains counts for documents in selected nodes.

        • count number Required

          Total number of non-deleted documents across all primary shards assigned to selected nodes. This number is based on documents in Lucene segments and may include documents from nested fields.

        • deleted number

          Total number of deleted documents across all primary shards assigned to selected nodes. This number is based on documents in Lucene segments. Elasticsearch reclaims the disk space of deleted Lucene documents when a segment is merged.

        • total_size_in_bytes number Required

          Total size, in bytes, of all documents in these stats. This value may be more reliable than store_stats.size_in_bytes for estimating the index size.

      • fielddata object Required

        Contains statistics about the field data cache of selected nodes.

        • evictions number
        • memory_size_in_bytes number Required
        • fields object
      • query_cache object Required

        Contains statistics about the query cache of selected nodes.

        • cache_count number Required

          Total number of entries added to the query cache across all shards assigned to selected nodes. This number includes current and evicted entries.

        • cache_size number Required

          Total number of entries currently in the query cache across all shards assigned to selected nodes.

        • evictions number Required

          Total number of query cache evictions across all shards assigned to selected nodes.

        • hit_count number Required

          Total count of query cache hits across all shards assigned to selected nodes.

        • memory_size_in_bytes number Required

          Total amount, in bytes, of memory used for the query cache across all shards assigned to selected nodes.

        • miss_count number Required

          Total count of query cache misses across all shards assigned to selected nodes.

        • total_count number Required

          Total count of hits and misses in the query cache across all shards assigned to selected nodes.

      • segments object Required

        Contains statistics about segments in selected nodes.

        • count number Required

          Total number of segments across all shards assigned to selected nodes.

        • doc_values_memory_in_bytes number Required

          Total amount, in bytes, of memory used for doc values across all shards assigned to selected nodes.

        • file_sizes object Required

          This object is not populated by the cluster stats API. To get information on segment files, use the node stats API.

        • fixed_bit_set_memory_in_bytes number Required

          Total amount of memory, in bytes, used by fixed bit sets across all shards assigned to selected nodes.

        • index_writer_memory_in_bytes number Required

          Total amount, in bytes, of memory used by all index writers across all shards assigned to selected nodes.

        • max_unsafe_auto_id_timestamp number Required

          Unix timestamp, in milliseconds, of the most recently retried indexing request.

        • memory_in_bytes number Required

          Total amount, in bytes, of memory used for segments across all shards assigned to selected nodes.

        • norms_memory_in_bytes number Required

          Total amount, in bytes, of memory used for normalization factors across all shards assigned to selected nodes.

        • points_memory_in_bytes number Required

          Total amount, in bytes, of memory used for points across all shards assigned to selected nodes.

        • stored_fields_memory_in_bytes number Required

          Total amount, in bytes, of memory used for stored fields across all shards assigned to selected nodes.

        • terms_memory_in_bytes number Required

          Total amount, in bytes, of memory used for terms across all shards assigned to selected nodes.

        • term_vectors_memory_in_bytes number Required

          Total amount, in bytes, of memory used for term vectors across all shards assigned to selected nodes.

        • version_map_memory_in_bytes number Required

          Total amount, in bytes, of memory used by all version maps across all shards assigned to selected nodes.

      • shards object Required

        Contains statistics about indices with shards assigned to selected nodes.

        • primaries number

          Number of primary shards assigned to selected nodes.

        • replication number

          Ratio of replica shards to primary shards across all selected nodes.

        • total number

          Total number of shards assigned to selected nodes.

      • store object Required

        Contains statistics about the size of shards assigned to selected nodes.

        • size_in_bytes number Required

          Total size, in bytes, of all shards assigned to selected nodes.

        • reserved_in_bytes number Required

          A prediction, in bytes, of how much larger the shard stores will eventually grow due to ongoing peer recoveries, restoring snapshots, and similar activities.

        • total_data_set_size_in_bytes number

          Total data set size, in bytes, of all shards assigned to selected nodes. This includes the size of shards not stored fully on the nodes, such as the cache for partially mounted indices.

      • mappings object

        Contains statistics about field mappings in selected nodes.

        • field_types array[object] Required

          Contains statistics about field data types used in selected nodes.

        • runtime_field_types array[object] Required

          Contains statistics about runtime field data types used in selected nodes.

        • total_field_count number

          Total number of fields in all non-system indices.

        • total_deduplicated_field_count number

          Total number of fields in all non-system indices, accounting for mapping deduplication.

        • total_deduplicated_mapping_size_in_bytes number

          Total size of all mappings, in bytes, after deduplication and compression.

        • source_modes object Required

          Source mode usage count.

      • versions array[object]

        Contains statistics about analyzers and analyzer components used in selected nodes.

        • index_count number Required
        • primary_shard_count number Required
        • total_primary_bytes number Required
        • total_primary_size
        • version
      • dense_vector object Required

        Contains statistics about indexed dense vectors

        • value_count number Required
      • sparse_vector object Required

        Contains statistics about indexed sparse vectors

        • value_count number Required
    • nodes object Required

      Contains statistics about nodes selected by the request’s node filters.

      • count object Required

        Contains counts for nodes selected by the request’s node filters.

        • total number Required
        • coordinating_only number
        • data number
        • data_cold number
        • data_content number
        • data_frozen number Generally available; Added in 7.13.0
        • data_hot number
        • data_warm number
        • index number
        • ingest number
        • master number
        • ml number
        • remote_cluster_client number
        • transform number
        • voting_only number
      • discovery_types object Required

        Contains statistics about the discovery types used by selected nodes.

        • * number Additional properties
      • fs object Required

        Contains statistics about file stores by selected nodes.

        • path string
        • mount string
        • type string
        • available_in_bytes number

          Total number of bytes available to the JVM in file stores across all selected nodes. Depending on operating system or process-level restrictions, this number may be less than nodes.fs.free_in_bytes. This is the actual amount of free disk space the selected Elasticsearch nodes can use.

        • free_in_bytes number

          Total number of unallocated bytes in file stores across all selected nodes.

        • total_in_bytes number

          Total size, in bytes, of all file stores across all selected nodes.

        • low_watermark_free_space_in_bytes number
        • high_watermark_free_space_in_bytes number
        • flood_stage_free_space_in_bytes number
        • frozen_flood_stage_free_space_in_bytes number
      • indexing_pressure object Required
      • ingest object Required
        • number_of_pipelines number Required
        • processor_stats object Required
      • jvm object Required

        Contains statistics about the Java Virtual Machines (JVMs) used by selected nodes.

        • threads number Required

          Number of active threads in use by JVM across all selected nodes.

        • versions array[object] Required

          Contains statistics about the JVM versions used by selected nodes.

      • network_types object Required

        Contains statistics about the transport and HTTP networks used by selected nodes.

        • http_types object Required

          Contains statistics about the HTTP network types used by selected nodes.

        • transport_types object Required

          Contains statistics about the transport network types used by selected nodes.

      • os object Required

        Contains statistics about the operating systems used by selected nodes.

        • allocated_processors number Required

          Number of processors used to calculate thread pool size across all selected nodes. This number can be set with the processors setting of a node and defaults to the number of processors reported by the operating system. In both cases, this number will never be larger than 32.

        • architectures array[object]

          Contains statistics about processor architectures (for example, x86_64 or aarch64) used by selected nodes.

        • available_processors number Required

          Number of processors available to JVM across all selected nodes.

        • names array[object] Required

          Contains statistics about operating systems used by selected nodes.

        • pretty_names array[object] Required

          Contains statistics about operating systems used by selected nodes.

      • packaging_types array[object] Required

        Contains statistics about Elasticsearch distributions installed on selected nodes.

        • count number Required

          Number of selected nodes using the distribution flavor and file type.

        • flavor string Required

          Type of Elasticsearch distribution. This is always default.

        • type string Required

          File type (such as tar or zip) used for the distribution package.

      • plugins array[object] Required

        Contains statistics about installed plugins and modules by selected nodes. If no plugins or modules are installed, this array is empty.

        • classname string Required
        • description string Required
        • elasticsearch_version
        • extended_plugins array[string] Required
        • has_native_controller boolean Required
        • java_version
        • name
        • version
        • licensed boolean Required
      • process object Required

        Contains statistics about processes used by selected nodes.

      • versions array[string] Required

        Array of Elasticsearch versions used on selected nodes.

    • repositories object Required

      Contains stats on repository feature usage exposed in cluster stats for telemetry.

      • * object Additional properties
        • * number Additional properties
    • snapshots object Required

      Contains stats about cluster snapshots.

      • current_counts object Required
        • snapshots number Required

          Snapshots currently in progress

        • shard_snapshots number Required

          Incomplete shard snapshots

        • snapshot_deletions number Required

          Snapshot deletions in progress

        • concurrent_operations number Required

          Sum of snapshots and snapshot_deletions

        • cleanups number Required

          Cleanups in progress, not counted in concurrent_operations as they are not concurrent

      • repositories object Required
        • * object Additional properties
          • type string Required
    • status string

      Health status of the cluster, based on the state of its primary and replica shards.

      Supported values include:

      • green (or GREEN): All shards are assigned.
      • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
      • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
      • unknown
      • unavailable

      Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

    • timestamp number Required

      Unix timestamp, in milliseconds, for the last time the cluster statistics were refreshed.

    • ccs object Required

      Cross-cluster stats

      • clusters object

        Contains remote cluster settings and metrics collected from them. The keys are cluster names, and the values are per-cluster data. Only present if include_remotes option is set to true.

        • * object Additional properties
          • cluster_uuid string Required

            The UUID of the remote cluster.

          • mode string Required

            The connection mode used to communicate with the remote cluster.

          • skip_unavailable boolean Required

            The skip_unavailable setting used for this remote cluster.

          • transport.compress string Required

            Transport compression setting used for this remote cluster.

          • version array[string] Required

            The list of Elasticsearch versions used by the nodes on the remote cluster.

          • nodes_count number Required

            The total count of nodes in the remote cluster.

          • shards_count number Required

            The total number of shards in the remote cluster.

          • indices_count number Required

            The total number of indices in the remote cluster.

          • indices_total_size_in_bytes number Required

            Total data set size, in bytes, of all shards assigned to selected nodes.

          • indices_total_size string

            Total data set size of all shards assigned to selected nodes, as a human-readable string.

          • max_heap_in_bytes number Required

            Maximum amount of memory, in bytes, available for use by the heap across the nodes of the remote cluster.

          • max_heap string

            Maximum amount of memory available for use by the heap across the nodes of the remote cluster, as a human-readable string.

          • mem_total_in_bytes number Required

            Total amount, in bytes, of physical memory across the nodes of the remote cluster.

          • mem_total string

            Total amount of physical memory across the nodes of the remote cluster, as a human-readable string.

      • _esql object

        Information about ES|QL cross-cluster query usage.

        • total number Required

          The total number of cross-cluster search requests that have been executed by the cluster.

        • success number Required

          The total number of cross-cluster search requests that have been successfully executed by the cluster.

        • skipped number Required

          The total number of cross-cluster search requests (successful or failed) that had at least one remote cluster skipped.

        • remotes_per_search_max number Required

          The maximum number of remote clusters that were queried in a single cross-cluster search request.

        • remotes_per_search_avg number Required

          The average number of remote clusters that were queried in a single cross-cluster search request.

        • failure_reasons object Required

          Statistics about the reasons for cross-cluster search request failures. The keys are the failure reason names and the values are the number of requests that failed for that reason.

        • features object Required

          The keys are the names of the search features, and the values are the number of requests that used that feature. A single request can use more than one feature (e.g. both async and wildcard).

        • clients object Required

          Statistics about the clients that executed cross-cluster search requests. The keys are the names of the clients, and the values are the number of requests that were executed by that client. Only known clients (such as kibana or elasticsearch) are counted.

        • clusters object Required

          Statistics about the clusters that were queried in cross-cluster search requests. The keys are cluster names, and the values are per-cluster telemetry data. This also includes the local cluster itself, which uses the name (local).

GET /_cluster/stats/nodes/{node_id}
GET _cluster/stats?human&filter_path=indices.mappings.total_deduplicated_mapping_size*
resp = client.cluster.stats(
    human=True,
    filter_path="indices.mappings.total_deduplicated_mapping_size*",
)
const response = await client.cluster.stats({
  human: "true",
  filter_path: "indices.mappings.total_deduplicated_mapping_size*",
});
response = client.cluster.stats(
  human: "true",
  filter_path: "indices.mappings.total_deduplicated_mapping_size*"
)
$resp = $client->cluster()->stats([
    "human" => "true",
    "filter_path" => "indices.mappings.total_deduplicated_mapping_size*",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/stats?human&filter_path=indices.mappings.total_deduplicated_mapping_size*"
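
To restrict the statistics to particular nodes, use the node_id path parameter described above; it accepts node names, wildcard patterns, and role filters (the node names below are placeholders):

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/stats/nodes/node1,node*,master:false"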



Create or update a connector Beta; Added in 8.12.0

PUT /_connector/{connector_id}

All methods and paths for this operation:

PUT /_connector

PUT /_connector/{connector_id}

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be created or updated. ID is auto-generated if not provided.

application/json

Body

  • description string
  • index_name string
  • is_native boolean
  • language string
  • name string
  • service_type string

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

    • id string Required
PUT /_connector/{connector_id}
PUT _connector/my-connector
{
  "index_name": "search-google-drive",
  "name": "My Connector",
  "service_type": "google_drive"
}
resp = client.connector.put(
    connector_id="my-connector",
    index_name="search-google-drive",
    name="My Connector",
    service_type="google_drive",
)
const response = await client.connector.put({
  connector_id: "my-connector",
  index_name: "search-google-drive",
  name: "My Connector",
  service_type: "google_drive",
});
response = client.connector.put(
  connector_id: "my-connector",
  body: {
    "index_name": "search-google-drive",
    "name": "My Connector",
    "service_type": "google_drive"
  }
)
$resp = $client->connector()->put([
    "connector_id" => "my-connector",
    "body" => [
        "index_name" => "search-google-drive",
        "name" => "My Connector",
        "service_type" => "google_drive",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index_name":"search-google-drive","name":"My Connector","service_type":"google_drive"}' "$ELASTICSEARCH_URL/_connector/my-connector"
client.connector().put(p -> p
    .connectorId("my-connector")
    .indexName("search-google-drive")
    .name("My Connector")
    .serviceType("google_drive")
);
Request examples
{
  "index_name": "search-google-drive",
  "name": "My Connector",
  "service_type": "google_drive"
}
{
  "index_name": "search-google-drive",
  "name": "My Connector",
  "description": "My Connector to sync data to Elastic index from Google Drive",
  "service_type": "google_drive",
  "language": "english"
}
Response examples (200)
{
  "result": "created",
  "id": "my-connector"
}



Delete a connector sync job Beta; Added in 8.12.0

DELETE /_connector/_sync_job/{connector_sync_job_id}

Remove a connector sync job and its associated data. This is a destructive action that is not recoverable.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job to be deleted

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_connector/_sync_job/{connector_sync_job_id}
DELETE _connector/_sync_job/my-connector-sync-job-id
resp = client.connector.sync_job_delete(
    connector_sync_job_id="my-connector-sync-job-id",
)
const response = await client.connector.syncJobDelete({
  connector_sync_job_id: "my-connector-sync-job-id",
});
response = client.connector.sync_job_delete(
  connector_sync_job_id: "my-connector-sync-job-id"
)
$resp = $client->connector()->syncJobDelete([
    "connector_sync_job_id" => "my-connector-sync-job-id",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job-id"
client.connector().syncJobDelete(s -> s
    .connectorSyncJobId("my-connector-sync-job-id")
);
Response examples (200)
{
  "acknowledged": true
}








Create a connector sync job Beta; Added in 8.12.0

POST /_connector/_sync_job

Create a connector sync job document in the internal index and initialize its counters and timestamps with default values.

application/json

Body Required

  • id string Required

    The id of the associated connector

  • job_type string

    Values are full, incremental, or access_control.

  • trigger_method string

    Values are on_demand or scheduled.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • id string Required
POST /_connector/_sync_job
POST _connector/_sync_job
{
  "id": "connector-id",
  "job_type": "full",
  "trigger_method": "on_demand"
}
resp = client.connector.sync_job_post(
    id="connector-id",
    job_type="full",
    trigger_method="on_demand",
)
const response = await client.connector.syncJobPost({
  id: "connector-id",
  job_type: "full",
  trigger_method: "on_demand",
});
response = client.connector.sync_job_post(
  body: {
    "id": "connector-id",
    "job_type": "full",
    "trigger_method": "on_demand"
  }
)
$resp = $client->connector()->syncJobPost([
    "body" => [
        "id" => "connector-id",
        "job_type" => "full",
        "trigger_method" => "on_demand",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"id":"connector-id","job_type":"full","trigger_method":"on_demand"}' "$ELASTICSEARCH_URL/_connector/_sync_job"
client.connector().syncJobPost(s -> s
    .id("connector-id")
    .jobType(SyncJobType.Full)
    .triggerMethod(SyncJobTriggerMethod.OnDemand)
);
Request example
{
  "id": "connector-id",
  "job_type": "full",
  "trigger_method": "on_demand"
}
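
The response contains only the id of the new sync job document. A minimal sketch with the Python client (reusing the hypothetical connector ID from the example) that captures the ID for later calls, such as the delete sync job API shown earlier:

resp = client.connector.sync_job_post(
    id="connector-id",  # ID of an existing connector
    job_type="full",
    trigger_method="on_demand",
)
sync_job_id = resp["id"]  # ID of the newly created sync job document

# Later, the job and its data can be removed with the delete sync job API:
client.connector.sync_job_delete(connector_sync_job_id=sync_job_id)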




















































Update the connector service type Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_service_type

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • service_type string Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_service_type
PUT _connector/my-connector/_service_type
{
    "service_type": "sharepoint_online"
}
resp = client.connector.update_service_type(
    connector_id="my-connector",
    service_type="sharepoint_online",
)
const response = await client.connector.updateServiceType({
  connector_id: "my-connector",
  service_type: "sharepoint_online",
});
response = client.connector.update_service_type(
  connector_id: "my-connector",
  body: {
    "service_type": "sharepoint_online"
  }
)
$resp = $client->connector()->updateServiceType([
    "connector_id" => "my-connector",
    "body" => [
        "service_type" => "sharepoint_online",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service_type":"sharepoint_online"}' "$ELASTICSEARCH_URL/_connector/my-connector/_service_type"
client.connector().updateServiceType(u -> u
    .connectorId("my-connector")
    .serviceType("sharepoint_online")
);
Request example
{
    "service_type": "sharepoint_online"
}
Response examples (200)
{
  "result": "updated"
}

















Create a follower Generally available; Added in 6.5.0

PUT /{index}/_ccr/follow

Create a cross-cluster replication follower index that follows a specific leader index. When the API returns, the follower index exists and cross-cluster replication starts replicating operations from the leader index to the follower index.

Path parameters

  • index string Required

    The name of the follower index.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    External documentation
  • wait_for_active_shards number | string

    Specifies the number of shards that must be active before the API responds. By default, it does not wait for any shards to be active. A shard must be restored from the leader index before it is active. Restoring a follower shard requires transferring all the remote Lucene segment files to the follower index.

    Values are all or index-setting.

application/json

Body Required

  • data_stream_name string

    If the leader index is part of a data stream, the name to which the local data stream for the followed index should be renamed.

  • leader_index string Required

    The name of the index in the leader cluster to follow.

  • max_outstanding_read_requests number

    The maximum number of outstanding read requests from the remote cluster.

  • max_outstanding_write_requests number

    The maximum number of outstanding write requests on the follower.

  • max_read_request_operation_count number

    The maximum number of operations to pull per read from the remote cluster.

  • max_read_request_size number | string

    The maximum size in bytes per read of a batch of operations pulled from the remote cluster.

    One of:

    The maximum size in bytes per read of a batch of operations pulled from the remote cluster.

  • max_retry_delay string

    The maximum time to wait before retrying an operation that failed exceptionally. An exponential backoff strategy is employed when retrying.

    External documentation
  • max_write_buffer_count number

    The maximum number of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the number of queued operations goes below the limit.

  • max_write_buffer_size number | string

    The maximum total bytes of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the total bytes of queued operations goes below the limit.

    One of:

    The maximum total bytes of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the total bytes of queued operations goes below the limit.

  • max_write_request_operation_count number

    The maximum number of operations per bulk write request executed on the follower.

  • max_write_request_size number | string

    The maximum total bytes of operations per bulk write request executed on the follower.

    One of:

    The maximum total bytes of operations per bulk write request executed on the follower.

  • read_poll_timeout string

    The maximum time to wait for new operations on the remote cluster when the follower index is synchronized with the leader index. When the timeout has elapsed, the poll for operations will return to the follower so that it can update some statistics. Then the follower will immediately attempt to read from the leader again.

    External documentation
  • remote_cluster string Required

    The remote cluster containing the leader index.

  • settings object

    Settings to override from the leader index.

    Index settings

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • follow_index_created boolean Required
    • follow_index_shards_acked boolean Required
    • index_following_started boolean Required
PUT /{index}/_ccr/follow
PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index",
  "settings": {
    "index.number_of_replicas": 0
  },
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
resp = client.ccr.follow(
    index="follower_index",
    wait_for_active_shards="1",
    remote_cluster="remote_cluster",
    leader_index="leader_index",
    settings={
        "index.number_of_replicas": 0
    },
    max_read_request_operation_count=1024,
    max_outstanding_read_requests=16,
    max_read_request_size="1024k",
    max_write_request_operation_count=32768,
    max_write_request_size="16k",
    max_outstanding_write_requests=8,
    max_write_buffer_count=512,
    max_write_buffer_size="512k",
    max_retry_delay="10s",
    read_poll_timeout="30s",
)
const response = await client.ccr.follow({
  index: "follower_index",
  wait_for_active_shards: 1,
  remote_cluster: "remote_cluster",
  leader_index: "leader_index",
  settings: {
    "index.number_of_replicas": 0,
  },
  max_read_request_operation_count: 1024,
  max_outstanding_read_requests: 16,
  max_read_request_size: "1024k",
  max_write_request_operation_count: 32768,
  max_write_request_size: "16k",
  max_outstanding_write_requests: 8,
  max_write_buffer_count: 512,
  max_write_buffer_size: "512k",
  max_retry_delay: "10s",
  read_poll_timeout: "30s",
});
response = client.ccr.follow(
  index: "follower_index",
  wait_for_active_shards: "1",
  body: {
    "remote_cluster": "remote_cluster",
    "leader_index": "leader_index",
    "settings": {
      "index.number_of_replicas": 0
    },
    "max_read_request_operation_count": 1024,
    "max_outstanding_read_requests": 16,
    "max_read_request_size": "1024k",
    "max_write_request_operation_count": 32768,
    "max_write_request_size": "16k",
    "max_outstanding_write_requests": 8,
    "max_write_buffer_count": 512,
    "max_write_buffer_size": "512k",
    "max_retry_delay": "10s",
    "read_poll_timeout": "30s"
  }
)
$resp = $client->ccr()->follow([
    "index" => "follower_index",
    "wait_for_active_shards" => "1",
    "body" => [
        "remote_cluster" => "remote_cluster",
        "leader_index" => "leader_index",
        "settings" => [
            "index.number_of_replicas" => 0,
        ],
        "max_read_request_operation_count" => 1024,
        "max_outstanding_read_requests" => 16,
        "max_read_request_size" => "1024k",
        "max_write_request_operation_count" => 32768,
        "max_write_request_size" => "16k",
        "max_outstanding_write_requests" => 8,
        "max_write_buffer_count" => 512,
        "max_write_buffer_size" => "512k",
        "max_retry_delay" => "10s",
        "read_poll_timeout" => "30s",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"remote_cluster":"remote_cluster","leader_index":"leader_index","settings":{"index.number_of_replicas":0},"max_read_request_operation_count":1024,"max_outstanding_read_requests":16,"max_read_request_size":"1024k","max_write_request_operation_count":32768,"max_write_request_size":"16k","max_outstanding_write_requests":8,"max_write_buffer_count":512,"max_write_buffer_size":"512k","max_retry_delay":"10s","read_poll_timeout":"30s"}' "$ELASTICSEARCH_URL/follower_index/_ccr/follow?wait_for_active_shards=1"
client.ccr().follow(f -> f
    .index("follower_index")
    .leaderIndex("leader_index")
    .maxOutstandingReadRequests(16L)
    .maxOutstandingWriteRequests(8)
    .maxReadRequestOperationCount(1024)
    .maxReadRequestSize("1024k")
    .maxRetryDelay(m -> m
        .time("10s")
    )
    .maxWriteBufferCount(512)
    .maxWriteBufferSize("512k")
    .maxWriteRequestOperationCount(32768)
    .maxWriteRequestSize("16k")
    .readPollTimeout(r -> r
        .time("30s")
    )
    .remoteCluster("remote_cluster")
    .settings(s -> s
        .otherSettings("index.number_of_replicas", JsonData.fromJson("0"))
    )
    .waitForActiveShards(w -> w
        .count(1)
    )
);
Request example
Run `PUT /follower_index/_ccr/follow?wait_for_active_shards=1` to create a follower index named `follower_index`.
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index",
  "settings": {
    "index.number_of_replicas": 0
  },
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
Response examples (200)
A successful response from `PUT /follower_index/_ccr/follow?wait_for_active_shards=1`.
{
  "follow_index_created" : true,
  "follow_index_shards_acked" : true,
  "index_following_started" : true
}
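
Each boolean in the response corresponds to one phase of setting up the follower: creating the follower index, waiting for the requested shards to become active, and starting replication. A minimal sketch with the Python client (assuming the remote cluster and leader index from the example already exist) that verifies all three phases completed:

resp = client.ccr.follow(
    index="follower_index",
    wait_for_active_shards="1",
    remote_cluster="remote_cluster",
    leader_index="leader_index",
)
# All three flags should be true once following has started.
assert resp["follow_index_created"]
assert resp["follow_index_shards_acked"]
assert resp["index_following_started"]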

Get follower information Generally available; Added in 6.7.0

GET /{index}/_ccr/info

Get information about all cross-cluster replication follower indices. For example, the results include follower index names, leader index names, replication options, and whether the follower indices are active or paused.

Required authorization

  • Cluster privileges: monitor
External documentation

Path parameters

  • index string | array[string] Required

    A comma-delimited list of follower index patterns.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • follower_indices array[object] Required
      Hide follower_indices attributes Show follower_indices attributes object
      • follower_index string Required

        The name of the follower index.

      • leader_index string Required

        The name of the index in the leader cluster that is followed.

      • parameters object

        An object that encapsulates cross-cluster replication parameters. If the follower index's status is paused, this object is omitted.

        Hide parameters attributes Show parameters attributes object
        • max_outstanding_read_requests number

          The maximum number of outstanding read requests from the remote cluster.

        • max_outstanding_write_requests number

          The maximum number of outstanding write requests on the follower.

        • max_read_request_operation_count number

          The maximum number of operations to pull per read from the remote cluster.

        • max_read_request_size
        • max_retry_delay string

          The maximum time to wait before retrying an operation that failed exceptionally. An exponential backoff strategy is employed when retrying.

        • max_write_buffer_count number

          The maximum number of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the number of queued operations goes below the limit.

        • max_write_buffer_size
        • max_write_request_operation_count number

          The maximum number of operations per bulk write request executed on the follower.

        • max_write_request_size
        • read_poll_timeout string

          The maximum time to wait for new operations on the remote cluster when the follower index is synchronized with the leader index. When the timeout has elapsed, the poll for operations will return to the follower so that it can update some statistics. Then the follower will immediately attempt to read from the leader again.

      • remote_cluster string Required

        The remote cluster that contains the leader index.

      • status string Required

        The status of the index following: active or paused.

        Values are active or paused.

GET /{index}/_ccr/info
GET /follower_index/_ccr/info
resp = client.ccr.follow_info(
    index="follower_index",
)
const response = await client.ccr.followInfo({
  index: "follower_index",
});
response = client.ccr.follow_info(
  index: "follower_index"
)
$resp = $client->ccr()->followInfo([
    "index" => "follower_index",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/follower_index/_ccr/info"
client.ccr().followInfo(f -> f
    .index("follower_index")
);
Response examples (200)
A successful response from `GET /follower_index/_ccr/info` when the follower index is active.
{
  "follower_indices": [
    {
      "follower_index": "follower_index",
      "remote_cluster": "remote_cluster",
      "leader_index": "leader_index",
      "status": "active",
      "parameters": {
        "max_read_request_operation_count": 5120,
        "max_read_request_size": "32mb",
        "max_outstanding_read_requests": 12,
        "max_write_request_operation_count": 5120,
        "max_write_request_size": "9223372036854775807b",
        "max_outstanding_write_requests": 9,
        "max_write_buffer_count": 2147483647,
        "max_write_buffer_size": "512mb",
        "max_retry_delay": "500ms",
        "read_poll_timeout": "1m"
      }
    }
  ]
}
A successful response from `GET /follower_index/_ccr/info` when the follower index is paused.
{
  "follower_indices": [
    {
      "follower_index": "follower_index",
      "remote_cluster": "remote_cluster",
      "leader_index": "leader_index",
      "status": "paused"
    }
  ]
}
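
Note that the parameters object is omitted when a follower is paused, so code that reads it should handle both response shapes. A minimal sketch with the Python client, using the follower index from the examples above:

resp = client.ccr.follow_info(index="follower_index")
for follower in resp["follower_indices"]:
    if follower["status"] == "active":
        params = follower["parameters"]  # present only for active followers
        print(follower["follower_index"], "is active, read_poll_timeout =",
              params["read_poll_timeout"])
    else:
        print(follower["follower_index"], "is paused")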

Get follower stats Generally available; Added in 6.5.0

GET /{index}/_ccr/stats

Get cross-cluster replication follower stats. The API returns shard-level stats about the "following tasks" associated with each shard for the specified indices.

Required authorization

  • Cluster privileges: monitor
External documentation

Path parameters

  • index string | array[string] Required

    A comma-delimited list of index patterns.

Query parameters

  • timeout string

    The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • indices array[object] Required

      An array of follower index statistics.

      Hide indices attributes Show indices attributes object
      • index string Required

        The name of the follower index.

      • shards array[object] Required

        An array of shard-level following task statistics.

        Hide shards attributes Show shards attributes object
        • bytes_read number Required

          The total of transferred bytes read from the leader. This is only an estimate and does not account for compression if enabled.

        • failed_read_requests number Required

          The number of failed reads.

        • failed_write_requests number Required

          The number of failed bulk write requests on the follower.

        • fatal_exception object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, depending on the error type.

        • follower_aliases_version number Required

          The index aliases version the follower is synced up to.

        • follower_global_checkpoint number Required

          The current global checkpoint on the follower. The difference between the leader_global_checkpoint and the follower_global_checkpoint is an indication of how much the follower is lagging the leader; see the sketch after the response example below.

        • follower_index string Required

          The name of the follower index.

        • follower_mapping_version number Required

          The mapping version the follower is synced up to.

        • follower_max_seq_no number Required

          The current maximum sequence number on the follower.

        • follower_settings_version number Required

          The index settings version the follower is synced up to.

        • last_requested_seq_no number Required

          The starting sequence number of the last batch of operations requested from the leader.

        • leader_global_checkpoint number Required

          The current global checkpoint on the leader known to the follower task.

        • leader_index string Required

          The name of the index in the leader cluster being followed.

        • leader_max_seq_no number Required

          The current maximum sequence number on the leader known to the follower task.

        • operations_read number Required

          The total number of operations read from the leader.

        • operations_written number Required

          The number of operations written on the follower.

        • outstanding_read_requests number Required

          The number of active read requests from the follower.

        • outstanding_write_requests number Required

          The number of active bulk write requests on the follower.

        • read_exceptions array[object] Required

          An array of objects representing failed reads.

        • remote_cluster string Required

          The remote cluster containing the leader index.

        • shard_id number Required

          The numerical shard ID, with values from 0 to one less than the number of replicas.

        • successful_read_requests number Required

          The number of successful fetches.

        • successful_write_requests number Required

          The number of bulk write requests run on the follower.

        • time_since_last_read string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • time_since_last_read_millis
        • total_read_remote_exec_time string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • total_read_remote_exec_time_millis
        • total_read_time string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • total_read_time_millis
        • total_write_time string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • total_write_time_millis
        • write_buffer_operation_count number Required

          The number of write operations queued on the follower.

        • write_buffer_size_in_bytes
GET /{index}/_ccr/stats
GET /follower_index/_ccr/stats
resp = client.ccr.follow_stats(
    index="follower_index",
)
const response = await client.ccr.followStats({
  index: "follower_index",
});
response = client.ccr.follow_stats(
  index: "follower_index"
)
$resp = $client->ccr()->followStats([
    "index" => "follower_index",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/follower_index/_ccr/stats"
client.ccr().followStats(f -> f
    .index("follower_index")
);
Response examples (200)
A successful response from `GET /follower_index/_ccr/stats`, which retrieves follower stats.
{
  "indices" : [
    {
      "index" : "follower_index",
      "total_global_checkpoint_lag" : 256,
      "shards" : [
        {
          "remote_cluster" : "remote_cluster",
          "leader_index" : "leader_index",
          "follower_index" : "follower_index",
          "shard_id" : 0,
          "leader_global_checkpoint" : 1024,
          "leader_max_seq_no" : 1536,
          "follower_global_checkpoint" : 768,
          "follower_max_seq_no" : 896,
          "last_requested_seq_no" : 897,
          "outstanding_read_requests" : 8,
          "outstanding_write_requests" : 2,
          "write_buffer_operation_count" : 64,
          "follower_mapping_version" : 4,
          "follower_settings_version" : 2,
          "follower_aliases_version" : 8,
          "total_read_time_millis" : 32768,
          "total_read_remote_exec_time_millis" : 16384,
          "successful_read_requests" : 32,
          "failed_read_requests" : 0,
          "operations_read" : 896,
          "bytes_read" : 32768,
          "total_write_time_millis" : 16384,
          "write_buffer_size_in_bytes" : 1536,
          "successful_write_requests" : 16,
          "failed_write_requests" : 0,
          "operations_written" : 832,
          "read_exceptions" : [ ],
          "time_since_last_read_millis" : 8
        }
      ]
    }
  ]
}
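
As noted for follower_global_checkpoint, the difference between the leader and follower global checkpoints indicates how far the follower is lagging. A minimal sketch with the Python client that reports the lag for each shard in the response shape shown above:

resp = client.ccr.follow_stats(index="follower_index")
for index_stats in resp["indices"]:
    for shard in index_stats["shards"]:
        # Operations the follower still has to replay from the leader.
        lag = shard["leader_global_checkpoint"] - shard["follower_global_checkpoint"]
        print(f"{index_stats['index']} shard {shard['shard_id']}: lag = {lag} operations")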












Resume an auto-follow pattern Generally available; Added in 7.5.0

POST /_ccr/auto_follow/{name}/resume

Resume a cross-cluster replication auto-follow pattern that was paused. The auto-follow pattern will resume configuring following indices for newly created indices that match its patterns on the remote cluster. Remote indices created while the pattern was paused will also be followed unless they have been deleted or closed in the interim.

Required authorization

  • Cluster privileges: manage_ccr
External documentation

Path parameters

  • name string Required

    The name of the auto-follow pattern to resume.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_ccr/auto_follow/{name}/resume
POST /_ccr/auto_follow/my_auto_follow_pattern/resume
resp = client.ccr.resume_auto_follow_pattern(
    name="my_auto_follow_pattern",
)
const response = await client.ccr.resumeAutoFollowPattern({
  name: "my_auto_follow_pattern",
});
response = client.ccr.resume_auto_follow_pattern(
  name: "my_auto_follow_pattern"
)
$resp = $client->ccr()->resumeAutoFollowPattern([
    "name" => "my_auto_follow_pattern",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ccr/auto_follow/my_auto_follow_pattern/resume"
client.ccr().resumeAutoFollowPattern(r -> r
    .name("my_auto_follow_pattern")
);
Response examples (200)
A successful response `POST /_ccr/auto_follow/my_auto_follow_pattern/resume`, which resumes an auto-follow pattern.
{
  "acknowledged" : true
}

Resume a follower Generally available; Added in 6.5.0

POST /{index}/_ccr/resume_follow

Resume a cross-cluster replication follower index that was paused. The follower index could have been paused with the pause follower API. Alternatively, it could have been paused because replication hit failures during the following tasks that could not be retried. When this API returns, the follower index will resume fetching operations from the leader index.

External documentation

Path parameters

  • index string Required

    The name of the follow index to resume following.

application/json

Body

  • max_outstanding_read_requests number
  • max_outstanding_write_requests number
  • max_read_request_operation_count number
  • max_read_request_size string
  • max_retry_delay string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    External documentation
  • max_write_buffer_count number
  • max_write_buffer_size string
  • max_write_request_operation_count number
  • max_write_request_size string
  • read_poll_timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /{index}/_ccr/resume_follow
POST /follower_index/_ccr/resume_follow
{
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
resp = client.ccr.resume_follow(
    index="follower_index",
    max_read_request_operation_count=1024,
    max_outstanding_read_requests=16,
    max_read_request_size="1024k",
    max_write_request_operation_count=32768,
    max_write_request_size="16k",
    max_outstanding_write_requests=8,
    max_write_buffer_count=512,
    max_write_buffer_size="512k",
    max_retry_delay="10s",
    read_poll_timeout="30s",
)
const response = await client.ccr.resumeFollow({
  index: "follower_index",
  max_read_request_operation_count: 1024,
  max_outstanding_read_requests: 16,
  max_read_request_size: "1024k",
  max_write_request_operation_count: 32768,
  max_write_request_size: "16k",
  max_outstanding_write_requests: 8,
  max_write_buffer_count: 512,
  max_write_buffer_size: "512k",
  max_retry_delay: "10s",
  read_poll_timeout: "30s",
});
response = client.ccr.resume_follow(
  index: "follower_index",
  body: {
    "max_read_request_operation_count": 1024,
    "max_outstanding_read_requests": 16,
    "max_read_request_size": "1024k",
    "max_write_request_operation_count": 32768,
    "max_write_request_size": "16k",
    "max_outstanding_write_requests": 8,
    "max_write_buffer_count": 512,
    "max_write_buffer_size": "512k",
    "max_retry_delay": "10s",
    "read_poll_timeout": "30s"
  }
)
$resp = $client->ccr()->resumeFollow([
    "index" => "follower_index",
    "body" => [
        "max_read_request_operation_count" => 1024,
        "max_outstanding_read_requests" => 16,
        "max_read_request_size" => "1024k",
        "max_write_request_operation_count" => 32768,
        "max_write_request_size" => "16k",
        "max_outstanding_write_requests" => 8,
        "max_write_buffer_count" => 512,
        "max_write_buffer_size" => "512k",
        "max_retry_delay" => "10s",
        "read_poll_timeout" => "30s",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"max_read_request_operation_count":1024,"max_outstanding_read_requests":16,"max_read_request_size":"1024k","max_write_request_operation_count":32768,"max_write_request_size":"16k","max_outstanding_write_requests":8,"max_write_buffer_count":512,"max_write_buffer_size":"512k","max_retry_delay":"10s","read_poll_timeout":"30s"}' "$ELASTICSEARCH_URL/follower_index/_ccr/resume_follow"
client.ccr().resumeFollow(r -> r
    .index("follower_index")
    .maxOutstandingReadRequests(16L)
    .maxOutstandingWriteRequests(8L)
    .maxReadRequestOperationCount(1024L)
    .maxReadRequestSize("1024k")
    .maxRetryDelay(m -> m
        .time("10s")
    )
    .maxWriteBufferCount(512L)
    .maxWriteBufferSize("512k")
    .maxWriteRequestOperationCount(32768L)
    .maxWriteRequestSize("16k")
    .readPollTimeout(re -> re
        .time("30s")
    )
);
Request example
Run `POST /follower_index/_ccr/resume_follow` to resume the follower index.
{
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
Response examples (200)
A successful response from resuming a follower index.
{
  "acknowledged" : true
}

























































Update data streams Generally available; Added in 7.16.0

POST /_data_stream/_modify

Performs one or more data stream modification actions in a single atomic operation.

application/json

Body Required

  • actions array[object] Required

    Actions to perform.

    Hide actions attributes Show actions attributes object
    • add_backing_index object

      Adds an existing index as a backing index for a data stream. The index is hidden as part of this operation. WARNING: Adding indices with the add_backing_index action can potentially result in improper data stream behavior. This should be considered an expert level API.

      Hide add_backing_index attributes Show add_backing_index attributes object
      • data_stream string Required

        Data stream targeted by the action.

      • index string Required

        Index for the action.

    • remove_backing_index object

      Removes a backing index from a data stream. The index is unhidden as part of this operation. A data stream’s write index cannot be removed.

      Hide remove_backing_index attributes Show remove_backing_index attributes object
      • data_stream string Required

        Data stream targeted by the action.

      • index string Required

        Index for the action.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_data_stream/_modify
POST _data_stream/_modify
{
  "actions": [
    {
      "remove_backing_index": {
        "data_stream": "my-data-stream",
        "index": ".ds-my-data-stream-2023.07.26-000001"
      }
    },
    {
      "add_backing_index": {
        "data_stream": "my-data-stream",
        "index": ".ds-my-data-stream-2023.07.26-000001-downsample"
      }
    }
  ]
}
resp = client.indices.modify_data_stream(
    actions=[
        {
            "remove_backing_index": {
                "data_stream": "my-data-stream",
                "index": ".ds-my-data-stream-2023.07.26-000001"
            }
        },
        {
            "add_backing_index": {
                "data_stream": "my-data-stream",
                "index": ".ds-my-data-stream-2023.07.26-000001-downsample"
            }
        }
    ],
)
const response = await client.indices.modifyDataStream({
  actions: [
    {
      remove_backing_index: {
        data_stream: "my-data-stream",
        index: ".ds-my-data-stream-2023.07.26-000001",
      },
    },
    {
      add_backing_index: {
        data_stream: "my-data-stream",
        index: ".ds-my-data-stream-2023.07.26-000001-downsample",
      },
    },
  ],
});
response = client.indices.modify_data_stream(
  body: {
    "actions": [
      {
        "remove_backing_index": {
          "data_stream": "my-data-stream",
          "index": ".ds-my-data-stream-2023.07.26-000001"
        }
      },
      {
        "add_backing_index": {
          "data_stream": "my-data-stream",
          "index": ".ds-my-data-stream-2023.07.26-000001-downsample"
        }
      }
    ]
  }
)
$resp = $client->indices()->modifyDataStream([
    "body" => [
        "actions" => array(
            [
                "remove_backing_index" => [
                    "data_stream" => "my-data-stream",
                    "index" => ".ds-my-data-stream-2023.07.26-000001",
                ],
            ],
            [
                "add_backing_index" => [
                    "data_stream" => "my-data-stream",
                    "index" => ".ds-my-data-stream-2023.07.26-000001-downsample",
                ],
            ],
        ),
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"actions":[{"remove_backing_index":{"data_stream":"my-data-stream","index":".ds-my-data-stream-2023.07.26-000001"}},{"add_backing_index":{"data_stream":"my-data-stream","index":".ds-my-data-stream-2023.07.26-000001-downsample"}}]}' "$ELASTICSEARCH_URL/_data_stream/_modify"
client.indices().modifyDataStream(m -> m
    .actions(List.of(Action.of(a -> a
            .removeBackingIndex(r -> r
                .dataStream("my-data-stream")
                .index(".ds-my-data-stream-2023.07.26-000001")
        )),Action.of(ac -> ac
            .addBackingIndex(ad -> ad
                .dataStream("my-data-stream")
                .index(".ds-my-data-stream-2023.07.26-000001-downsample")
        ))))
);
Request example
An example body for a `POST _data_stream/_modify` request.
{
  "actions": [
    {
      "remove_backing_index": {
        "data_stream": "my-data-stream",
        "index": ".ds-my-data-stream-2023.07.26-000001"
      }
    },
    {
      "add_backing_index": {
        "data_stream": "my-data-stream",
        "index": ".ds-my-data-stream-2023.07.26-000001-downsample"
      }
    }
  ]
}
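
Because both actions run in one atomic request, a common pattern is to pair a remove_backing_index with an add_backing_index to swap a backing index for a replacement (for example, a downsampled copy), as in the example above. A minimal sketch of that pattern with the Python client; the helper function and the stream and index names are hypothetical:

def swap_backing_index(client, data_stream, old_index, new_index):
    # Atomically replace one backing index of a data stream with another.
    # The data stream's write index cannot be removed, so old_index must not
    # be the current write index.
    resp = client.indices.modify_data_stream(
        actions=[
            {"remove_backing_index": {"data_stream": data_stream, "index": old_index}},
            {"add_backing_index": {"data_stream": data_stream, "index": new_index}},
        ]
    )
    return resp["acknowledged"]

swap_backing_index(
    client,
    "my-data-stream",
    ".ds-my-data-stream-2023.07.26-000001",
    ".ds-my-data-stream-2023.07.26-000001-downsample",
)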

























Check a document Generally available

HEAD /{index}/_doc/{id}

Verify that a document exists. For example, check to see if a document with the _id 0 exists:

HEAD my-index-000001/_doc/0

If the document exists, the API returns a status code of 200 - OK. If the document doesn’t exist, the API returns 404 - Not Found.

Versioning support

You can use the version parameter to check the document only if its current version is equal to the specified one.

Internally, when a document is updated, Elasticsearch marks the old version of the document as deleted and adds an entirely new document. The old version of the document doesn't disappear immediately, although you won't be able to access it. Elasticsearch cleans up deleted documents in the background as you continue to index more data.

Path parameters

  • index string Required

    A comma-separated list of data streams, indices, and aliases. It supports wildcards (*).

  • id string Required

    A unique document identifier.

Query parameters

  • preference string

    The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.

    If it is set to _local, the operation will prefer to be run on a local allocated shard when possible. If it is set to a custom value, the value is used to guarantee that the same shards will be used for the same custom value. This can help with "jumping values" when hitting different shards in different refresh states. A sample value can be something like the web session ID or the user name.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or lists the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    A comma-separated list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false.

  • version number

    Explicit version number for concurrency control. The specified version must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.

    Values are internal, external, or external_gte.

Responses

  • 200 application/json
HEAD /{index}/_doc/{id}
HEAD my-index-000001/_doc/0
resp = client.exists(
    index="my-index-000001",
    id="0",
)
const response = await client.exists({
  index: "my-index-000001",
  id: 0,
});
response = client.exists(
  index: "my-index-000001",
  id: "0"
)
$resp = $client->exists([
    "index" => "my-index-000001",
    "id" => "0",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_doc/0"
client.exists(e -> e
    .id("0")
    .index("my-index-000001")
);
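
Because this is a HEAD request, there is no response body; the 8.x Python client returns a response object that is truthy only when the status code is 200. A minimal existence check, assuming the index and document ID from the example:

resp = client.exists(index="my-index-000001", id="0")
if resp:
    print("Document 0 exists")
else:
    print("Document 0 was not found (404)")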
















Get multiple documents Generally available; Added in 1.3.0

POST /{index}/_mget

All methods and paths for this operation:

GET /_mget

POST /_mget
GET /{index}/_mget
POST /{index}/_mget

Get multiple JSON documents by ID from one or more indices. If you specify an index in the request URI, you only need to specify the document IDs in the request body. To ensure fast responses, this multi get (mget) API responds with partial results if one or more shards fail.

Filter source fields

By default, the _source field is returned for every document (if stored). Use the per-document _source attribute, with its include and exclude options, to filter which fields are returned for a particular document. You can include the _source, _source_includes, and _source_excludes query parameters in the request URI to specify the defaults to use when there are no per-document instructions.

Get stored fields

Use the stored_fields attribute to specify the set of stored fields you want to retrieve. Any requested fields that are not stored are ignored. You can include the stored_fields query parameter in the request URI to specify the defaults to use when there are no per-document instructions.

Required authorization

  • Index privileges: read

Path parameters

  • index string Required

    Name of the index to retrieve documents from when ids are specified, or when a document in the docs array does not specify an index.

Query parameters

  • preference string

    Specifies the node or shard the operation should be performed on. Random by default.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes relevant shards before retrieving documents.

  • routing string

    Custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    True or false to return the _source field or not, or a list of fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    If true, retrieves the document fields stored in the index rather than the document _source.

application/json

Body Required

  • docs array[object]

    The documents you want to retrieve. Required if no index is specified in the request URI.

    Hide docs attributes Show docs attributes object
    • _id string Required

      The unique document ID.

    • _index string

      The index that contains the document.

    • routing string

      The key for the primary shard the document resides on. Required if routing is used during indexing.

    • _source boolean | object

      If false, excludes all _source fields.

      One of:

      If false, excludes all _source fields.

    • stored_fields string | array[string]

      The stored fields you want to retrieve.

    • version number
    • version_type string

      Supported values include:

      • internal: Use internal versioning that starts at 1 and increments with each update or delete.
      • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
      • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.

      Values are internal, external, or external_gte.

  • ids string | array[string]

    The IDs of the documents you want to retrieve. Allowed when the index is specified in the request URI.

    One of:

    The IDs of the documents you want to retrieve. Allowed when the index is specified in the request URI.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • docs array[object] Required

      The response includes a docs array that contains the documents in the order specified in the request. The structure of the returned documents is similar to that returned by the get API. If there is a failure getting a particular document, the error is included in place of the document.

      One of:
      Hide attributes Show attributes
      • _index string Required

        The name of the index the document belongs to.

      • fields object

        If the stored_fields parameter is set to true and found is true, it contains the document fields stored in the index.

        Hide fields attribute Show fields attribute object
        • * object Additional properties
      • _ignored array[string]
      • found boolean Required

        Indicates whether the document exists.

      • _id string Required

        The unique identifier for the document.

      • _primary_term number

        The primary term assigned to the document for the indexing operation.

      • _routing string

        The explicit routing, if set.

      • _seq_no number

        The sequence number assigned to the document for the indexing operation. Sequence numbers are used to ensure an older version of a document doesn't overwrite a newer version.

      • _source object

        If found is true, it contains the document data formatted in JSON. If the _source parameter is set to false or the stored_fields parameter is set to true, it is excluded.

      • _version number

        The document version, which is incremented each time the document is updated.

POST /{index}/_mget
GET /my-index-000001/_mget
{
  "docs": [
    {
      "_id": "1"
    },
    {
      "_id": "2"
    }
  ]
}
resp = client.mget(
    index="my-index-000001",
    docs=[
        {
            "_id": "1"
        },
        {
            "_id": "2"
        }
    ],
)
const response = await client.mget({
  index: "my-index-000001",
  docs: [
    {
      _id: "1",
    },
    {
      _id: "2",
    },
  ],
});
response = client.mget(
  index: "my-index-000001",
  body: {
    "docs": [
      {
        "_id": "1"
      },
      {
        "_id": "2"
      }
    ]
  }
)
$resp = $client->mget([
    "index" => "my-index-000001",
    "body" => [
        "docs" => array(
            [
                "_id" => "1",
            ],
            [
                "_id" => "2",
            ],
        ),
    ],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"docs":[{"_id":"1"},{"_id":"2"}]}' "$ELASTICSEARCH_URL/my-index-000001/_mget"
client.mget(m -> m
    .docs(List.of(MultiGetOperation.of(mu -> mu
            .id("1")),MultiGetOperation.of(mu -> mu
            .id("2"))))
    .index("my-index-000001")
);
Request examples
Run `GET /my-index-000001/_mget`. When you specify an index in the request URI, only the document IDs are required in the request body.
{
  "docs": [
    {
      "_id": "1"
    },
    {
      "_id": "2"
    }
  ]
}
Run `GET /_mget`. This request sets `_source` to `false` for document 1 to exclude the source entirely. It retrieves `field3` and `field4` from document 2. It retrieves the `user` field from document 3 but filters out the `user.location` field.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "_source": false
    },
    {
      "_index": "test",
      "_id": "2",
      "_source": [ "field3", "field4" ]
    },
    {
      "_index": "test",
      "_id": "3",
      "_source": {
        "include": [ "user" ],
        "exclude": [ "user.location" ]
      }
    }
  ]
}
Run `GET /_mget`. This request retrieves `field1` and `field2` from document 1 and `field3` and `field4` from document 2.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "stored_fields": [ "field1", "field2" ]
    },
    {
      "_index": "test",
      "_id": "2",
      "stored_fields": [ "field3", "field4" ]
    }
  ]
}
Run `GET /_mget?routing=key1`. If routing is used during indexing, you need to specify the routing value to retrieve documents. This request fetches `test/_doc/2` from the shard corresponding to routing key `key1`. It fetches `test/_doc/1` from the shard corresponding to routing key `key2`.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "routing": "key2"
    },
    {
      "_index": "test",
      "_id": "2"
    }
  ]
}
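
The response returns one entry per requested document, in request order, and an error object takes the place of any document that could not be fetched. A minimal sketch with the Python client that checks each entry from the first request example above:

resp = client.mget(
    index="my-index-000001",
    docs=[{"_id": "1"}, {"_id": "2"}],
)
for doc in resp["docs"]:
    if doc.get("found"):
        print(doc["_id"], "->", doc["_source"])
    elif "error" in doc:
        print(doc.get("_id"), "failed:", doc["error"])
    else:
        print(doc["_id"], "not found")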





































Delete an enrich policy Generally available; Added in 7.5.0

DELETE /_enrich/policy/{name}

Deletes an existing enrich policy and its enrich index.

Path parameters

  • name string Required

    Enrich policy to delete.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_enrich/policy/{name}
DELETE /_enrich/policy/my-policy
resp = client.enrich.delete_policy(
    name="my-policy",
)
const response = await client.enrich.deletePolicy({
  name: "my-policy",
});
response = client.enrich.delete_policy(
  name: "my-policy"
)
$resp = $client->enrich()->deletePolicy([
    "name" => "my-policy",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_enrich/policy/my-policy"
client.enrich().deletePolicy(d -> d
    .name("my-policy")
);






























Get async ES|QL query results Generally available; Added in 8.13.0

GET /_query/async/{id}

Get the current status and available results or stored results for an ES|QL asynchronous query. If the Elasticsearch security features are enabled, only the user who first submitted the ES|QL query can retrieve the results using this API.

External documentation

Path parameters

  • id string Required

    The unique identifier of the query. A query ID is provided in the ES|QL async query API response for a query that does not complete in the designated time. A query ID is also provided when the request was submitted with the keep_on_completion parameter set to true.

Query parameters

  • drop_null_columns boolean

    Indicates whether columns that are entirely null will be removed from the columns and values portion of the results. If true, the response will include an extra section under the name all_columns which has the name of all the columns.

  • format string

    A short version of the Accept header, for example json or yaml.

    Values are csv, json, tsv, txt, yaml, cbor, smile, or arrow.

  • keep_alive string

    The period for which the query and its results are stored in the cluster. When this period expires, the query and its results are deleted, even if the query is still ongoing.

    External documentation
  • wait_for_completion_timeout string

    The period to wait for the request to finish. By default, the request waits for complete query results. If the request completes during the period specified in this parameter, complete query results are returned. Otherwise, the response returns an is_running value of true and no results.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number

      Time unit for milliseconds

    • is_partial boolean
    • all_columns array[object]
      Hide all_columns attributes Show all_columns attributes object
      • name string Required
      • type string Required
    • columns array[object] Required
      Hide columns attributes Show columns attributes object
      • name string Required
      • type string Required
    • values array[array] Required
    • _clusters object

      Cross-cluster search information. Present if include_ccs_metadata was true in the request and a cross-cluster search was performed.

      Hide _clusters attributes Show _clusters attributes object
      • total number Required
      • successful number Required
      • running number Required
      • skipped number Required
      • partial number Required
      • failed number Required
      • details object Required
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • indices string Required
    • profile object

      Profiling information. Present if profile was true in the request. The contents of this field are currently unstable.

    • id string

      The ID of the async query, to be used in subsequent requests to check the status or retrieve results.

      Also available in the X-Elasticsearch-Async-Id HTTP header.

    • is_running boolean Required

      Indicates whether the async query is still running or has completed.

      Also available in the X-Elasticsearch-Async-Is-Running HTTP header.

GET /_query/async/{id}
GET /_query/async/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=?wait_for_completion_timeout=30s
resp = client.esql.async_query_get(
    id="FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
    wait_for_completion_timeout="30s",
)
const response = await client.esql.asyncQueryGet({
  id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
  wait_for_completion_timeout: "30s",
});
response = client.esql.async_query_get(
  id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
  wait_for_completion_timeout: "30s"
)
$resp = $client->esql()->asyncQueryGet([
    "id" => "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
    "wait_for_completion_timeout" => "30s",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_query/async/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=?wait_for_completion_timeout=30s"
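
Because is_running indicates whether the query is still executing, a caller that submitted the query with keep_on_completion can poll this endpoint until results are available. A minimal polling sketch with the Python client, using the query ID from the example above:

import time

query_id = "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE="

while True:
    resp = client.esql.async_query_get(
        id=query_id,
        wait_for_completion_timeout="30s",  # block for up to 30s per call
    )
    if not resp["is_running"]:
        break
    time.sleep(1)  # back off briefly before polling again

# Once the query has finished, columns and values hold the full result set.
column_names = [column["name"] for column in resp["columns"]]
for row in resp["values"]:
    print(dict(zip(column_names, row)))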








Run an ES|QL query Generally available; Added in 8.11.0

POST /_query

Get search results for an ES|QL (Elasticsearch query language) query.

External documentation

Query parameters

  • format string

    A short version of the Accept header, e.g. json, yaml.

    csv, tsv, and txt formats will return results in a tabular format, excluding other metadata fields from the response.

    Values are csv, json, tsv, txt, yaml, cbor, smile, or arrow.

  • delimiter string

    The character to use between values within a CSV row. Only valid for the CSV format.

  • drop_null_columns boolean

    Indicates whether columns that are entirely null should be removed from the columns and values portion of the results. Defaults to false. If true, the response will include an extra section under the name all_columns which has the name of all columns.

  • allow_partial_results boolean

    If true, partial results will be returned if there are shard failures, but the query can continue to execute on other clusters and shards. If false, the query will fail if there are any failures.

    To override the default behavior, you can set the esql.query.allow_partial_results cluster setting to false.

application/json

Body Required

  • columnar boolean

    By default, ES|QL returns results as rows. For example, FROM returns each individual document as one row. For the JSON, YAML, CBOR and smile formats, ES|QL can return the results in a columnar fashion where one row represents all the values of a certain column in the results.

  • filter object

    Specify a Query DSL query in the filter parameter to filter the set of documents that an ES|QL query runs on.

    External documentation
  • locale string
  • params array[number | string | boolean | null | object]

    To guard against injection attacks, pass values as a separate list of parameters instead of embedding them in the query string. Use a question mark placeholder (?) in the query string for each parameter (see the sketch after this parameter list).

    A field value.

  • profile boolean

    If provided and true, the response will include an extra profile object with information on how the query was executed. This information is intended for human debugging, and its format can change at any time, but it can give some insight into the performance of each part of the query.

  • query string Required

    The ES|QL query API accepts an ES|QL query string in the query parameter, runs it, and returns the results.

  • tables object

    Tables to use with the LOOKUP operation. The top level key is the table name and the next level key is the column name.

    Hide tables attribute Show tables attribute object
  • include_ccs_metadata boolean

    When set to true and performing a cross-cluster query, the response will include an extra _clusters object with information about the clusters that participated in the search along with info such as shards count.

    Default value is false.
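
The request example later in this section passes the entire query as a literal string. The following is a minimal sketch of the parameter placeholders and tabular output options described above, assuming the Python client forwards params, format, and delimiter as keyword arguments; the 300-page threshold and the semicolon delimiter are hypothetical values, while library and page_count come from the example below.

# Bind values with ? placeholders instead of concatenating them into the
# query string, and request CSV output with a custom delimiter.
resp = client.esql.query(
    query="FROM library | WHERE page_count > ? | SORT page_count DESC | LIMIT 5",
    params=[300],
    format="csv",
    delimiter=";",
)
print(resp)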

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number

      The time, in milliseconds, it took to run the query.

    • is_partial boolean
    • all_columns array[object]
      Hide all_columns attributes Show all_columns attributes object
      • name string Required
      • type string Required
    • columns array[object] Required
      Hide columns attributes Show columns attributes object
      • name string Required
      • type string Required
    • values array[array] Required
    • _clusters object

      Cross-cluster search information. Present if include_ccs_metadata was true in the request and a cross-cluster search was performed.

      Hide _clusters attributes Show _clusters attributes object
      • total number Required
      • successful number Required
      • running number Required
      • skipped number Required
      • partial number Required
      • failed number Required
      • details object Required
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • status string Required

            Values are running, successful, partial, skipped, or failed.

          • indices string Required
          • _shards object
    • profile object

      Profiling information. Present if profile was true in the request. The contents of this field are currently unstable.

POST /_query
POST /_query
{
  "query": """
    FROM library,remote-*:library
    | EVAL year = DATE_TRUNC(1 YEARS, release_date)
    | STATS MAX(page_count) BY year
    | SORT year
    | LIMIT 5
  """,
  "include_ccs_metadata": true
}
resp = client.esql.query(
    query="\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
    include_ccs_metadata=True,
)
const response = await client.esql.query({
  query:
    "\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
  include_ccs_metadata: true,
});
response = client.esql.query(
  body: {
    "query": "\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
    "include_ccs_metadata": true
  }
)
$resp = $client->esql()->query([
    "body" => [
        "query" => "\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
        "include_ccs_metadata" => true,
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":"\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ","include_ccs_metadata":true}' "$ELASTICSEARCH_URL/_query"
client.esql().query(q -> q
    .includeCcsMetadata(true)
    .query(" FROM library,remote-*:library | EVAL year = DATE_TRUNC(1 YEARS, release_date) | STATS MAX(page_count) BY year | SORT year | LIMIT 5 ")
);
Request example
Run `POST /_query` to get results for an ES|QL query.
{
  "query": """
    FROM library,remote-*:library
    | EVAL year = DATE_TRUNC(1 YEARS, release_date)
    | STATS MAX(page_count) BY year
    | SORT year
    | LIMIT 5
  """,
  "include_ccs_metadata": true
}




































































































Delete data stream lifecycles Generally available; Added in 8.11.0

DELETE /_data_stream/{name}/_lifecycle

Remove the data stream lifecycle from a data stream so that it is no longer managed by the data stream lifecycle.

External documentation

Path parameters

  • name string | array[string] Required

    A comma-separated list of data streams from which the data stream lifecycle will be deleted; use * to target all data streams.

Query parameters

  • expand_wildcards string | array[string]

    Whether wildcard expressions should be expanded to open or closed indices (default: open).

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • master_timeout string

    Period to wait for a connection to the master node.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_data_stream/{name}/_lifecycle
DELETE _data_stream/my-data-stream/_lifecycle
resp = client.indices.delete_data_lifecycle(
    name="my-data-stream",
)
const response = await client.indices.deleteDataLifecycle({
  name: "my-data-stream",
});
response = client.indices.delete_data_lifecycle(
  name: "my-data-stream"
)
$resp = $client->indices()->deleteDataLifecycle([
    "name" => "my-data-stream",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/my-data-stream/_lifecycle"
client.indices().deleteDataLifecycle(d -> d
    .name("my-data-stream")
);
Response examples (200)
A successful response for deleting a data stream lifecycle.
{
  "acknowledged": true
}




Create or update an index template Generally available; Added in 7.9.0

POST /_index_template/{name}

All methods and paths for this operation:

PUT /_index_template/{name}

POST /_index_template/{name}

Index templates define settings, mappings, and aliases that can be applied automatically to new indices.

Elasticsearch applies templates to new indices based on a wildcard pattern that matches the index name. Index templates are applied during data stream or index creation. For data streams, these settings and mappings are applied when the stream's backing indices are created. Settings and mappings specified in a create index API request override any settings or mappings specified in an index template. Changes to index templates do not affect existing indices, including the existing backing indices of a data stream.

You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.

Multiple matching templates

If multiple index templates match the name of a new index or data stream, the template with the highest priority is used.

Multiple templates with overlapping index patterns at the same priority are not allowed and an error will be thrown when attempting to create a template matching an existing index template at identical priorities.

Composing aliases, mappings, and settings

When multiple component templates are specified in the composed_of field for an index template, they are merged in the order specified, meaning that later component templates override earlier component templates. Any mappings, settings, or aliases from the parent index template are merged in next. Finally, any configuration on the index request itself is merged. Mapping definitions are merged recursively, which means that later mapping components can introduce new field mappings and update the mapping configuration. If a field mapping is already contained in an earlier component, its definition will be completely overwritten by the later one. This recursive merging strategy applies not only to field mappings, but also root options like dynamic_templates and meta. If an earlier component contains a dynamic_templates block, then by default new dynamic_templates entries are appended onto the end. If an entry already exists with the same key, then it is overwritten by the new definition.
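
As an illustrative sketch of that merge order (all names and values here are hypothetical), two component templates are listed in composed_of and the index template's own template block is merged last:

# Hypothetical names. Later entries in composed_of override earlier ones,
# and the index template's own "template" block is merged last of all.
client.cluster.put_component_template(
    name="base-settings",
    template={"settings": {"number_of_shards": 1, "number_of_replicas": 1}},
)
client.cluster.put_component_template(
    name="prod-overrides",
    template={"settings": {"number_of_replicas": 2}},  # overrides base-settings
)
client.indices.put_index_template(
    name="logs-template",
    index_patterns=["my-logs-*"],
    composed_of=["base-settings", "prod-overrides"],  # merged in this order
    template={"settings": {"number_of_shards": 3}},   # merged last, wins on conflict
    priority=500,
)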

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

    Index or template name

Query parameters

  • create boolean

    If true, this request cannot replace or update existing index templates.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • cause string

    User-defined reason for creating or updating the index template.

application/json

Body Required

  • index_patterns string | array[string]

    Array of wildcard (*) expressions used to match the names of data streams and indices during creation.

  • composed_of array[string]

    An ordered list of component template names. Component templates are merged in the order specified, meaning that the last component template specified has the highest precedence.

  • template object

    Template to be applied. It may optionally include an aliases, mappings, or settings configuration.

    Hide template attributes Show template attributes object
    • aliases object

      Aliases to add. If the index template includes a data_stream object, these are data stream aliases. Otherwise, these are index aliases. Data stream aliases ignore the index_routing, routing, and search_routing options.

      Hide aliases attribute Show aliases attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • filter object

          Query used to limit documents the alias can access.

          External documentation
        • index_routing string

          Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

        • is_hidden boolean

          If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

          Default value is false.

        • is_write_index boolean

          If true, the index is the write index for the alias.

          Default value is false.

        • routing string

          Value used to route indexing and search operations to a specific shard.

        • search_routing string

          Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

    • mappings object

      Mapping for fields in the index. If specified, this mapping can include field names, field data types, and mapping parameters.

      Hide mappings attributes Show mappings attributes object
      • all_field object
        Hide all_field attributes Show all_field attributes object
        • analyzer string Required
        • enabled boolean Required
        • omit_norms boolean Required
        • search_analyzer string Required
        • similarity string Required
        • store boolean Required
        • store_term_vector_offsets boolean Required
        • store_term_vector_payloads boolean Required
        • store_term_vector_positions boolean Required
        • store_term_vectors boolean Required
      • date_detection boolean
      • dynamic string

        Values are strict, runtime, true, or false.

      • dynamic_date_formats array[string]
      • dynamic_templates array[object]
      • _field_names object
        Hide _field_names attribute Show _field_names attribute object
        • enabled boolean Required
      • index_field object
        Hide index_field attribute Show index_field attribute object
        • enabled boolean Required
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • numeric_detection boolean
      • properties object
      • _routing object
        Hide _routing attribute Show _routing attribute object
        • required boolean Required
      • _size object
        Hide _size attribute Show _size attribute object
        • enabled boolean Required
      • _source object
        Hide _source attributes Show _source attributes object
        • compress boolean
        • compress_threshold string
        • enabled boolean
        • excludes array[string]
        • includes array[string]
      • runtime object
        Hide runtime attribute Show runtime attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • fields object

            For type composite

          • fetch_fields array[object]

            For type lookup

          • format string

            A custom format for date type runtime fields.

      • enabled boolean
      • subobjects string

        Values are true or false.

      • _data_stream_timestamp object
        Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
        • enabled boolean Required
    • settings object

      Configuration options for the index.

      Index settings
    • lifecycle object

      Data stream lifecycle denotes that a data stream is managed by the data stream lifecycle and contains the configuration.

      Hide lifecycle attributes Show lifecycle attributes object
      • data_retention string

        If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely.

        External documentation
      • downsampling object

        The downsampling configuration to execute for the managed backing index after rollover.

        Hide downsampling attribute Show downsampling attribute object
        • rounds array[object] Required

          The list of downsampling rounds to execute as part of this downsampling configuration

      • enabled boolean

        If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

        Default value is true.

  • data_stream object

    If this object is included, the template is used to create data streams and their backing indices. Supports an empty object. Data streams require a matching index template with a data_stream object.

    Hide data_stream attributes Show data_stream attributes object
    • hidden boolean
    • allow_custom_routing boolean
  • priority number

    Priority to determine index template precedence when a new data stream or index is created. The index template with the highest priority is chosen. If no priority is specified the template is treated as though it is of priority 0 (lowest priority). This number is not automatically generated by Elasticsearch.

  • version number

    Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch. External systems can use these version numbers to simplify template management. To unset a version, replace the template without specifying one.

  • _meta object

    Optional user metadata about the index template. It may have any contents. It is not automatically generated or used by Elasticsearch. This user-defined object is stored in the cluster state, so keeping it short is preferable. To unset the metadata, replace the template without specifying it.

    Hide _meta attribute Show _meta attribute object
    • * object Additional properties
  • allow_auto_create boolean

    This setting overrides the value of the action.auto_create_index cluster setting. If set to true in a template, then indices can be automatically created using that template even if auto-creation of indices is disabled via action.auto_create_index. If set to false, then indices or data streams matching the template must always be explicitly created, and may never be automatically created.

  • ignore_missing_component_templates array[string]

    The configuration option ignore_missing_component_templates can be used when an index template references a component template that might not exist.

  • deprecated boolean

    Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_index_template/{name}
PUT /_index_template/template_1
{
  "index_patterns" : ["template*"],
  "priority" : 1,
  "template": {
    "settings" : {
      "number_of_shards" : 2
    }
  }
}
resp = client.indices.put_index_template(
    name="template_1",
    index_patterns=[
        "template*"
    ],
    priority=1,
    template={
        "settings": {
            "number_of_shards": 2
        }
    },
)
const response = await client.indices.putIndexTemplate({
  name: "template_1",
  index_patterns: ["template*"],
  priority: 1,
  template: {
    settings: {
      number_of_shards: 2,
    },
  },
});
response = client.indices.put_index_template(
  name: "template_1",
  body: {
    "index_patterns": [
      "template*"
    ],
    "priority": 1,
    "template": {
      "settings": {
        "number_of_shards": 2
      }
    }
  }
)
$resp = $client->indices()->putIndexTemplate([
    "name" => "template_1",
    "body" => [
        "index_patterns" => array(
            "template*",
        ),
        "priority" => 1,
        "template" => [
            "settings" => [
                "number_of_shards" => 2,
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index_patterns":["template*"],"priority":1,"template":{"settings":{"number_of_shards":2}}}' "$ELASTICSEARCH_URL/_index_template/template_1"
client.indices().putIndexTemplate(p -> p
    .indexPatterns("template*")
    .name("template_1")
    .priority(1L)
    .template(t -> t
        .settings(s -> s
            .numberOfShards("2")
        )
    )
);
Request examples
{
  "index_patterns" : ["template*"],
  "priority" : 1,
  "template": {
    "settings" : {
      "number_of_shards" : 2
    }
  }
}
You can include index aliases in an index template. During index creation, the `{index}` placeholder in the alias name will be replaced with the actual index name that the template gets applied to.
{
  "index_patterns": [
    "template*"
  ],
  "template": {
    "settings": {
      "number_of_shards": 1
    },
    "aliases": {
      "alias1": {},
      "alias2": {
        "filter": {
          "term": {
            "user.id": "kimchy"
          }
        },
        "routing": "shard-1"
      },
      "{index}-alias": {}
    }
  }
}
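
The examples above create plain index templates. The following is a minimal sketch of a data stream template that uses the data_stream and template.lifecycle fields described above; the names, priority, and the 7d retention are hypothetical values.

# Hypothetical names and retention period; "data_stream": {} marks this
# template as a data stream template.
resp = client.indices.put_index_template(
    name="my-metrics-template",
    index_patterns=["my-metrics-*"],
    data_stream={},
    priority=200,
    template={
        "lifecycle": {"data_retention": "7d"}  # documents may be deleted after 7 days
    },
)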




Check index templates Generally available

HEAD /_index_template/{name}

Check whether index templates exist.

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

    Comma-separated list of index template names used to limit the request. Wildcard (*) expressions are supported.

Query parameters

  • local boolean

    If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.

  • flat_settings boolean

    If true, returns settings in flat format.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
HEAD /_index_template/{name}
curl \
 --request HEAD 'http://api.example.com/_index_template/{name}' \
 --header "Authorization: $API_KEY"
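
A minimal Python-client sketch of the same check, assuming the template name template_1 from the earlier example; the response to the underlying HEAD request is truthy when the template exists.

# Sends HEAD /_index_template/template_1 under the hood.
exists = client.indices.exists_index_template(name="template_1")
print(bool(exists))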




















Get aliases Generally available

GET /{index}/_alias/{name}

All methods and paths for this operation:

GET /_alias

GET /_alias/{name}
GET /{index}/_alias
GET /{index}/_alias/{name}

Retrieves information for one or more data stream or index aliases.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams or indices used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

  • name string | array[string] Required

    Comma-separated list of aliases to retrieve. Supports wildcards (*). To retrieve all aliases, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • local boolean Deprecated

    If true, the request retrieves information from the local node only.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attribute Show * attribute object
      • aliases object Required
        Hide aliases attribute Show aliases attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • filter object

            Query used to limit documents the alias can access.

            External documentation
          • index_routing string

            Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

          • is_write_index boolean

            If true, the index is the write index for the alias.

            Default value is false.

          • routing string

            Value used to route indexing and search operations to a specific shard.

          • search_routing string

            Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

          • is_hidden boolean Generally available; Added in 7.16.0

            If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

            Default value is false.

GET /{index}/_alias/{name}
GET _alias
resp = client.indices.get_alias()
const response = await client.indices.getAlias();
response = client.indices.get_alias
$resp = $client->indices()->getAlias();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_alias"
client.indices().getAlias(g -> g);
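
The example above lists every alias in the cluster. The following is a sketch of the narrower path variants described above, with hypothetical index and alias names.

# Sends GET /my-index/_alias/alias1 under the hood; names are hypothetical.
resp = client.indices.get_alias(index="my-index", name="alias1")
print(resp)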








Flush data streams or indices Generally available

GET /{index}/_flush

All methods and paths for this operation:

POST /_flush

GET /_flush
POST /{index}/_flush
GET /{index}/_flush

Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.

After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.

It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.

Required authorization

  • Index privileges: maintenance

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases to flush. Supports wildcards (*). To flush all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • force boolean

    If true, the request forces a flush even if there are no changes to commit to the index.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • wait_if_ongoing boolean

    If true, the flush operation blocks until it can run when another flush operation is already running. If false, Elasticsearch returns an error if you request a flush while another flush operation is running.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • _shards object
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
GET /{index}/_flush
POST /_flush
resp = client.indices.flush()
const response = await client.indices.flush();
response = client.indices.flush
$resp = $client->indices()->flush();
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_flush"
client.indices().flush(f -> f);
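
The example above flushes all data streams and indices. The following sketch combines the path and query parameters described above; the index name is hypothetical.

# Sends POST /my-index/_flush?wait_if_ongoing=true&force=false under the hood.
resp = client.indices.flush(
    index="my-index",
    wait_if_ongoing=True,  # block until any in-progress flush finishes instead of erroring
    force=False,           # do not force a flush when there is nothing to commit
)
print(resp["_shards"])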




























































Shrink an index Generally available; Added in 5.0.0

POST /{index}/_shrink/{target}

All methods and paths for this operation:

PUT /{index}/_shrink/{target}

POST /{index}/_shrink/{target}

Shrink an index into a new index with fewer primary shards.

Before you can shrink an index:

  • The index must be read-only.
  • A copy of every shard in the index must reside on the same node.
  • The index must have a green health status.

To make shard allocation easier, we recommend you also remove the index's replica shards. You can later re-add replica shards as part of the shrink operation.
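
The following is a sketch of that preparation step, assuming a node named shrink_node_name (hypothetical) with enough free disk space; it removes replicas, requires a copy of every shard on that node, and blocks writes.

# Hypothetical node name. Mirrors the prerequisites above: one copy of every
# shard on a single node, and a write-blocked (read-only) source index.
resp = client.indices.put_settings(
    index="my_source_index",
    settings={
        "index.number_of_replicas": 0,
        "index.routing.allocation.require._name": "shrink_node_name",
        "index.blocks.write": True,
    },
)

The request example at the end of this section sets these two settings to null in the shrink body so that they are not copied over to the target index.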

The requested number of primary shards in the target index must be a factor of the number of shards in the source index. For example, an index with 8 primary shards can be shrunk into 4, 2, or 1 primary shards, and an index with 15 primary shards can be shrunk into 5, 3, or 1. If the number of shards in the index is a prime number, it can only be shrunk into a single primary shard. Before shrinking, a (primary or replica) copy of every shard in the index must be present on the same node.

The current write index on a data stream cannot be shrunk. In order to shrink the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be shrunk.

A shrink operation:

  • Creates a new target index with the same definition as the source index, but with a smaller number of primary shards.
  • Hard-links segments from the source index into the target index. If the file system does not support hard-linking, then all segments are copied into the new index, which is a much more time-consuming process. Also, if using multiple data paths, shards on different data paths require a full copy of segment files if they are not on the same disk, since hard links do not work across disks.
  • Recovers the target index as though it were a closed index which had just been re-opened. Recovers shards to the index.routing.allocation.initial_recovery._id index setting.

IMPORTANT: Indices can only be shrunk if they satisfy the following requirements:

  • The target index must not exist.
  • The source index must have more primary shards than the target index.
  • The number of primary shards in the target index must be a factor of the number of primary shards in the source index.
  • The index must not contain more than 2,147,483,519 documents in total across all shards that will be shrunk into a single shard on the target index as this is the maximum number of docs that can fit into a single shard.
  • The node handling the shrink process must have sufficient free disk space to accommodate a second copy of the existing index.

Required authorization

  • Index privileges: manage

Path parameters

  • index string Required

    Name of the source index to shrink.

  • target string Required

    Name of the target index to create.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

    Values are all or index-setting.

application/json

Body Required

  • aliases object

    The key is the alias name. Index alias names support date math.

    Hide aliases attribute Show aliases attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • filter object

        Query used to limit documents the alias can access.

        External documentation
      • index_routing string

        Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

      • is_hidden boolean

        If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

        Default value is false.

      • is_write_index boolean

        If true, the index is the write index for the alias.

        Default value is false.

      • routing string

        Value used to route indexing and search operations to a specific shard.

      • search_routing string

        Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

  • settings object

    Configuration options for the target index.

    Hide settings attribute Show settings attribute object
    • * object Additional properties

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • shards_acknowledged boolean Required
    • index string Required
POST /{index}/_shrink/{target}
POST /my_source_index/_shrink/my_target_index
{
  "settings": {
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}
resp = client.indices.shrink(
    index="my_source_index",
    target="my_target_index",
    settings={
        "index.routing.allocation.require._name": None,
        "index.blocks.write": None
    },
)
const response = await client.indices.shrink({
  index: "my_source_index",
  target: "my_target_index",
  settings: {
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null,
  },
});
response = client.indices.shrink(
  index: "my_source_index",
  target: "my_target_index",
  body: {
    "settings": {
      "index.routing.allocation.require._name": nil,
      "index.blocks.write": nil
    }
  }
)
$resp = $client->indices()->shrink([
    "index" => "my_source_index",
    "target" => "my_target_index",
    "body" => [
        "settings" => [
            "index.routing.allocation.require._name" => null,
            "index.blocks.write" => null,
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"settings":{"index.routing.allocation.require._name":null,"index.blocks.write":null}}' "$ELASTICSEARCH_URL/my_source_index/_shrink/my_target_index"
client.indices().shrink(s -> s
    .index("my_source_index")
    .settings(Map.of("index.blocks.write", JsonData.fromJson("null"),"index.routing.allocation.require._name", JsonData.fromJson("null")))
    .target("my_target_index")
);
Request example
{
  "settings": {
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}

























































Remove policies from an index Generally available; Added in 6.6.0

POST /{index}/_ilm/remove

Remove the assigned lifecycle policies from an index or a data stream's backing indices. It also stops managing the indices.

Required authorization

  • Index privileges: manage_ilm

Path parameters

  • index string Required

    The name of the index from which to remove the lifecycle policy.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • failed_indexes array[string] Required
    • has_failures boolean Required
POST /{index}/_ilm/remove
POST logs-my_app-default/_ilm/remove
resp = client.ilm.remove_policy(
    index="logs-my_app-default",
)
const response = await client.ilm.removePolicy({
  index: "logs-my_app-default",
});
response = client.ilm.remove_policy(
  index: "logs-my_app-default"
)
$resp = $client->ilm()->removePolicy([
    "index" => "logs-my_app-default",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/logs-my_app-default/_ilm/remove"
client.ilm().removePolicy(r -> r
    .index("logs-my_app-default")
);
Response examples (200)
A successful response when removing a lifecycle policy from an index.
{
  "has_failures" : false,
  "failed_indexes" : []
}

















Perform completion inference on the service Generally available; Added in 8.11.0

POST /_inference/completion/{inference_id}

Path parameters

  • inference_id string Required

    The inference Id

Query parameters

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • completion array[object] Required

      The completion result object

      Hide completion attribute Show completion attribute object
      • result string Required
POST /_inference/completion/{inference_id}
POST _inference/completion/openai_chat_completions
{
  "input": "What is Elastic?"
}
resp = client.inference.completion(
    inference_id="openai_chat_completions",
    input="What is Elastic?",
)
const response = await client.inference.completion({
  inference_id: "openai_chat_completions",
  input: "What is Elastic?",
});
response = client.inference.completion(
  inference_id: "openai_chat_completions",
  body: {
    "input": "What is Elastic?"
  }
)
$resp = $client->inference()->completion([
    "inference_id" => "openai_chat_completions",
    "body" => [
        "input" => "What is Elastic?",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"input":"What is Elastic?"}' "$ELASTICSEARCH_URL/_inference/completion/openai_chat_completions"
client.inference().completion(c -> c
    .inferenceId("openai_chat_completions")
    .input("What is Elastic?")
);
Request example
Run `POST _inference/completion/openai_chat_completions` to perform a completion on the example question.
{
  "input": "What is Elastic?"
}
Response examples (200)
A successful response from `POST _inference/completion/openai_chat_completions`.
{
  "completion": [
    {
      "result": "Elastic is a company that provides a range of software solutions for search, logging, security, and analytics. Their flagship product is Elasticsearch, an open-source, distributed search engine that allows users to search, analyze, and visualize large volumes of data in real-time. Elastic also offers products such as Kibana, a data visualization tool, and Logstash, a log management and pipeline tool, as well as various other tools and solutions for data analysis and management."
    }
  ]
}
























Create an Amazon SageMaker inference endpoint Generally available; Added in 8.19.0

PUT /_inference/{task_type}/{amazonsagemaker_inference_id}

Create an inference endpoint to perform an inference task with the amazon_sagemaker service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are text_embedding, completion, chat_completion, sparse_embedding, or rerank.

  • amazonsagemaker_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body Required

  • chunking_settings object

    The chunking configuration object.

    External documentation
    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, amazon_sagemaker.

    Value is amazon_sagemaker.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the amazon_sagemaker service and service_settings.api you specified.

    Hide service_settings attributes Show service_settings attributes object
    • access_key string Required

      A valid AWS access key that has permissions to use Amazon SageMaker and access to models for invoking requests.

    • endpoint_name string Required

      The name of the SageMaker endpoint.

      External documentation
    • api string Required

      The API format to use when calling SageMaker. Elasticsearch will convert the POST _inference request to this data format when invoking the SageMaker endpoint.

      Values are openai or elastic.

    • region string Required

      The region that your endpoint or Amazon Resource Name (ARN) is deployed in. The list of available regions per model can be found in the Amazon SageMaker documentation.

      External documentation
    • secret_key string Required

      A valid AWS secret key that is paired with the access_key. For information about creating and managing access and secret keys, refer to the AWS documentation.

      External documentation
    • target_model string

      The model ID when calling a multi-model endpoint.

      External documentation
    • target_container_hostname string

      The container to directly invoke when calling a multi-container endpoint.

      External documentation
    • inference_component_name string

      The inference component to directly invoke when calling a multi-component endpoint.

      External documentation
    • batch_size number

      The maximum number of inputs in each batch. This value is used by inference ingestion pipelines when processing semantic values. It correlates to the number of times the SageMaker endpoint is invoked (one per batch of input).

      Default value is 256.

    • dimensions number

      The number of dimensions returned by the text embedding models. If this value is not provided, it is guessed by invoking the endpoint for the text_embedding task.

  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type and service_settings.api you specified.

    Hide task_settings attributes Show task_settings attributes object
    • custom_attributes string

      The AWS custom attributes passed verbatim through to the model running in the SageMaker Endpoint. Values will be returned in the X-elastic-sagemaker-custom-attributes header.

      External documentation
    • enable_explanations string

      The optional JMESPath expression used to override the EnableExplanations provided during endpoint creation.

      External documentation
    • inference_id string

      The capture data ID when enabled in the endpoint.

      External documentation
    • session_id string

      The stateful session identifier for a new or existing session. New sessions will be returned in the X-elastic-sagemaker-new-session-id header. Closed sessions will be returned in the X-elastic-sagemaker-closed-session-id header.

      External documentation
    • target_variant string

      Specifies the variant when running with multi-variant Endpoints.

      External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are text_embedding, completion, chat_completion, sparse_embedding, or rerank.

PUT /_inference/{task_type}/{amazonsagemaker_inference_id}
PUT _inference/text_embedding/amazon_sagemaker_embeddings
{
    "service": "amazon_sagemaker",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "api": "elastic",
        "endpoint_name": "my-endpoint",
        "dimensions": 384,
        "element_type": "float"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="amazon_sagemaker_embeddings",
    inference_config={
        "service": "amazon_sagemaker",
        "service_settings": {
            "access_key": "AWS-access-key",
            "secret_key": "AWS-secret-key",
            "region": "us-east-1",
            "api": "elastic",
            "endpoint_name": "my-endpoint",
            "dimensions": 384,
            "element_type": "float"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "amazon_sagemaker_embeddings",
  inference_config: {
    service: "amazon_sagemaker",
    service_settings: {
      access_key: "AWS-access-key",
      secret_key: "AWS-secret-key",
      region: "us-east-1",
      api: "elastic",
      endpoint_name: "my-endpoint",
      dimensions: 384,
      element_type: "float",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "amazon_sagemaker_embeddings",
  body: {
    "service": "amazon_sagemaker",
    "service_settings": {
      "access_key": "AWS-access-key",
      "secret_key": "AWS-secret-key",
      "region": "us-east-1",
      "api": "elastic",
      "endpoint_name": "my-endpoint",
      "dimensions": 384,
      "element_type": "float"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "amazon_sagemaker_embeddings",
    "body" => [
        "service" => "amazon_sagemaker",
        "service_settings" => [
            "access_key" => "AWS-access-key",
            "secret_key" => "AWS-secret-key",
            "region" => "us-east-1",
            "api" => "elastic",
            "endpoint_name" => "my-endpoint",
            "dimensions" => 384,
            "element_type" => "float",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"amazon_sagemaker","service_settings":{"access_key":"AWS-access-key","secret_key":"AWS-secret-key","region":"us-east-1","api":"elastic","endpoint_name":"my-endpoint","dimensions":384,"element_type":"float"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/amazon_sagemaker_embeddings"
Request examples
Run `PUT _inference/text_embedding/amazon_sagemaker_embeddings` to create an inference endpoint that performs a text embedding task.
{
    "service": "amazon_sagemaker",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "api": "elastic",
        "endpoint_name": "my-endpoint",
        "dimensions": 384,
        "element_type": "float"
    }
}
Run `PUT _inference/completion/amazon_sagemaker_completion` to create an inference endpoint that performs a completion task.
{
    "service": "amazon_sagemaker",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "api": "elastic",
        "endpoint_name": "my-endpoint"
    }
}
Run `PUT _inference/chat_completion/amazon_sagemaker_chat_completion` to create an inference endpoint that performs a chat completion task.
{
    "service": "amazon_sagemaker",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "api": "elastic",
        "endpoint_name": "my-endpoint"
    }
}
Run `PUT _inference/sparse_embedding/amazon_sagemaker_sparse_embedding` to create an inference endpoint that performs a sparse embedding task.
{
    "service": "amazon_sagemaker",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "api": "elastic",
        "endpoint_name": "my-endpoint"
    }
}
Run `PUT _inference/rerank/amazon_sagemaker_rerank` to create an inference endpoint that performs a rerank task.
{
    "service": "amazon_sagemaker",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "api": "elastic",
        "endpoint_name": "my-endpoint"
    }
}
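
None of the examples above configure chunking. The following is a sketch of the same text embedding endpoint with explicit chunking_settings, staying within the limits described in the body parameters; the credentials, endpoint name, and chunking values are placeholders.

# Word-based chunking: 120-word chunks with a 40-word overlap (the overlap must
# be at most half of max_chunk_size). Credentials and names are placeholders.
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="amazon_sagemaker_embeddings",
    inference_config={
        "service": "amazon_sagemaker",
        "service_settings": {
            "access_key": "AWS-access-key",
            "secret_key": "AWS-secret-key",
            "region": "us-east-1",
            "api": "elastic",
            "endpoint_name": "my-endpoint",
            "dimensions": 384,
            "element_type": "float",
        },
        "chunking_settings": {
            "strategy": "word",
            "max_chunk_size": 120,
            "overlap": 40,
        },
    },
)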

Create an Anthropic inference endpoint Generally available; Added in 8.16.0

PUT /_inference/{task_type}/{anthropic_inference_id}

Create an inference endpoint to perform an inference task with the anthropic service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The task type. The only valid task type for the model to perform is completion.

    Value is completion.

  • anthropic_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body Required

  • chunking_settings object

    The chunking configuration object.

    External documentation
    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, anthropic.

    Value is anthropic.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the anthropic service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key for the Anthropic API.

    • model_id string Required

      The name of the model to use for the inference task. Refer to the Anthropic documentation for the list of supported models.

    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from Anthropic. By default, the anthropic service sets the number of requests allowed per minute to 50.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute.

  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attributes Show task_settings attributes object
    • max_tokens number Required

      For a completion task, it is the maximum number of tokens to generate before stopping.

    • temperature number

      For a completion task, it is the amount of randomness injected into the response. For more details about the supported range, refer to Anthropic documentation.

      External documentation
    • top_k number

      For a completion task, it specifies to only sample from the top K options for each subsequent token. It is recommended for advanced use cases only. You usually only need to use temperature.

    • top_p number

      For a completion task, it specifies to use Anthropic's nucleus sampling. In nucleus sampling, Anthropic computes the cumulative distribution over all the options for each subsequent token in decreasing probability order and cuts it off once it reaches the specified probability. You should either alter temperature or top_p, but not both. It is recommended for advanced use cases only. You usually only need to use temperature.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Value is completion.

PUT /_inference/{task_type}/{anthropic_inference_id}
PUT _inference/completion/anthropic_completion
{
    "service": "anthropic",
    "service_settings": {
        "api_key": "Anthropic-Api-Key",
        "model_id": "Model-ID"
    },
    "task_settings": {
        "max_tokens": 1024
    }
}
resp = client.inference.put(
    task_type="completion",
    inference_id="anthropic_completion",
    inference_config={
        "service": "anthropic",
        "service_settings": {
            "api_key": "Anthropic-Api-Key",
            "model_id": "Model-ID"
        },
        "task_settings": {
            "max_tokens": 1024
        }
    },
)
const response = await client.inference.put({
  task_type: "completion",
  inference_id: "anthropic_completion",
  inference_config: {
    service: "anthropic",
    service_settings: {
      api_key: "Anthropic-Api-Key",
      model_id: "Model-ID",
    },
    task_settings: {
      max_tokens: 1024,
    },
  },
});
response = client.inference.put(
  task_type: "completion",
  inference_id: "anthropic_completion",
  body: {
    "service": "anthropic",
    "service_settings": {
      "api_key": "Anthropic-Api-Key",
      "model_id": "Model-ID"
    },
    "task_settings": {
      "max_tokens": 1024
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "completion",
    "inference_id" => "anthropic_completion",
    "body" => [
        "service" => "anthropic",
        "service_settings" => [
            "api_key" => "Anthropic-Api-Key",
            "model_id" => "Model-ID",
        ],
        "task_settings" => [
            "max_tokens" => 1024,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"anthropic","service_settings":{"api_key":"Anthropic-Api-Key","model_id":"Model-ID"},"task_settings":{"max_tokens":1024}}' "$ELASTICSEARCH_URL/_inference/completion/anthropic_completion"
client.inference().put(p -> p
    .inferenceId("anthropic_completion")
    .taskType(TaskType.Completion)
    .inferenceConfig(i -> i
        .service("anthropic")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Anthropic-Api-Key\",\"model_id\":\"Model-ID\"}"))
        .taskSettings(JsonData.fromJson("{\"max_tokens\":1024}"))
    )
);
Request example
Run `PUT _inference/completion/anthropic_completion` to create an inference endpoint that performs a completion task.
{
    "service": "anthropic",
    "service_settings": {
        "api_key": "Anthropic-Api-Key",
        "model_id": "Model-ID"
    },
    "task_settings": {
        "max_tokens": 1024
    }
}
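The request example above sets only the required max_tokens. The optional sampling settings documented under task_settings can be included in the same request. A minimal sketch with the Python client, assuming client is an elasticsearch.Elasticsearch instance as in the other examples; the endpoint name is hypothetical and the credentials are placeholders:
resp = client.inference.put(
    task_type="completion",
    inference_id="anthropic_completion_sampling",  # hypothetical endpoint name
    inference_config={
        "service": "anthropic",
        "service_settings": {
            "api_key": "Anthropic-Api-Key",
            "model_id": "Model-ID",
        },
        "task_settings": {
            "max_tokens": 1024,   # required for completion tasks
            "temperature": 0.7,   # optional; alter either temperature or top_p, not both
        },
    },
)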

Create an Azure AI studio inference endpoint Generally available; Added in 8.14.0

PUT /_inference/{task_type}/{azureaistudio_inference_id}

Create an inference endpoint to perform an inference task with the azureaistudio service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are completion or text_embedding.

  • azureaistudio_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body Required

  • chunking_settings object

    The chunking configuration object.

    External documentation
    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, azureaistudio.

    Value is azureaistudio.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the azureaistudio service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key of your Azure AI Studio model deployment. This key can be found on the overview page for your deployment in the management section of your Azure AI Studio account.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • endpoint_type string Required

      The type of endpoint that is available for deployment through Azure AI Studio: token or realtime. The token endpoint type is for "pay as you go" endpoints that are billed per token. The realtime endpoint type is for "real-time" endpoints that are billed per hour of usage.

      External documentation
    • target string Required

      The target URL of your Azure AI Studio model deployment. This can be found on the overview page for your deployment in the management section of your Azure AI Studio account.

    • provider string Required

      The model provider for your deployment. Note that some providers may support only certain task types. Supported providers include:

      • cohere - available for text_embedding and completion task types
      • databricks - available for completion task type only
      • meta - available for completion task type only
      • microsoft_phi - available for completion task type only
      • mistral - available for completion task type only
      • openai - available for text_embedding and completion task types
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from Azure AI Studio. By default, the azureaistudio service sets the number of requests allowed per minute to 240.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute.

  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attributes Show task_settings attributes object
    • do_sample number

      For a completion task, instruct the inference process to perform sampling. It has no effect unless temperature or top_p is specified.

    • max_new_tokens number

      For a completion task, provide a hint for the maximum number of output tokens to be generated.

      Default value is 64.

    • temperature number

      For a completion task, control the apparent creativity of generated completions with a sampling temperature. It must be a number in the range of 0.0 to 2.0. It should not be used if top_p is specified.

    • top_p number

      For a completion task, make the model consider the results of the tokens with nucleus sampling probability. It is an alternative value to temperature and must be a number in the range of 0.0 to 2.0. It should not be used if temperature is specified.

    • user string

      For a text_embedding task, specify the user issuing the request. This information can be used for abuse detection.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are text_embedding or completion.

PUT /_inference/{task_type}/{azureaistudio_inference_id}
PUT _inference/text_embedding/azure_ai_studio_embeddings
{
    "service": "azureaistudio",
    "service_settings": {
        "api_key": "Azure-AI-Studio-API-key",
        "target": "Target-Uri",
        "provider": "openai",
        "endpoint_type": "token"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="azure_ai_studio_embeddings",
    inference_config={
        "service": "azureaistudio",
        "service_settings": {
            "api_key": "Azure-AI-Studio-API-key",
            "target": "Target-Uri",
            "provider": "openai",
            "endpoint_type": "token"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "azure_ai_studio_embeddings",
  inference_config: {
    service: "azureaistudio",
    service_settings: {
      api_key: "Azure-AI-Studio-API-key",
      target: "Target-Uri",
      provider: "openai",
      endpoint_type: "token",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "azure_ai_studio_embeddings",
  body: {
    "service": "azureaistudio",
    "service_settings": {
      "api_key": "Azure-AI-Studio-API-key",
      "target": "Target-Uri",
      "provider": "openai",
      "endpoint_type": "token"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "azure_ai_studio_embeddings",
    "body" => [
        "service" => "azureaistudio",
        "service_settings" => [
            "api_key" => "Azure-AI-Studio-API-key",
            "target" => "Target-Uri",
            "provider" => "openai",
            "endpoint_type" => "token",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"azureaistudio","service_settings":{"api_key":"Azure-AI-Studio-API-key","target":"Target-Uri","provider":"openai","endpoint_type":"token"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/azure_ai_studio_embeddings"
client.inference().put(p -> p
    .inferenceId("azure_ai_studio_embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("azureaistudio")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Azure-AI-Studio-API-key\",\"target\":\"Target-Uri\",\"provider\":\"openai\",\"endpoint_type\":\"token\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/azure_ai_studio_embeddings` to create an inference endpoint that performs a text_embedding task. Note that you do not specify a model here, as it is defined already in the Azure AI Studio deployment.
{
    "service": "azureaistudio",
    "service_settings": {
        "api_key": "Azure-AI-Studio-API-key",
        "target": "Target-Uri",
        "provider": "openai",
        "endpoint_type": "token"
    }
}
Run `PUT _inference/completion/azure_ai_studio_completion` to create an inference endpoint that performs a completion task.
{
    "service": "azureaistudio",
    "service_settings": {
        "api_key": "Azure-AI-Studio-API-key",
        "target": "Target-URI",
        "provider": "databricks",
        "endpoint_type": "realtime"
    }
}
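Both request examples above rely on service_settings only. The task_settings documented for the completion task (do_sample, max_new_tokens, temperature, top_p) can be added in the same way. A minimal sketch with the Python client; the endpoint name is hypothetical and the credentials are placeholders:
resp = client.inference.put(
    task_type="completion",
    inference_id="azure_ai_studio_completion_sampling",  # hypothetical endpoint name
    inference_config={
        "service": "azureaistudio",
        "service_settings": {
            "api_key": "Azure-AI-Studio-API-key",
            "target": "Target-URI",
            "provider": "openai",
            "endpoint_type": "token",
        },
        "task_settings": {
            "max_new_tokens": 128,  # hint for the maximum output length (default is 64)
            "temperature": 0.7,     # 0.0 to 2.0; do not combine with top_p
        },
    },
)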

Create an Elasticsearch inference endpoint Generally available; Added in 8.13.0

PUT /_inference/{task_type}/{elasticsearch_inference_id}

Create an inference endpoint to perform an inference task with the elasticsearch service.


Your Elasticsearch deployment contains preconfigured ELSER and E5 inference endpoints. You only need to create endpoints with the API if you want to customize the settings.

If you use the ELSER or the E5 model through the elasticsearch service, the API request will automatically download and deploy the model if it isn't downloaded yet.


You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If you use the Python client, you can set the timeout parameter to a higher value, as shown in the sketch below.
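A minimal sketch with the Python client, assuming client is an elasticsearch.Elasticsearch instance as in the other examples and that ten minutes is a generous enough window for the download:
# Raise the per-request timeout while the model downloads in the background.
resp = client.options(request_timeout=600).inference.put(
    task_type="sparse_embedding",
    inference_id="my-elser-model",
    inference_config={
        "service": "elasticsearch",
        "service_settings": {
            "num_allocations": 1,
            "num_threads": 1,
            "model_id": ".elser_model_2",
        },
    },
)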

After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are rerank, sparse_embedding, or text_embedding.

  • elasticsearch_inference_id string Required

    The unique identifier of the inference endpoint. It must not match the model_id.

Query parameters

application/json

Body Required

  • chunking_settings object

    The chunking configuration object.

    External documentation
    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, elasticsearch.

    Value is elasticsearch.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the elasticsearch service.

    Hide service_settings attributes Show service_settings attributes object
    • adaptive_allocations object

      Adaptive allocations configuration details. If enabled is true, the number of allocations of the model is set based on the current load the process gets. When the load is high, a new model allocation is automatically created, respecting the value of max_number_of_allocations if it's set. When the load is low, a model allocation is automatically removed, respecting the value of min_number_of_allocations if it's set. If enabled is true, do not set the number of allocations manually.

      Hide adaptive_allocations attributes Show adaptive_allocations attributes object
      • enabled boolean

        Turn on adaptive_allocations.

        Default value is false.

      • max_number_of_allocations number

        The maximum number of allocations to scale to. If set, it must be greater than or equal to min_number_of_allocations.

      • min_number_of_allocations number

        The minimum number of allocations to scale to. If set, it must be greater than or equal to 0. If not defined, the deployment scales to 0.

    • deployment_id string

      The deployment identifier for a trained model deployment. When deployment_id is used the model_id is optional.

    • model_id string Required

      The name of the model to use for the inference task. It can be the ID of a built-in model (for example, .multilingual-e5-small for E5) or a text embedding model that was uploaded by using the Eland client.

      External documentation
    • num_allocations number

      The total number of allocations that are assigned to the model across machine learning nodes. Increasing this value generally increases the throughput. If adaptive allocations are enabled, do not set this value because it's automatically set.

    • num_threads number Required

      The number of threads used by each model allocation during inference. This setting generally increases the speed per inference request. The inference process is a compute-bound process; num_threads must not exceed the number of available allocated processors per node. The value must be a power of 2. The maximum value is 32.

  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attribute Show task_settings attribute object
    • return_documents boolean

      For a rerank task, return the document instead of only the index.

      Default value is true.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are sparse_embedding, text_embedding, or rerank.

PUT /_inference/{task_type}/{elasticsearch_inference_id}
PUT _inference/sparse_embedding/my-elser-model
{
    "service": "elasticsearch",
    "service_settings": {
        "adaptive_allocations": { 
        "enabled": true,
        "min_number_of_allocations": 1,
        "max_number_of_allocations": 4
        },
        "num_threads": 1,
        "model_id": ".elser_model_2" 
    }
}
resp = client.inference.put(
    task_type="sparse_embedding",
    inference_id="my-elser-model",
    inference_config={
        "service": "elasticsearch",
        "service_settings": {
            "adaptive_allocations": {
                "enabled": True,
                "min_number_of_allocations": 1,
                "max_number_of_allocations": 4
            },
            "num_threads": 1,
            "model_id": ".elser_model_2"
        }
    },
)
const response = await client.inference.put({
  task_type: "sparse_embedding",
  inference_id: "my-elser-model",
  inference_config: {
    service: "elasticsearch",
    service_settings: {
      adaptive_allocations: {
        enabled: true,
        min_number_of_allocations: 1,
        max_number_of_allocations: 4,
      },
      num_threads: 1,
      model_id: ".elser_model_2",
    },
  },
});
response = client.inference.put(
  task_type: "sparse_embedding",
  inference_id: "my-elser-model",
  body: {
    "service": "elasticsearch",
    "service_settings": {
      "adaptive_allocations": {
        "enabled": true,
        "min_number_of_allocations": 1,
        "max_number_of_allocations": 4
      },
      "num_threads": 1,
      "model_id": ".elser_model_2"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "sparse_embedding",
    "inference_id" => "my-elser-model",
    "body" => [
        "service" => "elasticsearch",
        "service_settings" => [
            "adaptive_allocations" => [
                "enabled" => true,
                "min_number_of_allocations" => 1,
                "max_number_of_allocations" => 4,
            ],
            "num_threads" => 1,
            "model_id" => ".elser_model_2",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"elasticsearch","service_settings":{"adaptive_allocations":{"enabled":true,"min_number_of_allocations":1,"max_number_of_allocations":4},"num_threads":1,"model_id":".elser_model_2"}}' "$ELASTICSEARCH_URL/_inference/sparse_embedding/my-elser-model"
client.inference().put(p -> p
    .inferenceId("my-elser-model")
    .taskType(TaskType.SparseEmbedding)
    .inferenceConfig(i -> i
        .service("elasticsearch")
        .serviceSettings(JsonData.fromJson("{\"adaptive_allocations\":{\"enabled\":true,\"min_number_of_allocations\":1,\"max_number_of_allocations\":4},\"num_threads\":1,\"model_id\":\".elser_model_2\"}"))
    )
);
Request examples
Run `PUT _inference/sparse_embedding/my-elser-model` to create an inference endpoint that performs a `sparse_embedding` task. The `model_id` must be the ID of one of the built-in ELSER models. The API will automatically download the ELSER model if it isn't already downloaded and then deploy the model.
{
    "service": "elasticsearch",
    "service_settings": {
        "adaptive_allocations": { 
        "enabled": true,
        "min_number_of_allocations": 1,
        "max_number_of_allocations": 4
        },
        "num_threads": 1,
        "model_id": ".elser_model_2" 
    }
}
Run `PUT _inference/rerank/my-elastic-rerank` to create an inference endpoint that performs a rerank task using the built-in Elastic Rerank cross-encoder model. The `model_id` must be `.rerank-v1`, which is the ID of the built-in Elastic Rerank model. The API will automatically download the Elastic Rerank model if it isn't already downloaded and then deploy the model. Once deployed, the model can be used for semantic re-ranking with a `text_similarity_reranker` retriever.
{
    "service": "elasticsearch",
    "service_settings": {
        "model_id": ".rerank-v1", 
        "num_threads": 1,
        "adaptive_allocations": { 
        "enabled": true,
        "min_number_of_allocations": 1,
        "max_number_of_allocations": 4
        }
    }
}
Run `PUT _inference/text_embedding/my-e5-model` to create an inference endpoint that performs a `text_embedding` task. The `model_id` must be the ID of one of the built-in E5 models. The API will automatically download the E5 model if it isn't already downloaded and then deploy the model.
{
    "service": "elasticsearch",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1,
        "model_id": ".multilingual-e5-small" 
    }
}
Run `PUT _inference/text_embedding/my-msmarco-minilm-model` to create an inference endpoint that performs a `text_embedding` task with a model that was uploaded by Eland.
{
    "service": "elasticsearch",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1,
        "model_id": "msmarco-MiniLM-L12-cos-v5" 
    }
}
Run `PUT _inference/text_embedding/my-e5-model` to create an inference endpoint that performs a `text_embedding` task and to configure adaptive allocations. The API request will automatically download the E5 model if it isn't already downloaded and then deploy the model.
{
    "service": "elasticsearch",
    "service_settings": {
        "adaptive_allocations": {
        "enabled": true,
        "min_number_of_allocations": 3,
        "max_number_of_allocations": 10
        },
        "num_threads": 1,
        "model_id": ".multilingual-e5-small"
    }
}
Run `PUT _inference/sparse_embedding/use_existing_deployment` to use an already existing model deployment when creating an inference endpoint.
{
    "service": "elasticsearch",
    "service_settings": {
        "deployment_id": ".elser_model_2"
    }
}
Response examples (200)
A successful response from `PUT _inference/sparse_embedding/use_existing_deployment`. It contains the model ID and the threads and allocations settings from the model deployment.
{
  "inference_id": "use_existing_deployment",
  "task_type": "sparse_embedding",
  "service": "elasticsearch",
  "service_settings": {
    "num_allocations": 2,
    "num_threads": 1,
    "model_id": ".elser_model_2",
    "deployment_id": ".elser_model_2"
  },
  "chunking_settings": {
    "strategy": "sentence",
    "max_chunk_size": 250,
    "sentence_overlap": 1
  }
}
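None of the request examples above set chunking_settings. A minimal sketch that adds a word-based chunking configuration when creating an E5 text_embedding endpoint; the endpoint name is hypothetical and the values stay within the limits documented above:
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="my-e5-word-chunking",  # hypothetical endpoint name
    inference_config={
        "service": "elasticsearch",
        "service_settings": {
            "num_allocations": 1,
            "num_threads": 1,
            "model_id": ".multilingual-e5-small",
        },
        "chunking_settings": {
            "strategy": "word",
            "max_chunk_size": 120,  # between 10 and 300 for the word strategy
            "overlap": 50,          # no more than half of max_chunk_size
        },
    },
)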

Close anomaly detection jobs Generally available; Added in 5.4.0

POST /_ml/anomaly_detectors/{job_id}/_close

A job can be opened and closed multiple times throughout its lifecycle. A closed job cannot receive data or perform analysis operations, but you can still explore and navigate results. When you close a job, it runs housekeeping tasks such as pruning the model history, flushing buffers, calculating final results, and persisting the model snapshots. Depending upon the size of the job, it could take several minutes to close and the equivalent time to re-open. After it is closed, the job has a minimal overhead on the cluster except for maintaining its metadata. Therefore it is a best practice to close jobs that are no longer required to process data. If you close an anomaly detection job whose datafeed is running, the request first tries to stop the datafeed. This behavior is equivalent to calling the stop datafeed API with the same timeout and force parameters as the close job request. When a datafeed that has a specified end date stops, it automatically closes its associated job.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job. It can be a job identifier, a group name, or a wildcard expression. You can close multiple anomaly detection jobs in a single API request by using a group name, a comma-separated list of jobs, or a wildcard expression. You can close all jobs by using _all or by specifying * as the job identifier.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request: contains wildcard expressions and there are no jobs that match; contains the _all string or no identifiers and there are no matches; or contains wildcard expressions and there are only partial matches. By default, it returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the request returns a 404 status code when there are no matches or only partial matches.

  • force boolean

    Use to close a failed job, or to forcefully close a job which has not responded to its initial close request; the request returns without performing the associated actions such as flushing buffers and persisting the model snapshots. If you want the job to be in a consistent state after the close job API returns, do not set to true. This parameter should be used only in situations where the job has already failed or where you are not interested in results the job might have recently produced or might produce in the future.

  • timeout string

    Controls the time to wait until a job has closed.

    External documentation
application/json

Body

  • allow_no_match boolean

    Refer to the description for the allow_no_match query parameter.

    Default value is true.

  • force boolean

    Refer to the description for the force query parameter.

    Default value is false.

  • timeout string

    Refer to the description for the timeout query parameter.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • closed boolean Required
POST /_ml/anomaly_detectors/{job_id}/_close
POST _ml/anomaly_detectors/low_request_rate/_close
resp = client.ml.close_job(
    job_id="low_request_rate",
)
const response = await client.ml.closeJob({
  job_id: "low_request_rate",
});
response = client.ml.close_job(
  job_id: "low_request_rate"
)
$resp = $client->ml()->closeJob([
    "job_id" => "low_request_rate",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/anomaly_detectors/low_request_rate/_close"
client.ml().closeJob(c -> c
    .jobId("low_request_rate")
);
Response examples (200)
A successful response when closing anomaly detection jobs.
{
  "closed": true
}
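The force and timeout parameters described above can be combined with a wildcard job identifier, for example to force-close every job in a group while bounding the wait. A sketch with the Python client; the wildcard expression is hypothetical:
resp = client.ml.close_job(
    job_id="my-job-group-*",  # hypothetical wildcard matching several jobs
    force=True,               # skip flushing buffers and persisting model snapshots
    timeout="2m",             # how long to wait for the jobs to close
)
print(resp["closed"])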

Send data to an anomaly detection job for analysis Deprecated Generally available; Added in 5.4.0

POST /_ml/anomaly_detectors/{job_id}/_data

IMPORTANT: For each job, data can be accepted from only a single connection at a time. It is not currently possible to post data to multiple jobs using wildcards or a comma-separated list.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job. The job must have a state of open to receive and process the data.

Query parameters

  • reset_end string | number

    Specifies the end of the bucket resetting range.

  • reset_start string | number

    Specifies the start of the bucket resetting range.

application/json

Body Required

object object

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • job_id string Required
    • processed_record_count number Required
    • processed_field_count number Required
    • input_bytes number Required
    • input_field_count number Required
    • invalid_date_count number Required
    • missing_field_count number Required
    • out_of_order_timestamp_count number Required
    • empty_bucket_count number Required
    • sparse_bucket_count number Required
    • bucket_count number Required
    • earliest_record_timestamp number

      Time unit for milliseconds

    • latest_record_timestamp number

      Time unit for milliseconds

    • last_data_time number

      Time unit for milliseconds

    • latest_empty_bucket_timestamp number

      Time unit for milliseconds

    • latest_sparse_bucket_timestamp number

      Time unit for milliseconds

    • input_record_count number Required
    • log_time number

      Time unit for milliseconds

POST /_ml/anomaly_detectors/{job_id}/_data
curl \
 --request POST 'http://api.example.com/_ml/anomaly_detectors/{job_id}/_data' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '[{}]'
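A minimal sketch of posting a couple of records with the Python client; the job identifier and field values are hypothetical, and the job must already be open:
resp = client.ml.post_data(
    job_id="low_request_rate",  # the job must have a state of open
    data=[
        {"timestamp": 1712345600000, "requests": 12},
        {"timestamp": 1712345660000, "requests": 9},
    ],
)
print(resp["processed_record_count"])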

Create a trained model Generally available; Added in 7.10.0

PUT /_ml/trained_models/{model_id}

Enables you to supply a trained model that was not created by data frame analytics.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • model_id string Required

    The unique identifier of the trained model.

Query parameters

  • defer_definition_decompression boolean Generally available; Added in 8.0.0

    If set to true and a compressed_definition is provided, the request defers definition decompression and skips relevant validations.

  • wait_for_completion boolean Generally available; Added in 8.8.0

    Whether to wait for all child operations (e.g. model download) to complete.

application/json

Body Required

  • compressed_definition string

    The compressed (GZipped and Base64 encoded) inference definition of the model. If compressed_definition is specified, then definition cannot be specified.

  • definition object

    The inference definition for the model. If definition is specified, then compressed_definition cannot be specified.

    Hide definition attributes Show definition attributes object
    • preprocessors array[object]

      Collection of preprocessors

      Hide preprocessors attributes Show preprocessors attributes object
      • frequency_encoding object
        Hide frequency_encoding attributes Show frequency_encoding attributes object
        • field string Required
        • feature_name string Required
        • frequency_map object Required
      • one_hot_encoding object
        Hide one_hot_encoding attributes Show one_hot_encoding attributes object
        • field string Required
        • hot_map object Required
      • target_mean_encoding object
        Hide target_mean_encoding attributes Show target_mean_encoding attributes object
        • field string Required
        • feature_name string Required
        • target_map object Required
        • default_value number Required
    • trained_model object Required

      The definition of the trained model.

      Hide trained_model attributes Show trained_model attributes object
      • tree object

        The definition for a binary decision tree.

        Hide tree attributes Show tree attributes object
        • classification_labels array[string]
        • feature_names array[string] Required
        • target_type string
        • tree_structure array[object] Required
      • tree_node object

        The definition of a node in a tree. There are two major types of nodes: leaf nodes and not-leaf nodes.

        • Leaf nodes only need node_index and leaf_value defined.
        • All other nodes need split_feature, left_child, right_child, threshold, decision_type, and default_left defined.
        Hide tree_node attributes Show tree_node attributes object
        • decision_type string
        • default_left boolean
        • leaf_value number
        • left_child number
        • node_index number Required
        • right_child number
        • split_feature number
        • split_gain number
        • threshold number
      • ensemble object

        The definition for an ensemble model

        Hide ensemble attributes Show ensemble attributes object
        • classification_labels array[string]
        • feature_names array[string]
        • target_type string
        • trained_models array[object] Required
  • description string

    A human-readable description of the inference trained model.

  • inference_config object

    The default configuration for inference. This can be either a regression or classification configuration. It must match the underlying definition.trained_model's target_type. For pre-packaged models such as ELSER the config is not required.

    Hide inference_config attributes Show inference_config attributes object
    • regression object

      Regression configuration for inference.

      Hide regression attributes Show regression attributes object
      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • num_top_feature_importance_values number

        Specifies the maximum number of feature importance values per document.

        Default value is 0.

    • classification object

      Classification configuration for inference.

      Hide classification attributes Show classification attributes object
      • num_top_classes number

        Specifies the number of top class predictions to return. Defaults to 0.

      • num_top_feature_importance_values number

        Specifies the maximum number of feature importance values per document.

        Default value is 0.

      • prediction_field_type string

        Specifies the type of the predicted field to write. Acceptable values are: string, number, boolean. When boolean is provided 1.0 is transformed to true and 0.0 to false.

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • top_classes_results_field string

        Specifies the field to which the top classes are written. Defaults to top_classes.

    • text_classification object

      Text classification configuration for inference.

      Hide text_classification attributes Show text_classification attributes object
      • num_top_classes number

        Specifies the number of top class predictions to return. Defaults to 0.

      • tokenization object

        The tokenization options

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • classification_labels array[string]

        Classification labels to apply other than the stored labels. Must have the same dimensions as the default configured labels

      • vocabulary object
    • zero_shot_classification object

      Zeroshot classification configuration for inference.

      Hide zero_shot_classification attributes Show zero_shot_classification attributes object
      • tokenization object

        The tokenization options to update when inferring

      • hypothesis_template string

        Hypothesis template used when tokenizing labels for prediction

        Default value is "This example is {}.".

      • classification_labels array[string] Required

        The zero shot classification labels indicating entailment, neutral, and contradiction. Must contain exactly and only entailment, neutral, and contradiction

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • multi_label boolean

        Indicates if more than one true label exists.

        Default value is false.

      • labels array[string]

        The labels to predict.

    • fill_mask object

      Fill mask configuration for inference.

      Hide fill_mask attributes Show fill_mask attributes object
      • mask_token string

        The string/token which will be removed from incoming documents and replaced with the inference prediction(s). In a response, this field contains the mask token for the specified model/tokenizer. Each model and tokenizer has a predefined mask token which cannot be changed. Thus, it is recommended not to set this value in requests. However, if this field is present in a request, its value must match the predefined value for that model/tokenizer, otherwise the request will fail.

      • num_top_classes number

        Specifies the number of top class predictions to return. Defaults to 0.

      • tokenization object

        The tokenization options to update when inferring

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • vocabulary object
    • learning_to_rank object
      Hide learning_to_rank attributes Show learning_to_rank attributes object
      • default_params object
        Hide default_params attribute Show default_params attribute object
        • * object Additional properties
      • feature_extractors array[object]
      • num_top_feature_importance_values number Required
    • ner object

      Named entity recognition configuration for inference.

      Hide ner attributes Show ner attributes object
      • tokenization object

        The tokenization options

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • classification_labels array[string]

        The token classification labels. Must be IOB formatted tags

      • vocabulary object
    • pass_through object

      Pass through configuration for inference.

      Hide pass_through attributes Show pass_through attributes object
      • tokenization object

        The tokenization options

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • vocabulary object
    • text_embedding object

      Text embedding configuration for inference.

      Hide text_embedding attributes Show text_embedding attributes object
      • embedding_size number

        The number of dimensions in the embedding output

      • tokenization object

        The tokenization options

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • vocabulary object
    • text_expansion object

      Text expansion configuration for inference.

      Hide text_expansion attributes Show text_expansion attributes object
      • tokenization object

        The tokenization options

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • vocabulary object
    • question_answering object

      Question answering configuration for inference.

      Hide question_answering attributes Show question_answering attributes object
      • num_top_classes number

        Specifies the number of top class predictions to return. Defaults to 0.

      • tokenization object

        The tokenization options to update when inferring

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • max_answer_length number

        The maximum answer length to consider

  • input object

    The input field names for the model definition.

    Hide input attribute Show input attribute object
    • field_names string | array[string] Required
  • metadata object

    An object map that contains metadata about the model.

  • model_type string

    The model type.

    Supported values include:

    • tree_ensemble: The model definition is an ensemble model of decision trees.
    • lang_ident: A special type reserved for language identification models.
    • pytorch: The stored definition is a PyTorch (specifically a TorchScript) model. Currently only NLP models are supported.

    Values are tree_ensemble, lang_ident, or pytorch.

  • model_size_bytes number

    The estimated memory usage in bytes to keep the trained model in memory. This property is supported only if defer_definition_decompression is true or the model definition is not supplied.

  • platform_architecture string

    The platform architecture (if applicable) of the trained model. If the model only works on one platform, because it is heavily optimized for a particular processor architecture and OS combination, then this field specifies which. The format of the string must match the platform identifiers used by Elasticsearch, so one of linux-x86_64, linux-aarch64, darwin-x86_64, darwin-aarch64, or windows-x86_64. For portable models (those that work independent of processor architecture or OS features), leave this field unset.

  • tags array[string]

    An array of tags to organize the model.

  • prefix_strings object

    Optional prefix strings applied at inference

    Hide prefix_strings attributes Show prefix_strings attributes object
    • ingest string

      String prepended to input at ingest

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • model_id string Required

      Identifier for the trained model.

    • model_type string

      The model type

      Supported values include:

      • tree_ensemble: The model definition is an ensemble model of decision trees.
      • lang_ident: A special type reserved for language identification models.
      • pytorch: The stored definition is a PyTorch (specifically a TorchScript) model. Currently only NLP models are supported.

      Values are tree_ensemble, lang_ident, or pytorch.

    • tags array[string] Required

      An array of tags. A trained model can have many tags, or none.

    • version string

      The Elasticsearch version number in which the trained model was created.

    • compressed_definition string
    • created_by string

      Information on the creator of the trained model.

    • create_time string | number

      The time when the trained model was created.

      One of:

      The time when the trained model was created.

    • default_field_map object

      Any field map described in the inference configuration takes precedence.

      Hide default_field_map attribute Show default_field_map attribute object
      • * string Additional properties
    • description string

      The free-text description of the trained model.

    • estimated_heap_memory_usage_bytes number

      The estimated heap usage in bytes to keep the trained model in memory.

    • estimated_operations number

      The estimated number of operations to use the trained model.

    • fully_defined boolean

      True if the full model definition is present.

    • inference_config object

      The default configuration for inference. This can be either a regression, classification, or one of the many NLP focused configurations. It must match the underlying definition.trained_model's target_type. For pre-packaged models such as ELSER the config is not required.

      Hide inference_config attributes Show inference_config attributes object
      • regression object

        Regression configuration for inference.

        Hide regression attributes Show regression attributes object
        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • num_top_feature_importance_values number

          Specifies the maximum number of feature importance values per document.

          Default value is 0.

      • classification object

        Classification configuration for inference.

        Hide classification attributes Show classification attributes object
        • num_top_classes number

          Specifies the number of top class predictions to return. Defaults to 0.

        • num_top_feature_importance_values number

          Specifies the maximum number of feature importance values per document.

          Default value is 0.

        • prediction_field_type string

          Specifies the type of the predicted field to write. Acceptable values are: string, number, boolean. When boolean is provided 1.0 is transformed to true and 0.0 to false.

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • top_classes_results_field string

          Specifies the field to which the top classes are written. Defaults to top_classes.

      • text_classification object

        Text classification configuration for inference.

        Hide text_classification attributes Show text_classification attributes object
        • num_top_classes number

          Specifies the number of top class predictions to return. Defaults to 0.

        • tokenization object

          The tokenization options

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • classification_labels array[string]

          Classification labels to apply other than the stored labels. Must have the same dimensions as the default configured labels

        • vocabulary object
      • zero_shot_classification object

        Zeroshot classification configuration for inference.

        Hide zero_shot_classification attributes Show zero_shot_classification attributes object
        • tokenization object

          The tokenization options to update when inferring

        • hypothesis_template string

          Hypothesis template used when tokenizing labels for prediction

          Default value is "This example is {}.".

        • classification_labels array[string] Required

          The zero shot classification labels indicating entailment, neutral, and contradiction. Must contain exactly and only entailment, neutral, and contradiction

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • multi_label boolean

          Indicates if more than one true label exists.

          Default value is false.

        • labels array[string]

          The labels to predict.

      • fill_mask object

        Fill mask configuration for inference.

        Hide fill_mask attributes Show fill_mask attributes object
        • mask_token string

          The string/token which will be removed from incoming documents and replaced with the inference prediction(s). In a response, this field contains the mask token for the specified model/tokenizer. Each model and tokenizer has a predefined mask token which cannot be changed. Thus, it is recommended not to set this value in requests. However, if this field is present in a request, its value must match the predefined value for that model/tokenizer, otherwise the request will fail.

        • num_top_classes number

          Specifies the number of top class predictions to return. Defaults to 0.

        • tokenization object

          The tokenization options to update when inferring

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • vocabulary object
      • learning_to_rank object
        Hide learning_to_rank attributes Show learning_to_rank attributes object
        • default_params object
          Hide default_params attribute Show default_params attribute object
          • * object Additional properties
        • feature_extractors array[object]
        • num_top_feature_importance_values number Required
      • ner object

        Named entity recognition configuration for inference.

        Hide ner attributes Show ner attributes object
        • tokenization object

          The tokenization options

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • classification_labels array[string]

          The token classification labels. Must be IOB formatted tags

        • vocabulary object
      • pass_through object

        Pass through configuration for inference.

        Hide pass_through attributes Show pass_through attributes object
        • tokenization object

          The tokenization options

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • vocabulary object
      • text_embedding object

        Text embedding configuration for inference.

        Hide text_embedding attributes Show text_embedding attributes object
        • embedding_size number

          The number of dimensions in the embedding output

        • tokenization object

          The tokenization options

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • vocabulary object
      • text_expansion object

        Text expansion configuration for inference.

        Hide text_expansion attributes Show text_expansion attributes object
        • tokenization object

          The tokenization options

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • vocabulary object
      • question_answering object

        Question answering configuration for inference.

        Hide question_answering attributes Show question_answering attributes object
        • num_top_classes number

          Specifies the number of top class predictions to return. Defaults to 0.

        • tokenization object

          The tokenization options to update when inferring

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • max_answer_length number

          The maximum answer length to consider

    • input object Required

      The input field names for the model definition.

      Hide input attribute Show input attribute object
      • field_names array[string] Required

        An array of input field names for the model.

    • license_level string

      The license level of the trained model.

    • metadata object

      An object containing metadata about the trained model. For example, models created by data frame analytics contain analysis_config and input objects.

      Hide metadata attributes Show metadata attributes object
      • model_aliases array[string]
      • feature_importance_baseline object

        An object that contains the baseline for feature importance values. For regression analysis, it is a single value. For classification analysis, there is a value for each class.

        Hide feature_importance_baseline attribute Show feature_importance_baseline attribute object
        • * string Additional properties
      • hyperparameters array[object]

        List of the available hyperparameters optimized during the fine_parameter_tuning phase as well as specified by the user.

        Hide hyperparameters attributes Show hyperparameters attributes object
        • absolute_importance number

          A positive number showing how much the parameter influences the variation of the loss function. For hyperparameters with values that are not specified by the user but tuned during hyperparameter optimization.

        • name string Required

          Name of the hyperparameter.

        • relative_importance number

          A number between 0 and 1 showing the proportion of influence on the variation of the loss function among all tuned hyperparameters. For hyperparameters with values that are not specified by the user but tuned during hyperparameter optimization.

        • supplied boolean Required

          Indicates if the hyperparameter is specified by the user (true) or optimized (false).

        • value number Required

          The value of the hyperparameter, either optimized or specified by the user.

      • total_feature_importance array[object]

        An array of the total feature importance for each feature used from the training data set. This array of objects is returned if data frame analytics trained the model and the request includes total_feature_importance in the include request parameter.

        Hide total_feature_importance attributes Show total_feature_importance attributes object
        • feature_name string Required

          The feature for which this importance was calculated.

        • importance array[object] Required

          A collection of feature importance statistics related to the training data set for this particular feature.

        • classes array[object] Required

          If the trained model is a classification model, feature importance statistics are gathered per target class value.

    • model_size_bytes number | string

    • model_package object
      Hide model_package attributes Show model_package attributes object
      • create_time number

        Time unit for milliseconds

      • description string
      • inference_config object
        Hide inference_config attribute Show inference_config attribute object
        • * object Additional properties
      • metadata object
        Hide metadata attribute Show metadata attribute object
        • * object Additional properties
      • minimum_version string
      • model_repository string
      • model_type string
      • packaged_model_id string Required
      • platform_architecture string
      • prefix_strings object
        Hide prefix_strings attributes Show prefix_strings attributes object
        • ingest string

          String prepended to input at ingest

      • size number | string

      • sha256 string
      • tags array[string]
      • vocabulary_file string
    • location object
      Hide location attribute Show location attribute object
      • index object Required
        Hide index attribute Show index attribute object
        • name string Required
    • platform_architecture string
    • prefix_strings object
      Hide prefix_strings attributes Show prefix_strings attributes object
      • ingest string

        String prepended to input at ingest

PUT /_ml/trained_models/{model_id}
curl \
 --request PUT 'http://api.example.com/_ml/trained_models/{model_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"compressed_definition":"string","definition":{"preprocessors":[{"frequency_encoding":{"field":"string","feature_name":"string","frequency_map":{}},"one_hot_encoding":{"field":"string","hot_map":{}},"target_mean_encoding":{"field":"string","feature_name":"string","target_map":{},"default_value":42.0}}],"trained_model":{"tree":{"classification_labels":["string"],"feature_names":["string"],"target_type":"string","tree_structure":[{}]},"tree_node":{"decision_type":"string","default_left":true,"leaf_value":42.0,"left_child":42.0,"node_index":42.0,"right_child":42.0,"split_feature":42.0,"split_gain":42.0,"threshold":42.0},"ensemble":{"classification_labels":["string"],"feature_names":["string"],"target_type":"string","trained_models":[{}]}}},"description":"string","inference_config":{"regression":{"results_field":"string","num_top_feature_importance_values":0},"classification":{"num_top_classes":42.0,"num_top_feature_importance_values":0,"prediction_field_type":"string","results_field":"string","top_classes_results_field":"string"},"text_classification":{"num_top_classes":42.0,"tokenization":{},"results_field":"string","classification_labels":["string"],"vocabulary":{}},"zero_shot_classification":{"tokenization":{},"hypothesis_template":"\"This example is {}.\"","classification_labels":["string"],"results_field":"string","multi_label":false,"labels":["string"]},"fill_mask":{"mask_token":"string","num_top_classes":42.0,"tokenization":{},"results_field":"string","vocabulary":{}},"learning_to_rank":{"default_params":{"additionalProperty1":{},"additionalProperty2":{}},"feature_extractors":[{}],"num_top_feature_importance_values":42.0},"ner":{"tokenization":{},"results_field":"string","classification_labels":["string"],"vocabulary":{}},"pass_through":{"tokenization":{},"results_field":"string","vocabulary":{}},"text_embedding":{"embedding_size":42.0,"tokenization":{},"results_field":"string","vocabulary":{}},"text_expansion":{"tokenization":{},"results_field":"string","vocabulary":{}},"question_answering":{"num_top_classes":42.0,"tokenization":{},"results_field":"string","max_answer_length":42.0}},"input":{"field_names":"string"},"metadata":{},"model_type":"tree_ensemble","model_size_bytes":42.0,"platform_architecture":"string","tags":["string"],"prefix_strings":{"ingest":"string","search":"string"}}'


Query rules

Query rules enable you to configure per-query rules that are applied at query time to queries that match the specific rule. Query rules are organized into rulesets, collections of query rules that are matched against incoming queries. Query rules are applied using the rule query. If a query matches one or more rules in the ruleset, the query is rewritten to apply the rules before searching. This allows you, for example, to pin documents for only those queries that match a specific term.

Learn more about the rule query.
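
To make the flow concrete, here is a hedged sketch of a rule query in Python. It assumes a ruleset named my-ruleset already exists and contains a rule whose criteria match the query string "puggles"; the index and field names are illustrative.
from elasticsearch import Elasticsearch

client = Elasticsearch("http://api.example.com", api_key="...")  # connection details are assumed

# Wrap an ordinary ("organic") query in a rule query. If the match_criteria
# match a rule in my-ruleset, that rule (for example, a pinned document) is
# applied before the organic query runs.
resp = client.search(
    index="products",                                   # hypothetical index
    query={
        "rule": {
            "organic": {"match": {"description": "puggles"}},
            "match_criteria": {"query_string": "puggles"},
            "ruleset_ids": ["my-ruleset"],
        }
    },
)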


Get all query rulesets Generally available; Added in 8.10.0

GET /_query_rules

Get summarized information about the query rulesets.

Required authorization

  • Cluster privileges: manage_search_query_rules

Query parameters

  • from number

    The offset from the first result to fetch.

  • size number

    The maximum number of results to retrieve.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • results array[object] Required
      Hide results attributes Show results attributes object
      • ruleset_id string Required

        A unique identifier for the ruleset.

      • rule_total_count number Required

        The number of rules associated with the ruleset.

      • rule_criteria_types_counts object Required

        A map of criteria type (for example, exact) to the number of rules of that type.

        NOTE: The counts in rule_criteria_types_counts may be larger than the value of rule_total_count because a rule may have multiple criteria.

        Hide rule_criteria_types_counts attribute Show rule_criteria_types_counts attribute object
        • * number Additional properties
      • rule_type_counts object Required

        A map of rule type (for example, pinned) to the number of rules of that type.

        Hide rule_type_counts attribute Show rule_type_counts attribute object
        • * number Additional properties
GET /_query_rules
GET _query_rules/?from=0&size=3
resp = client.query_rules.list_rulesets(
    from_="0",
    size="3",
)
const response = await client.queryRules.listRulesets({
  from: 0,
  size: 3,
});
response = client.query_rules.list_rulesets(
  from: "0",
  size: "3"
)
$resp = $client->queryRules()->listRulesets([
    "from" => "0",
    "size" => "3",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_query_rules/?from=0&size=3"
client.queryRules().listRulesets(l -> l
    .from(0)
    .size(3)
);
Response examples (200)
A successful response from `GET _query_rules/?from=0&size=3`.
{
    "count": 3,
    "results": [
        {
            "ruleset_id": "ruleset-1",
            "rule_total_count": 1,
            "rule_criteria_types_counts": {
                "exact": 1
            }
        },
        {
            "ruleset_id": "ruleset-2",
            "rule_total_count": 2,
            "rule_criteria_types_counts": {
                "exact": 1,
                "fuzzy": 1
            }
        },
        {
            "ruleset_id": "ruleset-3",
            "rule_total_count": 3,
            "rule_criteria_types_counts": {
                "exact": 1,
                "fuzzy": 2
            }
        }
    ]
}
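When more rulesets exist than fit in one page, the from and size parameters can be combined with the count field in the response to walk the full list. A minimal sketch in Python, with an arbitrary page size and assumed connection details:
from elasticsearch import Elasticsearch

client = Elasticsearch("http://api.example.com", api_key="...")  # connection details are assumed

# Collect every ruleset summary, three at a time (the page size is arbitrary).
page_size, offset, rulesets = 3, 0, []
while True:
    resp = client.query_rules.list_rulesets(from_=offset, size=page_size)
    rulesets.extend(resp["results"])
    offset += page_size
    if offset >= resp["count"]:
        break
print([r["ruleset_id"] for r in rulesets])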


Run a scrolling search Generally available

POST /_search/scroll/{scroll_id}

All methods and paths for this operation:

GET /_search/scroll

POST /_search/scroll
GET /_search/scroll/{scroll_id}
POST /_search/scroll/{scroll_id}

IMPORTANT: The scroll API is no longer recommended for deep pagination. If you need to preserve the index state while paging through more than 10,000 hits, use the search_after parameter with a point in time (PIT).

The scroll API gets large sets of results from a single scrolling search request. To get the necessary scroll ID, submit a search API request that includes an argument for the scroll query parameter. The scroll parameter indicates how long Elasticsearch should retain the search context for the request. The search response returns a scroll ID in the _scroll_id response body parameter. You can then use the scroll ID with the scroll API to retrieve the next batch of results for the request. If the Elasticsearch security features are enabled, the access to the results of a specific scroll ID is restricted to the user or API key that submitted the search.

You can also use the scroll API to specify a new scroll parameter that extends or shortens the retention period for the search context.

IMPORTANT: Results from a scrolling search reflect the state of the index at the time of the initial search request. Subsequent indexing or document changes only affect later search and scroll requests.

Required authorization

  • Index privileges: read
External documentation

Path parameters

  • scroll_id string Required Deprecated

    The scroll ID

Query parameters

  • scroll string

    The period to retain the search context for scrolling.

    External documentation
  • scroll_id string Deprecated

    The scroll ID for scrolled search

  • rest_total_hits_as_int boolean

    If true, the API response’s hits.total property is returned as an integer. If false, the API response’s hits.total property is returned as an object.

application/json

Body

  • scroll string

    The period to retain the search context for scrolling.

    External documentation
  • scroll_id string Required

    The scroll ID of the search.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number Required

      The number of milliseconds it took Elasticsearch to run the request. This value is calculated by measuring the time elapsed between receipt of a request on the coordinating node and the time at which the coordinating node is ready to send the response. It includes:

      • Communication time between the coordinating node and data nodes
      • Time the request spends in the search thread pool, queued for execution
      • Actual run time

      It does not include:

      • Time needed to send the request to Elasticsearch
      • Time needed to serialize the JSON response
      • Time needed to send the response to a client
    • timed_out boolean Required

      If true, the request timed out before completion; returned results may be partial or empty.

    • _shards object Required

      A count of shards used for the request.

      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • hits object Required

      The returned documents and metadata.

      Hide hits attributes Show hits attributes object
      • total object | number

        Total hit count information, present only if track_total_hits wasn't false in the search request.

        One of:
        Hide attributes Show attributes
        • relation string Required

          Supported values include:

          • eq: Accurate
          • gte: Lower bound, including returned events or sequences

          Values are eq or gte.

        • value number Required
      • hits array[object] Required
        Hide hits attributes Show hits attributes object
        • _index string Required
        • _id string
        • _score number | string | null

        • _explanation object
        • fields object
          Hide fields attribute Show fields attribute object
          • * object Additional properties
        • highlight object
          Hide highlight attribute Show highlight attribute object
          • * array[string] Additional properties
        • inner_hits object
          Hide inner_hits attribute Show inner_hits attribute object
          • * object Additional properties
        • matched_queries array[string] | object

        • _nested object
        • _ignored array[string]
        • ignored_field_values object
          Hide ignored_field_values attribute Show ignored_field_values attribute object
          • * array[number | string | boolean | null | object] Additional properties
        • _shard string
        • _node string
        • _routing string
        • _source object
        • _rank number
        • _seq_no number
        • _primary_term number
        • _version number
        • sort array[number | string | boolean | null | object]
      • max_score number | string | null

    • aggregations object
    • _clusters object
      Hide _clusters attributes Show _clusters attributes object
      • skipped number Required
      • successful number Required
      • total number Required
      • running number Required
      • partial number Required
      • failed number Required
      • details object
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • status string Required

            Values are running, successful, partial, skipped, or failed.

          • indices string Required
          • timed_out boolean Required
          • _shards object
          • failures array[object]
    • fields object
      Hide fields attribute Show fields attribute object
      • * object Additional properties
    • max_score number
    • num_reduce_phases number
    • profile object
      Hide profile attribute Show profile attribute object
      • shards array[object] Required
        Hide shards attributes Show shards attributes object
        • aggregations array[object] Required
        • cluster string Required
        • dfs object
        • fetch object
        • id string Required
        • index string Required
        • node_id string Required
        • searches array[object] Required
        • shard_id number Required
    • pit_id string
    • _scroll_id string

      The identifier for the search and its search context. You can use this scroll ID with the scroll API to retrieve the next batch of search results for the request. This property is returned only if the scroll query parameter is specified in the request.

    • suggest object
      Hide suggest attribute Show suggest attribute object
      • * array[object] Additional properties
        One of:
        Hide attributes Show attributes
        • length number Required
        • offset number Required
        • text string Required
        • options
    • terminated_early boolean
POST /_search/scroll/{scroll_id}
GET /_search/scroll
{
  "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}
resp = client.scroll(
    scroll_id="DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
)
const response = await client.scroll({
  scroll_id: "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
});
response = client.scroll(
  body: {
    "scroll_id": "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
  }
)
$resp = $client->scroll([
    "body" => [
        "scroll_id" => "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
    ],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"scroll_id":"DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="}' "$ELASTICSEARCH_URL/_search/scroll"
client.scroll(s -> s
    .scrollId("DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==")
);
Request example
Run `GET /_search/scroll` to get the next batch of results for a scrolling search.
{
  "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}
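To show how the pieces fit together, here is a hedged end-to-end sketch in Python: open a search context with the scroll query parameter, page through the results with the scroll API, and then release the context with the clear scroll API. The index name, page size, and keep-alive period are illustrative.
from elasticsearch import Elasticsearch

client = Elasticsearch("http://api.example.com", api_key="...")  # connection details are assumed

# 1. Open a search context by passing the scroll query parameter to the search API.
page = client.search(
    index="my-index-000001",      # hypothetical index
    scroll="1m",                  # keep the search context alive for one minute
    size=1000,
    query={"match_all": {}},
)
scroll_id = page["_scroll_id"]

# 2. Keep calling the scroll API with the returned scroll ID until a page comes back empty.
while page["hits"]["hits"]:
    for hit in page["hits"]["hits"]:
        pass                      # process each hit here
    page = client.scroll(scroll_id=scroll_id, scroll="1m")
    scroll_id = page["_scroll_id"]

# 3. Release the search context as soon as you are done with it.
client.clear_scroll(scroll_id=scroll_id)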


Get terms in an index Generally available; Added in 7.14.0

POST /{index}/_terms_enum

All methods and paths for this operation:

GET /{index}/_terms_enum

POST /{index}/_terms_enum

Discover terms that match a partial string in an index. This API is designed for low-latency look-ups used in auto-complete scenarios.


The terms enum API may return terms from deleted documents. Deleted documents are initially only marked as deleted. It is not until their segments are merged that documents are actually deleted. Until that happens, the terms enum API will return terms from these documents.

Path parameters

  • index string Required

    A comma-separated list of data streams, indices, and index aliases to search. Wildcard (*) expressions are supported. To search all data streams or indices, omit this parameter or use * or _all.

application/json

Body Required

  • field string Required

    The name of the field from which matching terms are returned.

  • size number

    The number of matching terms to return.

    Default value is 10.

  • timeout string

    The maximum length of time to spend collecting results. If the timeout is exceeded, the complete flag is set to false in the response and the results may be partial or empty.

    External documentation
  • case_insensitive boolean

    When true, the provided search string is matched against index terms without case sensitivity.

    Default value is false.

  • index_filter object

    Filter an index shard if the provided query rewrites to match_none.

    External documentation
  • string string

    The string to match at the start of indexed terms. If it is not provided, all terms in the field are considered.


    The prefix string cannot be larger than the largest possible keyword value, which is Lucene's term byte-length limit of 32766.

  • search_after string

    The string after which terms in the index should be returned. It allows for a form of pagination if the last result from one request is passed as the search_after parameter for a subsequent request.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _shards object Required
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • terms array[string] Required
    • complete boolean Required

      If false, the returned terms set may be incomplete and should be treated as approximate. This can occur due to a few reasons, such as a request timeout or a node error.

POST /{index}/_terms_enum
POST stackoverflow/_terms_enum
{
    "field" : "tags",
    "string" : "kiba"
}
resp = client.terms_enum(
    index="stackoverflow",
    field="tags",
    string="kiba",
)
const response = await client.termsEnum({
  index: "stackoverflow",
  field: "tags",
  string: "kiba",
});
response = client.terms_enum(
  index: "stackoverflow",
  body: {
    "field": "tags",
    "string": "kiba"
  }
)
$resp = $client->termsEnum([
    "index" => "stackoverflow",
    "body" => [
        "field" => "tags",
        "string" => "kiba",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"field":"tags","string":"kiba"}' "$ELASTICSEARCH_URL/stackoverflow/_terms_enum"
client.termsEnum(t -> t
    .field("tags")
    .index("stackoverflow")
    .string("kiba")
);
Request example
Run `POST stackoverflow/_terms_enum`.
{
    "field" : "tags",
    "string" : "kiba"
}
Response examples (200)
A successful response from `POST stackoverflow/_terms_enum`.
{
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "terms": [
    "kibana"
  ],
  "complete" : true
}
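When a field has more matching terms than a single response returns, the search_after body parameter provides a simple form of pagination. Here is a hedged sketch in Python that reuses the stackoverflow index and tags field from the example above; the page size is arbitrary.
from elasticsearch import Elasticsearch

client = Elasticsearch("http://api.example.com", api_key="...")  # connection details are assumed

# Page through every tag that starts with "kiba", 100 terms at a time.
terms, search_after = [], None
while True:
    resp = client.terms_enum(
        index="stackoverflow",
        field="tags",
        string="kiba",
        size=100,
        search_after=search_after,   # omitted on the first request
    )
    terms.extend(resp["terms"])
    if len(resp["terms"]) < 100 or not resp["complete"]:
        break                        # last page, or the request timed out
    search_after = resp["terms"][-1]
print(terms)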


Get search applications Beta; Added in 8.8.0

GET /_application/search_application

Get information about search applications.

Required authorization

  • Cluster privileges: manage_search_application

Query parameters

  • q string

    Query in the Lucene query string syntax.

  • from number

    Starting offset.

  • size number

    The maximum number of results to retrieve.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • results array[object] Required
      Hide results attributes Show results attributes object
      • indices array[string] Required

        Indices that are part of the Search Application.

      • analytics_collection_name string

        The analytics collection associated with the Search Application.

      • template object

        Search template to use on search operations.

      • name string Required

        Search Application name

      • updated_at_millis number

        Time unit for milliseconds

GET /_application/search_application
GET _application/search_application?from=0&size=3&q=app*
resp = client.search_application.list(
    from_="0",
    size="3",
    q="app*",
)
const response = await client.searchApplication.list({
  from: 0,
  size: 3,
  q: "app*",
});
response = client.search_application.list(
  from: "0",
  size: "3",
  q: "app*"
)
$resp = $client->searchApplication()->list([
    "from" => "0",
    "size" => "3",
    "q" => "app*",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_application/search_application?from=0&size=3&q=app*"
client.searchApplication().list(l -> l
    .from(0)
    .q("app*")
    .size(3)
);
Response examples (200)
A successful response from `GET _application/search_application?from=0&size=3&q=app*` returns up to three search applications whose names start with `app`.
{
  "count": 2,
  "results": [
    {
      "name": "app-1",
      "updated_at_millis": 1690981129366
    },
    {
      "name": "app-2",
      "updated_at_millis": 1691501823939
    }
  ]
}




Run a search application search Beta; Added in 8.8.0

POST /_application/search_application/{name}/_search

All methods and paths for this operation:

GET /_application/search_application/{name}/_search

POST /_application/search_application/{name}/_search

Generate and run an Elasticsearch query that uses the specified query parameters and the search template associated with the search application or a default template. Unspecified template parameters are assigned their default values if applicable.

Path parameters

  • name string Required

    The name of the search application to be searched.

Query parameters

  • typed_keys boolean Generally available; Added in 8.14.0

    Determines whether aggregation names are prefixed by their respective types in the response.

application/json

Body

  • params object

    Query parameters specific to this request, which will override any defaults specified in the template.

    Hide params attribute Show params attribute object
    • * object Additional properties

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number Required

      The number of milliseconds it took Elasticsearch to run the request. This value is calculated by measuring the time elapsed between receipt of a request on the coordinating node and the time at which the coordinating node is ready to send the response. It includes:

      • Communication time between the coordinating node and data nodes
      • Time the request spends in the search thread pool, queued for execution
      • Actual run time

      It does not include:

      • Time needed to send the request to Elasticsearch
      • Time needed to serialize the JSON response
      • Time needed to send the response to a client
    • timed_out boolean Required

      If true, the request timed out before completion; returned results may be partial or empty.

    • _shards object Required

      A count of shards used for the request.

      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • hits object Required

      The returned documents and metadata.

      Hide hits attributes Show hits attributes object
      • total object | number

        Total hit count information, present only if track_total_hits wasn't false in the search request.

        One of:
        Hide attributes Show attributes
        • relation string Required

          Supported values include:

          • eq: Accurate
          • gte: Lower bound, including returned events or sequences

          Values are eq or gte.

        • value number Required
      • hits array[object] Required
        Hide hits attributes Show hits attributes object
        • _index string Required
        • _id string
        • _score number | string | null

        • _explanation object
        • fields object
          Hide fields attribute Show fields attribute object
          • * object Additional properties
        • highlight object
          Hide highlight attribute Show highlight attribute object
          • * array[string] Additional properties
        • inner_hits object
          Hide inner_hits attribute Show inner_hits attribute object
          • * object Additional properties
        • matched_queries array[string] | object

        • _nested object
        • _ignored array[string]
        • ignored_field_values object
          Hide ignored_field_values attribute Show ignored_field_values attribute object
          • * array[number | string | boolean | null | object] Additional properties
        • _shard string
        • _node string
        • _routing string
        • _source object
        • _rank number
        • _seq_no number
        • _primary_term number
        • _version number
        • sort array[number | string | boolean | null | object]
      • max_score number | string | null

    • aggregations object
    • _clusters object
      Hide _clusters attributes Show _clusters attributes object
      • skipped number Required
      • successful number Required
      • total number Required
      • running number Required
      • partial number Required
      • failed number Required
      • details object
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • status string Required

            Values are running, successful, partial, skipped, or failed.

          • indices string Required
          • timed_out boolean Required
          • _shards object
          • failures array[object]
    • fields object
      Hide fields attribute Show fields attribute object
      • * object Additional properties
    • max_score number
    • num_reduce_phases number
    • profile object
      Hide profile attribute Show profile attribute object
      • shards array[object] Required
        Hide shards attributes Show shards attributes object
        • aggregations array[object] Required
        • cluster string Required
        • dfs object
        • fetch object
        • id string Required
        • index string Required
        • node_id string Required
        • searches array[object] Required
        • shard_id number Required
    • pit_id string
    • _scroll_id string

      The identifier for the search and its search context. You can use this scroll ID with the scroll API to retrieve the next batch of search results for the request. This property is returned only if the scroll query parameter is specified in the request.

    • suggest object
      Hide suggest attribute Show suggest attribute object
      • * array[object] Additional properties
        One of:
        Hide attributes Show attributes
        • length number Required
        • offset number Required
        • text string Required
        • options
    • terminated_early boolean
POST /_application/search_application/{name}/_search
POST _application/search_application/my-app/_search
{
  "params": {
    "query_string": "my first query",
    "text_fields": [
      {"name": "title", "boost": 5},
      {"name": "description", "boost": 1}
    ]
  }
}
resp = client.search_application.search(
    name="my-app",
    params={
        "query_string": "my first query",
        "text_fields": [
            {
                "name": "title",
                "boost": 5
            },
            {
                "name": "description",
                "boost": 1
            }
        ]
    },
)
const response = await client.searchApplication.search({
  name: "my-app",
  params: {
    query_string: "my first query",
    text_fields: [
      {
        name: "title",
        boost: 5,
      },
      {
        name: "description",
        boost: 1,
      },
    ],
  },
});
response = client.search_application.search(
  name: "my-app",
  body: {
    "params": {
      "query_string": "my first query",
      "text_fields": [
        {
          "name": "title",
          "boost": 5
        },
        {
          "name": "description",
          "boost": 1
        }
      ]
    }
  }
)
$resp = $client->searchApplication()->search([
    "name" => "my-app",
    "body" => [
        "params" => [
            "query_string" => "my first query",
            "text_fields" => array(
                [
                    "name" => "title",
                    "boost" => 5,
                ],
                [
                    "name" => "description",
                    "boost" => 1,
                ],
            ),
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"params":{"query_string":"my first query","text_fields":[{"name":"title","boost":5},{"name":"description","boost":1}]}}' "$ELASTICSEARCH_URL/_application/search_application/my-app/_search"
client.searchApplication().search(s -> s
    .name("my-app")
    .params(Map.of("text_fields", JsonData.fromJson("[{\"name\":\"title\",\"boost\":5},{\"name\":\"description\",\"boost\":1}]"),"query_string", JsonData.fromJson("\"my first query\"")))
);
Request example
Use `POST _application/search_application/my-app/_search` to run a search against a search application called `my-app` that uses a search template.
{
  "params": {
    "query_string": "my first query",
    "text_fields": [
      {"name": "title", "boost": 5},
      {"name": "description", "boost": 1}
    ]
  }
}









Mount a snapshot Generally available; Added in 7.10.0

POST /_snapshot/{repository}/{snapshot}/_mount

Mount a snapshot as a searchable snapshot index. Do not use this API for snapshots managed by index lifecycle management (ILM). Manually mounting ILM-managed snapshots can interfere with ILM processes.

Required authorization

  • Index privileges: manage
  • Cluster privileges: manage

Path parameters

  • repository string Required

    The name of the repository containing the snapshot of the index to mount.

  • snapshot string Required

    The name of the snapshot of the index to mount.

Query parameters

  • master_timeout string

    The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1.

    External documentation
  • wait_for_completion boolean

    If true, the request blocks until the operation is complete.

  • storage string

    The mount option for the searchable snapshot index.

application/json

Body Required

  • index string Required

    The name of the index contained in the snapshot whose data is to be mounted. If no renamed_index is specified, this name will also be used to create the new index.

  • renamed_index string

    The name of the index that will be created.

  • index_settings object

    The settings that should be added to the index when it is mounted.

    Hide index_settings attribute Show index_settings attribute object
    • * object Additional properties
  • ignore_index_settings array[string]

    The names of settings that should be removed from the index when it is mounted.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • snapshot object Required
      Hide snapshot attributes Show snapshot attributes object
      • snapshot string Required
      • indices string | array[string] Required
      • shards object Required
        Hide shards attributes Show shards attributes object
        • failed number Required

          The number of shards the operation or search attempted to run on but failed.

        • successful number Required

          The number of shards the operation or search succeeded on.

        • total number Required

          The number of shards the operation or search will run on overall.

        • failures array[object]
        • skipped number
POST /_snapshot/{repository}/{snapshot}/_mount
POST /_snapshot/my_repository/my_snapshot/_mount?wait_for_completion=true
{
  "index": "my_docs",
  "renamed_index": "docs",
  "index_settings": {
    "index.number_of_replicas": 0
  },
  "ignore_index_settings": [ "index.refresh_interval" ]
}
resp = client.searchable_snapshots.mount(
    repository="my_repository",
    snapshot="my_snapshot",
    wait_for_completion=True,
    index="my_docs",
    renamed_index="docs",
    index_settings={
        "index.number_of_replicas": 0
    },
    ignore_index_settings=[
        "index.refresh_interval"
    ],
)
const response = await client.searchableSnapshots.mount({
  repository: "my_repository",
  snapshot: "my_snapshot",
  wait_for_completion: "true",
  index: "my_docs",
  renamed_index: "docs",
  index_settings: {
    "index.number_of_replicas": 0,
  },
  ignore_index_settings: ["index.refresh_interval"],
});
response = client.searchable_snapshots.mount(
  repository: "my_repository",
  snapshot: "my_snapshot",
  wait_for_completion: "true",
  body: {
    "index": "my_docs",
    "renamed_index": "docs",
    "index_settings": {
      "index.number_of_replicas": 0
    },
    "ignore_index_settings": [
      "index.refresh_interval"
    ]
  }
)
$resp = $client->searchableSnapshots()->mount([
    "repository" => "my_repository",
    "snapshot" => "my_snapshot",
    "wait_for_completion" => "true",
    "body" => [
        "index" => "my_docs",
        "renamed_index" => "docs",
        "index_settings" => [
            "index.number_of_replicas" => 0,
        ],
        "ignore_index_settings" => array(
            "index.refresh_interval",
        ),
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index":"my_docs","renamed_index":"docs","index_settings":{"index.number_of_replicas":0},"ignore_index_settings":["index.refresh_interval"]}' "$ELASTICSEARCH_URL/_snapshot/my_repository/my_snapshot/_mount?wait_for_completion=true"
client.searchableSnapshots().mount(m -> m
    .ignoreIndexSettings("index.refresh_interval")
    .index("my_docs")
    .indexSettings("index.number_of_replicas", JsonData.fromJson("0"))
    .renamedIndex("docs")
    .repository("my_repository")
    .snapshot("my_snapshot")
    .waitForCompletion(true)
);
Request example
Run `POST /_snapshot/my_repository/my_snapshot/_mount?wait_for_completion=true` to mount the index `my_docs` from an existing snapshot named `my_snapshot` stored in `my_repository` as a new index `docs`.
{
  "index": "my_docs",
  "renamed_index": "docs",
  "index_settings": {
    "index.number_of_replicas": 0
  },
  "ignore_index_settings": [ "index.refresh_interval" ]
}
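The storage query parameter controls how the mounted index keeps snapshot data on local disk. As a hedged variation on the example above, the sketch below requests shared_cache storage to create a partially mounted index (full_copy is the usual default for fully mounted indices); the renamed index name is illustrative.
from elasticsearch import Elasticsearch

client = Elasticsearch("http://api.example.com", api_key="...")  # connection details are assumed

# Mount the snapshotted index as a partially mounted index: only a shared
# cache of the snapshot data is kept on local disk.
resp = client.searchable_snapshots.mount(
    repository="my_repository",
    snapshot="my_snapshot",
    index="my_docs",
    renamed_index="docs-partial",        # hypothetical name for the new index
    storage="shared_cache",
    wait_for_completion=True,
)
print(resp["snapshot"]["shards"])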