Explain the shard allocations Generally available; Added in 5.0.0

POST /_cluster/allocation/explain

All methods and paths for this operation:

GET /_cluster/allocation/explain

POST /_cluster/allocation/explain

Get explanations for shard allocations in the cluster. This API accepts the current_node, index, primary, and shard parameters in the request body or as query parameters, but not both at the same time. For unassigned shards, it provides an explanation for why the shard is unassigned. For assigned shards, it provides an explanation for why the shard is remaining on its current node and has not moved or rebalanced to another node. This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise. Refer to the linked documentation for examples of how to troubleshoot allocation issues using this API.

External documentation

Query parameters

  • index string

    The name of the index that you would like an explanation for.

  • shard number

    An identifier for the shard that you would like an explanation for.

  • primary boolean

    If true, returns an explanation for the primary shard for the specified shard ID.

  • current_node string

    Explain a shard only if it is currently located on the specified node name or node ID.

  • include_disk_info boolean

    If true, returns information about disk usage and shard sizes.

  • include_yes_decisions boolean

    If true, returns YES decisions in explanation.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

application/json

Body

  • index string

    The name of the index that you would like an explanation for.

  • shard number

    An identifier for the shard that you would like an explanation for.

  • primary boolean

    If true, returns an explanation for the primary shard for the specified shard ID.

  • current_node string

    Explain a shard only if it is currently located on the specified node name or node ID.

Responses

  • 200 application/json
    • allocate_explanation string
    • allocation_delay string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • allocation_delay_in_millis number

      Time unit for milliseconds

    • can_allocate string

      Values are yes, no, worse_balance, throttled, awaiting_info, allocation_delayed, no_valid_shard_copy, or no_attempt.

    • can_move_to_other_node string

      Values are yes, no, worse_balance, throttled, awaiting_info, allocation_delayed, no_valid_shard_copy, or no_attempt.

    • can_rebalance_cluster string

      Values are yes, no, worse_balance, throttled, awaiting_info, allocation_delayed, no_valid_shard_copy, or no_attempt.

    • can_rebalance_cluster_decisions array[object]
      • decider string Required
      • decision string Required

        Values are NO, YES, THROTTLE, or ALWAYS.

      • explanation string Required
    • can_rebalance_to_other_node string

      Values are yes, no, worse_balance, throttled, awaiting_info, allocation_delayed, no_valid_shard_copy, or no_attempt.

    • can_remain_decisions array[object]
      • decider string Required
      • decision string Required

        Values are NO, YES, THROTTLE, or ALWAYS.

      • explanation string Required
    • can_remain_on_current_node string

      Values are yes, no, worse_balance, throttled, awaiting_info, allocation_delayed, no_valid_shard_copy, or no_attempt.

    • cluster_info object
      • nodes object Required
        • * object Additional properties
          • node_name string Required
          • least_available object Required
          • most_available object Required
      • shard_sizes object Required
        • * number Additional properties
      • shard_data_set_sizes object
        • * string Additional properties
      • shard_paths object Required
        • * string Additional properties
      • reserved_sizes array[object] Required
        • node_id string Required
        • path string Required
        • total number Required
        • shards array[string] Required
    • configured_delay string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • configured_delay_in_millis number

      Time unit for milliseconds

    • current_node object
      • id string Required
      • name string Required
      • roles array[string] Required

        Values are master, data, data_cold, data_content, data_frozen, data_hot, data_warm, client, ingest, ml, voting_only, transform, remote_cluster_client, or coordinating_only.

      • attributes object Required
        • * string Additional properties
      • transport_address string Required
      • weight_ranking number Required
    • current_state string Required
    • index string Required
    • move_explanation string
    • node_allocation_decisions array[object]
      • deciders array[object] Required
        • decider string Required
        • decision string Required

          Values are NO, YES, THROTTLE, or ALWAYS.

        • explanation string Required
      • node_attributes object Required
        • * string Additional properties
      • node_decision string Required

        Values are yes, no, worse_balance, throttled, awaiting_info, allocation_delayed, no_valid_shard_copy, or no_attempt.

      • node_id string Required
      • node_name string Required
      • roles array[string] Required

        Values are master, data, data_cold, data_content, data_frozen, data_hot, data_warm, client, ingest, ml, voting_only, transform, remote_cluster_client, or coordinating_only.

      • store object
        • allocation_id string Required
        • found boolean Required
        • in_sync boolean Required
        • matching_size_in_bytes number Required
        • matching_sync_id boolean Required
        • store_exception string Required
      • transport_address string Required
      • weight_ranking number Required
    • primary boolean Required
    • rebalance_explanation string
    • remaining_delay string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • remaining_delay_in_millis number

      Time unit for milliseconds

    • shard number Required
    • unassigned_info object
      • at string | number
      • last_allocation_status string
      • reason string Required

        Values are INDEX_CREATED, CLUSTER_RECOVERED, INDEX_REOPENED, DANGLING_INDEX_IMPORTED, NEW_INDEX_RESTORED, EXISTING_INDEX_RESTORED, REPLICA_ADDED, ALLOCATION_FAILED, NODE_LEFT, REROUTE_CANCELLED, REINITIALIZED, REALLOCATED_REPLICA, PRIMARY_FAILED, FORCED_EMPTY_PRIMARY, or MANUAL_ALLOCATION.

      • details string
      • failed_allocation_attempts number
      • delayed boolean
      • allocation_status string
    • note string Generally available; Added in 7.14.0
GET _cluster/allocation/explain
{
  "index": "my-index-000001",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
resp = client.cluster.allocation_explain(
    index="my-index-000001",
    shard=0,
    primary=False,
    current_node="my-node",
)
const response = await client.cluster.allocationExplain({
  index: "my-index-000001",
  shard: 0,
  primary: false,
  current_node: "my-node",
});
response = client.cluster.allocation_explain(
  body: {
    "index": "my-index-000001",
    "shard": 0,
    "primary": false,
    "current_node": "my-node"
  }
)
$resp = $client->cluster()->allocationExplain([
    "body" => [
        "index" => "my-index-000001",
        "shard" => 0,
        "primary" => false,
        "current_node" => "my-node",
    ],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index":"my-index-000001","shard":0,"primary":false,"current_node":"my-node"}' "$ELASTICSEARCH_URL/_cluster/allocation/explain"
Request examples
Run `GET _cluster/allocation/explain` to get an explanation for a shard's current allocation.
{
  "index": "my-index-000001",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
Run `GET _cluster/allocation/explain?index=my-index-000001&shard=0&primary=false&current_node=my-node` to get an explanation for a shard's current allocation. No parameters are required in the request body.
{}
Response examples (200)
An example of an allocation explanation for an unassigned primary shard. In this example, a newly created index has an index setting that requires that it only be allocated to a node named `nonexistent_node`, which does not exist, so the shard cannot be allocated.
{
  "index" : "my-index-000001",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "INDEX_CREATED",
    "at" : "2017-01-04T18:08:16.600Z",
    "last_allocation_status" : "no"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "Elasticsearch isn't allowed to allocate this shard to any of the nodes in the cluster. Choose a node to which you expect this shard to be allocated, find this node in the node-by-node explanation, and address the reasons which prevent Elasticsearch from allocating this shard there.",
  "node_allocation_decisions" : [
    {
      "node_id" : "8qt2rY-pT6KNZB3-hGfLnw",
      "node_name" : "node-0",
      "transport_address" : "127.0.0.1:9401",
      "roles" : ["data", "data_cold", "data_content", "data_frozen", "data_hot", "data_warm", "ingest", "master", "ml", "remote_cluster_client", "transform"],
      "node_attributes" : {},
      "node_decision" : "no",
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"nonexistent_node\"]"
        }
      ]
    }
  ]
}
An example of an allocation explanation for an unassigned primary shard that has reached the maximum number of allocation retry attempts. After the maximum number of retries is reached, Elasticsearch stops attempting to allocate the shard in order to prevent infinite retries which may impact cluster performance.
{
  "index" : "my-index-000001",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "at" : "2017-01-04T18:03:28.464Z",
    "failed shard on node [mEKjwwzLT1yJVb8UxT6anw]: failed recovery, failure RecoveryFailedException",
    "reason": "ALLOCATION_FAILED",
    "failed_allocation_attempts": 5,
    "last_allocation_status": "no",
  },
  "can_allocate": "no",
  "allocate_explanation": "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "3sULLVJrRneSg0EfBB-2Ew",
      "node_name" : "node_t0",
      "transport_address" : "127.0.0.1:9400",
      "roles" : ["data_content", "data_hot"],
      "node_decision" : "no",
      "store" : {
        "matching_size" : "4.2kb",
        "matching_size_in_bytes" : 4325
      },
      "deciders" : [
        {
          "decider": "max_retry",
          "decision" : "NO",
          "explanation": "shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [POST /_cluster/reroute?retry_failed] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2024-07-30T21:04:12.166Z], failed_attempts[5], failed_nodes[[mEKjwwzLT1yJVb8UxT6anw]], delayed=false, details[failed shard on node [mEKjwwzLT1yJVb8UxT6anw]: failed recovery, failure RecoveryFailedException], allocation_status[deciders_no]]]"
        }
      ]
    }
  ]
}
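
The max_retry decider message above points to the remedy: fix the underlying failure, then ask Elasticsearch to retry the blocked allocations. A minimal sketch with the Python client (connection details are placeholders; the calls mirror the snippets above):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")

# Retry allocations that exceeded the failed-allocation limit, equivalent to
# POST /_cluster/reroute?retry_failed=true from the decider explanation.
client.cluster.reroute(retry_failed=True)

# Re-run the explain API to confirm the shard is no longer stuck.
explanation = client.cluster.allocation_explain(
    index="my-index-000001", shard=0, primary=True
)
print(explanation["can_allocate"])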

Update the cluster settings Generally available

PUT /_cluster/settings

Configure and update dynamic settings on a running cluster. You can also configure dynamic settings locally on an unstarted or shut down node in elasticsearch.yml.

Updates made with this API can be persistent, which apply across cluster restarts, or transient, which reset after a cluster restart. You can also reset transient or persistent settings by assigning them a null value.

If you configure the same setting using multiple methods, Elasticsearch applies the settings in the following order of precedence: 1) Transient setting; 2) Persistent setting; 3) elasticsearch.yml setting; 4) Default setting value. For example, you can apply a transient setting to override a persistent setting or elasticsearch.yml setting. However, a change to an elasticsearch.yml setting will not override a defined transient or persistent setting.
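
Because of this precedence order, clearing a transient setting lets the next value in the chain take effect again. A minimal sketch with the Python client, assuming a reachable cluster; the setting name is only an example:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")

# Assign null (None in Python) to reset the transient setting; the persistent
# value, elasticsearch.yml value, or default then applies, in that order.
client.cluster.put_settings(
    transient={"indices.recovery.max_bytes_per_sec": None}
)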

TIP: In Elastic Cloud, use the user settings feature to configure all cluster settings. This method automatically rejects unsafe settings that could break your cluster. If you run Elasticsearch on your own hardware, use this API to configure dynamic cluster settings. Only use elasticsearch.yml for static cluster settings and node settings. The API doesn’t require a restart and ensures a setting’s value is the same on all nodes.

WARNING: Transient cluster settings are no longer recommended. Use persistent cluster settings instead. If a cluster becomes unstable, transient settings can clear unexpectedly, resulting in a potentially undesired cluster configuration.

External documentation

Query parameters

  • flat_settings boolean

    Return settings in flat format (default: false)

  • master_timeout string

    Explicit operation timeout for connection to master node

    Values are -1 or 0.

  • timeout string

    Explicit operation timeout

    Values are -1 or 0.

application/json

Body Required

  • persistent object

    The settings that persist after the cluster restarts.

    • * object Additional properties
  • transient object

    The settings that do not persist after the cluster restarts.

    • * object Additional properties

Responses

  • 200 application/json
    • acknowledged boolean Required
    • persistent object Required
      • * object Additional properties
    • transient object Required
      • * object Additional properties
PUT /_cluster/settings
{
  "persistent" : {
    "indices.recovery.max_bytes_per_sec" : "50mb"
  }
}
resp = client.cluster.put_settings(
    persistent={
        "indices.recovery.max_bytes_per_sec": "50mb"
    },
)
const response = await client.cluster.putSettings({
  persistent: {
    "indices.recovery.max_bytes_per_sec": "50mb",
  },
});
response = client.cluster.put_settings(
  body: {
    "persistent": {
      "indices.recovery.max_bytes_per_sec": "50mb"
    }
  }
)
$resp = $client->cluster()->putSettings([
    "body" => [
        "persistent" => [
            "indices.recovery.max_bytes_per_sec" => "50mb",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"persistent":{"indices.recovery.max_bytes_per_sec":"50mb"}}' "$ELASTICSEARCH_URL/_cluster/settings"
Request examples
An example of a persistent update.
{
  "persistent" : {
    "indices.recovery.max_bytes_per_sec" : "50mb"
  }
}
Run `PUT /_cluster/settings` to update the `action.auto_create_index` setting. The setting accepts a comma-separated list of patterns that you want to allow, or you can prefix each pattern with `+` or `-` to indicate whether it should be allowed or blocked. In this example, the auto-creation of indices called `my-index-000001` or `index10` is allowed, the creation of indices that match the pattern `index1*` is blocked, and the creation of any other indices that match the `ind*` pattern is allowed. Patterns are matched in the order specified.
{
  "persistent": {
    "action.auto_create_index": "my-index-000001,index10,-index1*,+ind*" 
  }
}

Get the pending cluster tasks Generally available

GET /_cluster/pending_tasks

Get information about cluster-level changes (such as create index, update mapping, allocate or fail shard) that have not yet taken effect.

NOTE: This API returns a list of any pending updates to the cluster state. These are distinct from the tasks reported by the task management API, which include periodic tasks and tasks initiated by the user, such as node stats, search queries, or create index requests. However, if a user-initiated task such as a create index command causes a cluster state update, the activity of this task might be reported by both the task management API and the pending cluster tasks API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • local boolean

    If true, the request retrieves information from the local node only. If false, information is retrieved from the master node.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    • tasks array[object] Required
      • executing boolean Required

        Indicates whether the pending tasks are currently executing or not.

      • insert_order number Required

        The number that represents when the task was inserted into the task queue.

      • priority string Required

        The priority of the pending task. The valid priorities in descending priority order are: IMMEDIATE > URGENT > HIGH > NORMAL > LOW > LANGUID.

      • source string Required

        A general description of the cluster task that may include a reason and origin.

      • time_in_queue string

        The time the task has been waiting to be performed.

      • time_in_queue_millis number

        Time unit for milliseconds

GET /_cluster/pending_tasks
resp = client.cluster.pending_tasks()
const response = await client.cluster.pendingTasks();
response = client.cluster.pending_tasks
$resp = $client->cluster()->pendingTasks();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/pending_tasks"

Update the connector scheduling Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_scheduling

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • scheduling object Required
    • access_control object
      • enabled boolean Required
      • interval string Required

        The interval is expressed using the crontab syntax

    • full object
      • enabled boolean Required
      • interval string Required

        The interval is expressed using the crontab syntax

    • incremental object
      • enabled boolean Required
      • interval string Required

        The interval is expressed using the crontab syntax

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT _connector/my-connector/_scheduling
{
    "scheduling": {
        "access_control": {
            "enabled": true,
            "interval": "0 10 0 * * ?"
        },
        "full": {
            "enabled": true,
            "interval": "0 20 0 * * ?"
        },
        "incremental": {
            "enabled": false,
            "interval": "0 30 0 * * ?"
        }
    }
}
resp = client.connector.update_scheduling(
    connector_id="my-connector",
    scheduling={
        "access_control": {
            "enabled": True,
            "interval": "0 10 0 * * ?"
        },
        "full": {
            "enabled": True,
            "interval": "0 20 0 * * ?"
        },
        "incremental": {
            "enabled": False,
            "interval": "0 30 0 * * ?"
        }
    },
)
const response = await client.connector.updateScheduling({
  connector_id: "my-connector",
  scheduling: {
    access_control: {
      enabled: true,
      interval: "0 10 0 * * ?",
    },
    full: {
      enabled: true,
      interval: "0 20 0 * * ?",
    },
    incremental: {
      enabled: false,
      interval: "0 30 0 * * ?",
    },
  },
});
response = client.connector.update_scheduling(
  connector_id: "my-connector",
  body: {
    "scheduling": {
      "access_control": {
        "enabled": true,
        "interval": "0 10 0 * * ?"
      },
      "full": {
        "enabled": true,
        "interval": "0 20 0 * * ?"
      },
      "incremental": {
        "enabled": false,
        "interval": "0 30 0 * * ?"
      }
    }
  }
)
$resp = $client->connector()->updateScheduling([
    "connector_id" => "my-connector",
    "body" => [
        "scheduling" => [
            "access_control" => [
                "enabled" => true,
                "interval" => "0 10 0 * * ?",
            ],
            "full" => [
                "enabled" => true,
                "interval" => "0 20 0 * * ?",
            ],
            "incremental" => [
                "enabled" => false,
                "interval" => "0 30 0 * * ?",
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"scheduling":{"access_control":{"enabled":true,"interval":"0 10 0 * * ?"},"full":{"enabled":true,"interval":"0 20 0 * * ?"},"incremental":{"enabled":false,"interval":"0 30 0 * * ?"}}}' "$ELASTICSEARCH_URL/_connector/my-connector/_scheduling"
Request examples
{
    "scheduling": {
        "access_control": {
            "enabled": true,
            "interval": "0 10 0 * * ?"
        },
        "full": {
            "enabled": true,
            "interval": "0 20 0 * * ?"
        },
        "incremental": {
            "enabled": false,
            "interval": "0 30 0 * * ?"
        }
    }
}
{
    "scheduling": {
        "full": {
            "enabled": true,
            "interval": "0 10 0 * * ?"
        }
    }
}
Response examples (200)
{
  "result": "updated"
}

Get follower stats Generally available; Added in 6.5.0

GET /{index}/_ccr/stats

Get cross-cluster replication follower stats. The API returns shard-level stats about the "following tasks" associated with each shard for the specified indices.

Required authorization

  • Cluster privileges: monitor
External documentation

Path parameters

  • index string | array[string] Required

    A comma-delimited list of index patterns.

Query parameters

  • timeout string

    The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    • indices array[object] Required

      An array of follower index statistics.

      • index string Required

        The name of the follower index.

      • shards array[object] Required

        An array of shard-level following task statistics.

        • bytes_read number Required

          The total number of transferred bytes read from the leader. This is only an estimate and does not account for compression if it is enabled.

        • failed_read_requests number Required

          The number of failed reads.

        • failed_write_requests number Required

          The number of failed bulk write requests on the follower.

        • fatal_exception object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided that depend on the error type.

        • follower_aliases_version number Required

          The index aliases version the follower is synced up to.

        • follower_global_checkpoint number Required

          The current global checkpoint on the follower. The difference between the leader_global_checkpoint and the follower_global_checkpoint is an indication of how much the follower is lagging the leader.

        • follower_index string Required

          The name of the follower index.

        • follower_mapping_version number Required

          The mapping version the follower is synced up to.

        • follower_max_seq_no number Required

          The current maximum sequence number on the follower.

        • follower_settings_version number Required

          The index settings version the follower is synced up to.

        • last_requested_seq_no number Required

          The starting sequence number of the last batch of operations requested from the leader.

        • leader_global_checkpoint number Required

          The current global checkpoint on the leader known to the follower task.

        • leader_index string Required

          The name of the index in the leader cluster being followed.

        • leader_max_seq_no number Required

          The current maximum sequence number on the leader known to the follower task.

        • operations_read number Required

          The total number of operations read from the leader.

        • operations_written number Required

          The number of operations written on the follower.

        • outstanding_read_requests number Required

          The number of active read requests from the follower.

        • outstanding_write_requests number Required

          The number of active bulk write requests on the follower.

        • read_exceptions array[object] Required

          An array of objects representing failed reads.

        • remote_cluster string Required

          The remote cluster containing the leader index.

        • shard_id number Required

          The numerical shard ID, with values from 0 to one less than the number of shards.

        • successful_read_requests number Required

          The number of successful fetches.

        • successful_write_requests number Required

          The number of bulk write requests run on the follower.

        • time_since_last_read string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • time_since_last_read_millis number

          Time unit for milliseconds

        • total_read_remote_exec_time string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • total_read_remote_exec_time_millis number

          Time unit for milliseconds

        • total_read_time string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • total_read_time_millis number

          Time unit for milliseconds

        • total_write_time string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • total_write_time_millis number

          Time unit for milliseconds

        • write_buffer_operation_count number Required

          The number of write operations queued on the follower.

        • write_buffer_size_in_bytes number

          The total number of bytes of operations currently queued for writing.

GET /follower_index/_ccr/stats
resp = client.ccr.follow_stats(
    index="follower_index",
)
const response = await client.ccr.followStats({
  index: "follower_index",
});
response = client.ccr.follow_stats(
  index: "follower_index"
)
$resp = $client->ccr()->followStats([
    "index" => "follower_index",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/follower_index/_ccr/stats"
Response examples (200)
A successful response from `GET /follower_index/_ccr/stats`, which retrieves follower stats.
{
  "indices" : [
    {
      "index" : "follower_index",
      "total_global_checkpoint_lag" : 256,
      "shards" : [
        {
          "remote_cluster" : "remote_cluster",
          "leader_index" : "leader_index",
          "follower_index" : "follower_index",
          "shard_id" : 0,
          "leader_global_checkpoint" : 1024,
          "leader_max_seq_no" : 1536,
          "follower_global_checkpoint" : 768,
          "follower_max_seq_no" : 896,
          "last_requested_seq_no" : 897,
          "outstanding_read_requests" : 8,
          "outstanding_write_requests" : 2,
          "write_buffer_operation_count" : 64,
          "follower_mapping_version" : 4,
          "follower_settings_version" : 2,
          "follower_aliases_version" : 8,
          "total_read_time_millis" : 32768,
          "total_read_remote_exec_time_millis" : 16384,
          "successful_read_requests" : 32,
          "failed_read_requests" : 0,
          "operations_read" : 896,
          "bytes_read" : 32768,
          "total_write_time_millis" : 16384,
          "write_buffer_size_in_bytes" : 1536,
          "successful_write_requests" : 16,
          "failed_write_requests" : 0,
          "operations_written" : 832,
          "read_exceptions" : [ ],
          "time_since_last_read_millis" : 8
        }
      ]
    }
  ]
}
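
As the attribute descriptions above note, the difference between leader_global_checkpoint and follower_global_checkpoint indicates how far a follower shard lags its leader. A small sketch that derives this from the response, using the Python client from the snippets above; in the example response it yields 256, matching total_global_checkpoint_lag:

resp = client.ccr.follow_stats(index="follower_index")

for index_stats in resp["indices"]:
    for shard in index_stats["shards"]:
        # Per-shard replication lag, in operations.
        lag = shard["leader_global_checkpoint"] - shard["follower_global_checkpoint"]
        print(index_stats["index"], shard["shard_id"], lag)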

Forget a follower Generally available; Added in 6.7.0

POST /{index}/_ccr/forget_follower

Remove the cross-cluster replication follower retention leases from the leader.

A following index takes out retention leases on its leader index. These leases are used to increase the likelihood that the shards of the leader index retain the history of operations that the shards of the following index need to run replication. When a follower index is converted to a regular index by the unfollow API (either by directly calling the API or by index lifecycle management tasks), these leases are removed. However, removal of the leases can fail, for example when the remote cluster containing the leader index is unavailable. While the leases will eventually expire on their own, their extended existence can cause the leader index to hold more history than necessary and prevent index lifecycle management from performing some operations on the leader index. This API exists to enable manually removing the leases when the unfollow API is unable to do so.

NOTE: This API does not stop replication by a following index. If you use this API with a follower index that is still actively following, the following index will add back retention leases on the leader. The only purpose of this API is to handle the case of failure to remove the following retention leases after the unfollow API is invoked.

External documentation

Path parameters

  • index string Required

    The name of the leader index for which the specified follower retention leases should be removed.

Query parameters

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

application/json

Body Required

  • follower_cluster string
  • follower_index string
  • follower_index_uuid string
  • leader_remote_cluster string

Responses

  • 200 application/json
    • _shards object Required
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
POST /<leader_index>/_ccr/forget_follower
{
  "follower_cluster" : "<follower_cluster>",
  "follower_index" : "<follower_index>",
  "follower_index_uuid" : "<follower_index_uuid>",
  "leader_remote_cluster" : "<leader_remote_cluster>"
}
resp = client.ccr.forget_follower(
    index="<leader_index>",
    follower_cluster="<follower_cluster>",
    follower_index="<follower_index>",
    follower_index_uuid="<follower_index_uuid>",
    leader_remote_cluster="<leader_remote_cluster>",
)
const response = await client.ccr.forgetFollower({
  index: "<leader_index>",
  follower_cluster: "<follower_cluster>",
  follower_index: "<follower_index>",
  follower_index_uuid: "<follower_index_uuid>",
  leader_remote_cluster: "<leader_remote_cluster>",
});
response = client.ccr.forget_follower(
  index: "<leader_index>",
  body: {
    "follower_cluster": "<follower_cluster>",
    "follower_index": "<follower_index>",
    "follower_index_uuid": "<follower_index_uuid>",
    "leader_remote_cluster": "<leader_remote_cluster>"
  }
)
$resp = $client->ccr()->forgetFollower([
    "index" => "<leader_index>",
    "body" => [
        "follower_cluster" => "<follower_cluster>",
        "follower_index" => "<follower_index>",
        "follower_index_uuid" => "<follower_index_uuid>",
        "leader_remote_cluster" => "<leader_remote_cluster>",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"follower_cluster":"<follower_cluster>","follower_index":"<follower_index>","follower_index_uuid":"<follower_index_uuid>","leader_remote_cluster":"<leader_remote_cluster>"}' "$ELASTICSEARCH_URL/<leader_index>/_ccr/forget_follower"
Request example
Run `POST /<leader_index>/_ccr/forget_follower`.
{
  "follower_cluster" : "<follower_cluster>",
  "follower_index" : "<follower_index>",
  "follower_index_uuid" : "<follower_index_uuid>",
  "leader_remote_cluster" : "<leader_remote_cluster>"
}
Response examples (200)
A successful response for removing the follower retention leases from the leader index.
{
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0,
    "failures" : [ ]
  }
}

Pause a follower Generally available; Added in 6.5.0

POST /{index}/_ccr/pause_follow

Pause a cross-cluster replication follower index. The follower index will not fetch any additional operations from the leader index. You can resume following with the resume follower API. You can pause and resume a follower index to change the configuration of the following task.

Required authorization

  • Cluster privileges: manage_ccr

Path parameters

  • index string Required

    The name of the follower index.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    Values are -1 or 0.

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /follower_index/_ccr/pause_follow
resp = client.ccr.pause_follow(
    index="follower_index",
)
const response = await client.ccr.pauseFollow({
  index: "follower_index",
});
response = client.ccr.pause_follow(
  index: "follower_index"
)
$resp = $client->ccr()->pauseFollow([
    "index" => "follower_index",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/follower_index/_ccr/pause_follow"
Response examples (200)
A successful response from `POST /follower_index/_ccr/pause_follow`, which pauses a follower index.
{
  "acknowledged" : true
}

Create or update a document in an index Generally available

POST /{index}/_doc/{id}

All methods and paths for this operation:

POST /{index}/_doc

PUT /{index}/_doc/{id}
POST /{index}/_doc/{id}

Add a JSON document to the specified data stream or index and make it searchable. If the target is an index and the document already exists, the request updates the document and increments its version.

NOTE: You cannot use this API to send update requests for existing documents in a data stream.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:

  • To add or overwrite a document using the PUT /<target>/_doc/<_id> request format, you must have the create, index, or write index privilege.
  • To add a document using the POST /<target>/_doc/ request format, you must have the create_doc, create, index, or write index privilege.
  • To automatically create a data stream or index with this API request, you must have the auto_configure, create_index, or manage index privilege.

Automatic data stream creation requires a matching index template with data stream enabled.

NOTE: Replica shards might not all be started when an indexing operation returns successfully. By default, only the primary is required. Set wait_for_active_shards to change this default behavior.

Automatically create data streams and indices

If the request's target doesn't exist and matches an index template with a data_stream definition, the index operation automatically creates the data stream.

If the target doesn't exist and doesn't match a data stream template, the operation automatically creates the index and applies any matching index templates.

NOTE: Elasticsearch includes several built-in index templates. To avoid naming collisions with these templates, refer to index pattern documentation.

If no mapping exists, the index operation creates a dynamic mapping. By default, new fields and objects are automatically added to the mapping if needed.

Automatic index creation is controlled by the action.auto_create_index setting. If it is true, any index can be created automatically. You can modify this setting to explicitly allow or block automatic creation of indices that match specified patterns or set it to false to turn off automatic index creation entirely. Specify a comma-separated list of patterns you want to allow or prefix each pattern with + or - to indicate whether it should be allowed or blocked. When a list is specified, the default behavior is to disallow.

NOTE: The action.auto_create_index setting affects the automatic creation of indices only. It does not affect the creation of data streams.
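
A minimal sketch of adjusting this setting with the Python client (assuming a client instance named client, as in the examples later in this section); the pattern list is an example only:

# Patterns are matched in order: allow indices matching my-index-*,
# block automatic creation of everything else.
client.cluster.put_settings(
    persistent={"action.auto_create_index": "+my-index-*,-*"}
)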

Optimistic concurrency control

Index operations can be made conditional and only be performed if the last modification to the document was assigned the sequence number and primary term specified by the if_seq_no and if_primary_term parameters. If a mismatch is detected, the operation will result in a VersionConflictException and a status code of 409.
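
A minimal sketch of a conditional write with the Python client. The if_seq_no and if_primary_term values here are hypothetical; in practice they come from a previous read or write of the same document:

from elasticsearch import exceptions

try:
    client.index(
        index="my-index-000001",
        id="1",
        if_seq_no=10,          # hypothetical values from a prior response
        if_primary_term=1,
        document={"user": {"id": "elkbee"}},
    )
except exceptions.ConflictError:
    # 409: the document was modified after seq_no 10 / primary term 1.
    print("version conflict; re-read the document and retry")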

Routing

By default, shard placement — or routing — is controlled by using a hash of the document's ID value. For more explicit control, the value fed into the hash function used by the router can be directly specified on a per-operation basis using the routing parameter.

When setting up explicit mapping, you can also use the _routing field to direct the index operation to extract the routing value from the document itself. This does come at the (very minimal) cost of an additional document parsing pass. If the _routing mapping is defined and set to be required, the index operation will fail if no routing value is provided or extracted.

NOTE: Data streams do not support custom routing unless they were created with the allow_custom_routing setting enabled in the template.
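
A minimal sketch of per-operation routing with the Python client; "user1" is an arbitrary example routing value:

resp = client.index(
    index="my-index-000001",
    routing="user1",  # fed into the routing hash instead of the document ID
    document={"message": "routed by user1"},
)

# Reads of this document must supply the same routing value.
doc = client.get(index="my-index-000001", id=resp["_id"], routing="user1")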

Distributed

The index operation is directed to the primary shard based on its route and performed on the actual node containing this shard. After the primary shard completes the operation, if needed, the update is distributed to applicable replicas.

Active shards

To improve the resiliency of writes to the system, indexing operations can be configured to wait for a certain number of active shard copies before proceeding with the operation. If the requisite number of active shard copies are not available, then the write operation must wait and retry, until either the requisite shard copies have started or a timeout occurs. By default, write operations only wait for the primary shards to be active before proceeding (that is to say wait_for_active_shards is 1). This default can be overridden in the index settings dynamically by setting index.write.wait_for_active_shards. To alter this behavior per operation, use the wait_for_active_shards request parameter.

Valid values are all or any positive integer up to the total number of configured copies per shard in the index (which is number_of_replicas+1). Specifying a negative value or a number greater than the number of shard copies will throw an error.

For example, suppose you have a cluster of three nodes, A, B, and C and you create an index index with the number of replicas set to 3 (resulting in 4 shard copies, one more copy than there are nodes). If you attempt an indexing operation, by default the operation will only ensure the primary copy of each shard is available before proceeding. This means that even if B and C went down and A hosted the primary shard copies, the indexing operation would still proceed with only one copy of the data. If wait_for_active_shards is set on the request to 3 (and all three nodes are up), the indexing operation will require 3 active shard copies before proceeding. This requirement should be met because there are 3 active nodes in the cluster, each one holding a copy of the shard. However, if you set wait_for_active_shards to all (or to 4, which is the same in this situation), the indexing operation will not proceed as you do not have all 4 copies of each shard active in the index. The operation will timeout unless a new node is brought up in the cluster to host the fourth copy of the shard.

It is important to note that this setting greatly reduces the chances of the write operation not writing to the requisite number of shard copies, but it does not completely eliminate the possibility, because this check occurs before the write operation starts. After the write operation is underway, it is still possible for replication to fail on any number of shard copies but still succeed on the primary. The _shards section of the API response reveals the number of shard copies on which replication succeeded and failed.
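
A minimal sketch of overriding the active-shard check per operation with the Python client; the value mirrors the three-node example above:

# Require 3 active copies of the target shard (the primary plus 2 replicas)
# before indexing; "all" would require every configured copy.
client.index(
    index="my-index-000001",
    wait_for_active_shards=3,
    document={"message": "written only once 3 copies are active"},
)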

No operation (noop) updates

When updating a document by using this API, a new version of the document is always created even if the document hasn't changed. If this isn't acceptable, use the _update API with detect_noop set to true. The detect_noop option isn't available on this API because it doesn't fetch the old source and isn't able to compare it against the new source.

There isn't a definitive rule for when noop updates aren't acceptable. It's a combination of lots of factors like how frequently your data source sends updates that are actually noops and how many queries per second Elasticsearch runs on the shard receiving the updates.
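
A minimal sketch of the alternative described above: the update API with detect_noop, which is a separate API from the index operation documented here:

# If the merged document is identical to the stored one, Elasticsearch skips
# the write and reports "noop" instead of incrementing the version.
resp = client.update(
    index="my-index-000001",
    id="1",
    doc={"user": {"id": "kimchy"}},
    detect_noop=True,
)
print(resp["result"])  # "updated" or "noop"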

Versioning

Each indexed document is given a version number. By default, internal versioning is used that starts at 1 and increments with each update, deletes included. Optionally, the version number can be set to an external value (for example, if maintained in a database). To enable this functionality, version_type should be set to external. The value provided must be a numeric, long value greater than or equal to 0, and less than around 9.2e+18.

NOTE: Versioning is completely real time, and is not affected by the near real time aspects of search operations. If no version is provided, the operation runs without any version checks.

When using the external version type, the system checks to see if the version number passed to the index request is greater than the version of the currently stored document. If true, the document will be indexed and the new version number used. If the value provided is less than or equal to the stored document's version number, a version conflict will occur and the index operation will fail. For example:

PUT my-index-000001/_doc/1?version=2&version_type=external
{
  "user": {
    "id": "elkbee"
  }
}

In this example, the operation will succeed since the supplied version of 2 is higher than the current document version of 1. If the document was already updated and its version was set to 2 or higher, the indexing command will fail and result in a conflict (409 HTTP status code).

A nice side effect is that there is no need to maintain strict ordering of async indexing operations run as a result of changes to a source database, as long as version numbers from the source database are used. Even the simple case of updating the Elasticsearch index using data from a database is simplified if external versioning is used, as only the latest version will be used if the index operations arrive out of order.

Required authorization

  • Index privileges: index
External documentation

Path parameters

  • index string Required

    The name of the data stream or index to target. If the target doesn't exist and matches the name or wildcard (*) pattern of an index template with a data_stream definition, this request creates the data stream. If the target doesn't exist and doesn't match a data stream template, this request creates the index. You can check for existing targets with the resolve index API.

  • id string Required

    A unique identifier for the document. To automatically generate a document ID, use the POST /<target>/_doc/ request format and omit this parameter.

Query parameters

  • if_primary_term number

    Only perform the operation if the document has this primary term.

  • if_seq_no number

    Only perform the operation if the document has this sequence number.

  • include_source_on_error boolean

    If true, the document source is included in the error message in case of parsing errors.

  • op_type string

    Set to create to only index the document if it does not already exist (put if absent). If a document with the specified _id already exists, the indexing operation will fail. The behavior is the same as using the <index>/_create endpoint. If a document ID is specified, this parameter defaults to index. Otherwise, it defaults to create. If the request targets a data stream, an op_type of create is required.

    Supported values include:

    • index: Overwrite any documents that already exist.
    • create: Only index documents that do not already exist.

    Values are index or create.

  • pipeline string

    The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, then setting the value to _none disables the default ingest pipeline for this request. If a final pipeline is configured it will always run, regardless of the value of this parameter.

  • refresh string

    If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes.

    Values are true, false, or wait_for.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • timeout string

    The period the request waits for the following operations: automatic index creation, dynamic mapping updates, waiting for active shards.

    This parameter is useful for situations where the primary shard assigned to perform the operation might not be available when the operation runs. Some reasons for this might be that the primary shard is currently recovering from a gateway or undergoing relocation. By default, the operation will wait on the primary shard to become available for at least 1 minute before failing and responding with an error. The actual wait time could be longer, particularly when multiple waits occur.

    Values are -1 or 0.

  • version number

    An explicit version number for concurrency control. It must be a non-negative long number.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. You can set it to all or any positive integer up to the total number of copies for each shard in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active.

    Values are all or index-setting.

  • require_alias boolean

    If true, the destination must be an index alias.

  • require_data_stream boolean

    If true, the request's actions must target a data stream (existing or to be created).

application/json

Body Required

object object

Responses

  • 200 application/json
    • _id string Required

      The unique identifier for the added document.

    • _index string Required

      The name of the index the document was added to.

    • _primary_term number

      The primary term assigned to the document for the indexing operation.

    • result string Required

      The result of the indexing operation: created or updated.

      Values are created, updated, deleted, not_found, or noop.

    • _seq_no number

      The sequence number assigned to the document for the indexing operation. Sequence numbers are used to ensure an older version of a document doesn't overwrite a newer version.

    • _shards object Required

      Information about the replication process of the operation.

      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • _version number Required

      The document version, which is incremented each time the document is updated.

    • forced_refresh boolean
POST my-index-000001/_doc/
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
resp = client.index(
    index="my-index-000001",
    document={
        "@timestamp": "2099-11-15T13:12:00",
        "message": "GET /search HTTP/1.1 200 1070000",
        "user": {
            "id": "kimchy"
        }
    },
)
const response = await client.index({
  index: "my-index-000001",
  document: {
    "@timestamp": "2099-11-15T13:12:00",
    message: "GET /search HTTP/1.1 200 1070000",
    user: {
      id: "kimchy",
    },
  },
});
response = client.index(
  index: "my-index-000001",
  body: {
    "@timestamp": "2099-11-15T13:12:00",
    "message": "GET /search HTTP/1.1 200 1070000",
    "user": {
      "id": "kimchy"
    }
  }
)
$resp = $client->index([
    "index" => "my-index-000001",
    "body" => [
        "@timestamp" => "2099-11-15T13:12:00",
        "message" => "GET /search HTTP/1.1 200 1070000",
        "user" => [
            "id" => "kimchy",
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"@timestamp":"2099-11-15T13:12:00","message":"GET /search HTTP/1.1 200 1070000","user":{"id":"kimchy"}}' "$ELASTICSEARCH_URL/my-index-000001/_doc/"
Request examples
Run `POST my-index-000001/_doc/` to index a document. When you use the `POST /<target>/_doc/` request format, the `op_type` is automatically set to `create` and the index operation generates a unique ID for the document.
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
Run `PUT my-index-000001/_doc/1` to insert a JSON document into the `my-index-000001` index with an `_id` of 1.
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
Response examples (200)
A successful response from `POST my-index-000001/_doc/`, which contains an automated document ID.
{
  "_shards": {
    "total": 2,
    "failed": 0,
    "successful": 2
  },
  "_index": "my-index-000001",
  "_id": "W0tpsmIBdwcYyG50zbta",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "result": "created"
}
A successful response from `PUT my-index-000001/_doc/1`.
{
  "_shards": {
    "total": 2,
    "failed": 0,
    "successful": 2
  },
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "result": "created"
}

Enrich

The enrich APIs enable you to manage enrich policies. An enrich policy is a set of configuration options used to add the right enrich data to the right incoming documents.

Delete data stream lifecycles Generally available; Added in 8.11.0

DELETE /_data_stream/{name}/_lifecycle

Removes the data stream lifecycle from a data stream, so that the data stream is no longer managed by the data stream lifecycle.

External documentation

Path parameters

  • name string | array[string] Required

    A comma-separated list of data streams from which the data stream lifecycle will be deleted; use * to target all data streams

Query parameters

  • expand_wildcards string | array[string]

    Whether wildcard expressions should be expanded to open or closed indices (default: open)

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • master_timeout string

    Specify timeout for connection to master

    Values are -1 or 0.

  • timeout string

    Explicit timeout for the operation

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_data_stream/{name}/_lifecycle
DELETE _data_stream/my-data-stream/_lifecycle
resp = client.indices.delete_data_lifecycle(
    name="my-data-stream",
)
const response = await client.indices.deleteDataLifecycle({
  name: "my-data-stream",
});
response = client.indices.delete_data_lifecycle(
  name: "my-data-stream"
)
$resp = $client->indices()->deleteDataLifecycle([
    "name" => "my-data-stream",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/my-data-stream/_lifecycle"
Response examples (200)
A successful response for deleting a data stream lifecycle.
{
  "acknowledged": true
}
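As a hedged sketch, the same call can target several data streams at once with a wildcard and the expand_wildcards parameter described above (the `logs-*` pattern is hypothetical):
resp = client.indices.delete_data_lifecycle(
    name="logs-*",
    expand_wildcards="open,hidden",  # also match hidden data streams
)
# acknowledged is always true on success
print(resp["acknowledged"])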

Create an Anthropic inference endpoint Generally available; Added in 8.16.0

PUT /_inference/{task_type}/{anthropic_inference_id}

Create an inference endpoint to perform an inference task with the anthropic service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The task type. The only valid task type for the model to perform is completion.

    Value is completion.

  • anthropic_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be lower than 20 (for sentence strategy) or 10 (for word strategy). This value should not exceed the window size for the associated model.

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string

      Only applicable to the recursive strategy and required when using it.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string]

      Only applicable to the recursive strategy and required when using it.

      A list of strings used as possible split points when chunking text.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none, or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

      External documentation
  • service string Required

    The type of service supported for the specified task type. In this case, anthropic.

    Value is anthropic.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the anthropic service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key for the Anthropic API.

    • model_id string Required

      The name of the model to use for the inference task. Refer to the Anthropic documentation for the list of supported models.

    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from Anthropic. By default, the anthropic service sets the number of requests allowed per minute to 50.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • llama service: 3000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120
  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attributes Show task_settings attributes object
    • max_tokens number Required

      For a completion task, it is the maximum number of tokens to generate before stopping.

    • temperature number

      For a completion task, it is the amount of randomness injected into the response. For more details about the supported range, refer to Anthropic documentation.

      External documentation
    • top_k number

      For a completion task, it specifies to only sample from the top K options for each subsequent token. It is recommended for advanced use cases only. You usually only need to use temperature.

    • top_p number

      For a completion task, it specifies to use Anthropic's nucleus sampling. In nucleus sampling, Anthropic computes the cumulative distribution over all the options for each subsequent token in decreasing probability order and cuts it off once it reaches the specified probability. You should either alter temperature or top_p, but not both. It is recommended for advanced use cases only. You usually only need to use temperature.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be lower than 20 (for sentence strategy) or 10 (for word strategy). This value should not exceed the window size for the associated model.

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string

        Only applicable to the recursive strategy and required when using it.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string]

        Only applicable to the recursive strategy and required when using it.

        A list of strings used as possible split points when chunking text.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none, or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Value is completion.

PUT /_inference/{task_type}/{anthropic_inference_id}
PUT _inference/completion/anthropic_completion
{
    "service": "anthropic",
    "service_settings": {
        "api_key": "Anthropic-Api-Key",
        "model_id": "Model-ID"
    },
    "task_settings": {
        "max_tokens": 1024
    }
}
resp = client.inference.put(
    task_type="completion",
    inference_id="anthropic_completion",
    inference_config={
        "service": "anthropic",
        "service_settings": {
            "api_key": "Anthropic-Api-Key",
            "model_id": "Model-ID"
        },
        "task_settings": {
            "max_tokens": 1024
        }
    },
)
const response = await client.inference.put({
  task_type: "completion",
  inference_id: "anthropic_completion",
  inference_config: {
    service: "anthropic",
    service_settings: {
      api_key: "Anthropic-Api-Key",
      model_id: "Model-ID",
    },
    task_settings: {
      max_tokens: 1024,
    },
  },
});
response = client.inference.put(
  task_type: "completion",
  inference_id: "anthropic_completion",
  body: {
    "service": "anthropic",
    "service_settings": {
      "api_key": "Anthropic-Api-Key",
      "model_id": "Model-ID"
    },
    "task_settings": {
      "max_tokens": 1024
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "completion",
    "inference_id" => "anthropic_completion",
    "body" => [
        "service" => "anthropic",
        "service_settings" => [
            "api_key" => "Anthropic-Api-Key",
            "model_id" => "Model-ID",
        ],
        "task_settings" => [
            "max_tokens" => 1024,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"anthropic","service_settings":{"api_key":"Anthropic-Api-Key","model_id":"Model-ID"},"task_settings":{"max_tokens":1024}}' "$ELASTICSEARCH_URL/_inference/completion/anthropic_completion"
Request example
Run `PUT _inference/completion/anthropic_completion` to create an inference endpoint that performs a completion task.
{
    "service": "anthropic",
    "service_settings": {
        "api_key": "Anthropic-Api-Key",
        "model_id": "Model-ID"
    },
    "task_settings": {
        "max_tokens": 1024
    }
}
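The task_settings block can also carry the sampling options described above. A minimal sketch with an illustrative temperature (the value is an assumption, not a recommendation; alter either temperature or top_p, not both):
resp = client.inference.put(
    task_type="completion",
    inference_id="anthropic_completion",
    inference_config={
        "service": "anthropic",
        "service_settings": {
            "api_key": "Anthropic-Api-Key",
            "model_id": "Model-ID",
        },
        "task_settings": {
            "max_tokens": 1024,
            "temperature": 0.7,  # illustrative; see the Anthropic docs for the supported range
        },
    },
)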

Create a Google AI Studio inference endpoint Generally available; Added in 8.15.0

PUT /_inference/{task_type}/{googleaistudio_inference_id}

Create an inference endpoint to perform an inference task with the googleaistudio service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are completion or text_embedding.

  • googleaistudio_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be lower than 20 (for sentence strategy) or 10 (for word strategy). This value should not exceed the window size for the associated model.

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string

      Only applicable to the recursive strategy and required when using it.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string]

      Only applicable to the recursive strategy and required when using it.

      A list of strings used as possible split points when chunking text.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none, or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

      External documentation
  • service string Required

    The type of service supported for the specified task type. In this case, googleaistudio.

    Value is googleaistudio.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the googleaistudio service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key of your Google Gemini account.

    • model_id string Required

      The name of the model to use for the inference task. Refer to the Google documentation for the list of supported models.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from Google AI Studio. By default, the googleaistudio service sets the number of requests allowed per minute to 360.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • llama service: 3000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be lower than 20 (for sentence strategy) or 10 (for word strategy). This value should not exceed the window size for the associated model.

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string

        Only applicable to the recursive strategy and required when using it.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string]

        Only applicable to the recursive strategy and required when using it.

        A list of strings used as possible split points when chunking text.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none, or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are text_embedding or completion.

PUT /_inference/{task_type}/{googleaistudio_inference_id}
PUT _inference/completion/google_ai_studio_completion
{
    "service": "googleaistudio",
    "service_settings": {
        "api_key": "api-key",
        "model_id": "model-id"
    }
}
resp = client.inference.put(
    task_type="completion",
    inference_id="google_ai_studio_completion",
    inference_config={
        "service": "googleaistudio",
        "service_settings": {
            "api_key": "api-key",
            "model_id": "model-id"
        }
    },
)
const response = await client.inference.put({
  task_type: "completion",
  inference_id: "google_ai_studio_completion",
  inference_config: {
    service: "googleaistudio",
    service_settings: {
      api_key: "api-key",
      model_id: "model-id",
    },
  },
});
response = client.inference.put(
  task_type: "completion",
  inference_id: "google_ai_studio_completion",
  body: {
    "service": "googleaistudio",
    "service_settings": {
      "api_key": "api-key",
      "model_id": "model-id"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "completion",
    "inference_id" => "google_ai_studio_completion",
    "body" => [
        "service" => "googleaistudio",
        "service_settings" => [
            "api_key" => "api-key",
            "model_id" => "model-id",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"googleaistudio","service_settings":{"api_key":"api-key","model_id":"model-id"}}' "$ELASTICSEARCH_URL/_inference/completion/google_ai_studio_completion"
Request example
Run `PUT _inference/completion/google_ai_studio_completion` to create an inference endpoint to perform a `completion` task type.
{
    "service": "googleaistudio",
    "service_settings": {
        "api_key": "api-key",
        "model_id": "model-id"
    }
}
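Because the googleaistudio service also supports the text_embedding task type, a hedged sketch pairing it with the recursive chunking strategy described above (the endpoint ID, key, and model are placeholders):
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="google_ai_studio_embeddings",  # hypothetical endpoint ID
    inference_config={
        "service": "googleaistudio",
        "service_settings": {
            "api_key": "api-key",
            "model_id": "model-id",
        },
        "chunking_settings": {
            "strategy": "recursive",
            "max_chunk_size": 250,           # must be specified for the recursive strategy
            "separator_group": "plaintext",  # alternative to a custom separators list
        },
    },
)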

Update an inference endpoint Generally available; Added in 8.17.0

PUT /_inference/{task_type}/{inference_id}/_update

All methods and paths for this operation:

PUT /_inference/{inference_id}/_update

PUT /_inference/{task_type}/{inference_id}/_update

Modify task_settings, secrets (within service_settings), or num_allocations for an inference endpoint, depending on the specific endpoint service and task_type.

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string Required

    The type of inference task that the model performs.

    Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

  • inference_id string Required

    The unique identifier of the inference endpoint.

application/json

Body Required

  • chunking_settings object

    Chunking configuration object

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be lower than 20 (for sentence strategy) or 10 (for word strategy). This value should not exceed the window size for the associated model.

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string

      Only applicable to the recursive strategy and required when using it.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string]

      Only applicable to the recursive strategy and required when using it.

      A list of strings used as possible split points when chunking text.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none, or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

      External documentation
  • service string Required

    The service type

  • service_settings object Required

    Settings specific to the service

  • task_settings object

    Task settings specific to the service and task type

Responses

  • 200 application/json
    Hide response attributes Show response attributes object

    Represents an inference endpoint as returned by the GET API

    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be lower than 20 (for sentence strategy) or 10 (for word strategy). This value should not exceed the window size for the associated model.

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string

        Only applicable to the recursive strategy and required when using it.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string]

        Only applicable to the recursive strategy and required when using it.

        A list of strings used as possible split points when chunking text.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none, or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

PUT /_inference/{task_type}/{inference_id}/_update
PUT _inference/my-inference-endpoint/_update
{
 "service_settings": {
   "api_key": "<API_KEY>"
 }
}
resp = client.inference.update(
    inference_id="my-inference-endpoint",
    inference_config={
        "service_settings": {
            "api_key": "<API_KEY>"
        }
    },
)
const response = await client.inference.update({
  inference_id: "my-inference-endpoint",
  inference_config: {
    service_settings: {
      api_key: "<API_KEY>",
    },
  },
});
response = client.inference.update(
  inference_id: "my-inference-endpoint",
  body: {
    "service_settings": {
      "api_key": "<API_KEY>"
    }
  }
)
$resp = $client->inference()->update([
    "inference_id" => "my-inference-endpoint",
    "body" => [
        "service_settings" => [
            "api_key" => "<API_KEY>",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service_settings":{"api_key":"<API_KEY>"}}' "$ELASTICSEARCH_URL/_inference/my-inference-endpoint/_update"
Request example
An example body for a `PUT _inference/my-inference-endpoint/_update` request.
{
 "service_settings": {
   "api_key": "<API_KEY>"
 }
}
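When the task type is known, it can be supplied as the extra path parameter shown above. A hedged sketch updating task_settings for the Anthropic endpoint created earlier (assuming the Python client exposes the optional task_type argument):
resp = client.inference.update(
    inference_id="anthropic_completion",
    task_type="completion",
    inference_config={
        "task_settings": {
            "max_tokens": 2048,  # illustrative new value
        }
    },
)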

Get datafeeds configuration info Generally available; Added in 5.5.0

GET /_ml/datafeeds/{datafeed_id}

All methods and paths for this operation:

GET /_ml/datafeeds

GET /_ml/datafeeds/{datafeed_id}

You can get information for multiple datafeeds in a single API request by using a comma-separated list of datafeeds or a wildcard expression. You can get information for all datafeeds by using _all, by specifying * as the <datafeed_id>, or by omitting the <datafeed_id>. This API returns a maximum of 10,000 datafeeds.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • datafeed_id string | array[string] Required

    Identifier for the datafeed. It can be a datafeed identifier or a wildcard expression. If you do not specify one of these options, the API returns information about all datafeeds.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    1. Contains wildcard expressions and there are no datafeeds that match.
    2. Contains the _all string or no identifiers and there are no matches.
    3. Contains wildcard expressions and there are only partial matches.

    The default value is true, which returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.

  • exclude_generated boolean

    Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • datafeeds array[object] Required
      Hide datafeeds attributes Show datafeeds attributes object
      • aggregations object
      • authorization object

        The security privileges that the datafeed uses to run its queries. If Elastic Stack security features were disabled at the time of the most recent update to the datafeed, this property is omitted.

        Hide authorization attributes Show authorization attributes object
        • api_key object

          If an API key was used for the most recent update to the datafeed, its name and identifier are listed in the response.

        • roles array[string]

          If a user ID was used for the most recent update to the datafeed, its roles at the time of the update are listed in the response.

        • service_account string

          If a service account was used for the most recent update to the datafeed, the account name is listed in the response.

      • chunking_config object
        Hide chunking_config attributes Show chunking_config attributes object
        • mode string Required

          If the mode is auto, the chunk size is dynamically calculated; this is the recommended value when the datafeed does not use aggregations. If the mode is manual, chunking is applied according to the specified time_span; use this mode when the datafeed uses aggregations. If the mode is off, no chunking is applied.

          Values are auto, manual, or off.

        • time_span string

          The time span that each search will be querying. This setting is applicable only when the mode is set to manual.

      • datafeed_id string Required
      • frequency string

        The interval at which scheduled queries are made while the datafeed runs in real time. The default value is either the bucket span for short bucket spans, or, for longer bucket spans, a sensible fraction of the bucket span. For example: 150s. When frequency is shorter than the bucket span, interim results for the last (partial) bucket are written then eventually overwritten by the full bucket results. If the datafeed uses aggregations, this value must be divisible by the interval of the date histogram aggregation.

      • indices array[string] Required
      • indexes array[string]
      • job_id string Required
      • max_empty_searches number
      • query_delay string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • script_fields object
        Hide script_fields attribute Show script_fields attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • script object Required
          • ignore_failure boolean
      • scroll_size number
      • delayed_data_check_config object Required
        Hide delayed_data_check_config attributes Show delayed_data_check_config attributes object
        • check_window string

          The window of time that is searched for late data. This window of time ends with the latest finalized bucket. It defaults to null, which causes an appropriate check_window to be calculated when the real-time datafeed runs. In particular, the default check_window span calculation is based on the maximum of 2h or 8 * bucket_span.

        • enabled boolean Required

          Specifies whether the datafeed periodically checks for delayed data.

      • runtime_mappings object
        Hide runtime_mappings attribute Show runtime_mappings attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • fields object

            For type composite

          • fetch_fields array[object]

            For type lookup

          • format string

            A custom format for date type runtime fields.

      • indices_options object

        Controls how to deal with unavailable concrete indices (closed or missing), how wildcard expressions are expanded to actual indices (all, closed or open indices) and how to deal with wildcard expressions that resolve to no indices.

        Hide indices_options attributes Show indices_options attributes object
        • allow_no_indices boolean

          If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

        • expand_wildcards string | array[string]

          Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

          Supported values include:

          • all: Match any data stream or index, including hidden ones.
          • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
          • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
          • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
          • none: Wildcard expressions are not accepted.
        • ignore_unavailable boolean

          If true, missing or closed indices are not included in the response.

          Default value is false.

        • ignore_throttled boolean

          If true, concrete, expanded or aliased indices are ignored when frozen.

          Default value is true.

      • query object Required

        The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. By default, this property has the following value: {"match_all": {"boost": 1}}.

        Query DSL
GET /_ml/datafeeds/{datafeed_id}
GET _ml/datafeeds/datafeed-high_sum_total_sales
resp = client.ml.get_datafeeds(
    datafeed_id="datafeed-high_sum_total_sales",
)
const response = await client.ml.getDatafeeds({
  datafeed_id: "datafeed-high_sum_total_sales",
});
response = client.ml.get_datafeeds(
  datafeed_id: "datafeed-high_sum_total_sales"
)
$resp = $client->ml()->getDatafeeds([
    "datafeed_id" => "datafeed-high_sum_total_sales",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/datafeeds/datafeed-high_sum_total_sales"
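To list every datafeed, omit the identifier. A hedged sketch combining the query parameters described above:
resp = client.ml.get_datafeeds(
    allow_no_match=True,     # return an empty array instead of a 404 when nothing matches
    exclude_generated=True,  # strip generated fields so the config can be re-created on another cluster
)
print(resp["count"])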

Delete forecasts from a job Generally available; Added in 6.5.0

DELETE /_ml/anomaly_detectors/{job_id}/_forecast/{forecast_id}

All methods and paths for this operation:

DELETE /_ml/anomaly_detectors/{job_id}/_forecast

DELETE /_ml/anomaly_detectors/{job_id}/_forecast/{forecast_id}

By default, forecasts are retained for 14 days. You can specify a different retention period with the expires_in parameter in the forecast jobs API. The delete forecast API enables you to delete one or more forecasts before they expire.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job.

  • forecast_id string Required

    A comma-separated list of forecast identifiers. If you do not specify this optional parameter, or if you specify _all or *, the API deletes all forecasts from the job.

Query parameters

  • allow_no_forecasts boolean

    Specifies whether an error occurs when there are no forecasts. In particular, if this parameter is set to false and there are no forecasts associated with the job, attempts to delete all forecasts return an error.

  • timeout string

    Specifies the period of time to wait for the completion of the delete operation. When this period of time elapses, the API fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ml/anomaly_detectors/{job_id}/_forecast/{forecast_id}
DELETE _ml/anomaly_detectors/total-requests/_forecast/_all
resp = client.ml.delete_forecast(
    job_id="total-requests",
    forecast_id="_all",
)
const response = await client.ml.deleteForecast({
  job_id: "total-requests",
  forecast_id: "_all",
});
response = client.ml.delete_forecast(
  job_id: "total-requests",
  forecast_id: "_all"
)
$resp = $client->ml()->deleteForecast([
    "job_id" => "total-requests",
    "forecast_id" => "_all",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/anomaly_detectors/total-requests/_forecast/_all"
Response examples (200)
A successful response when deleting a forecast from an anomaly detection job.
{
  "acknowledged": true
}
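A hedged sketch of deleting a single forecast instead of all of them (the forecast identifier is hypothetical):
resp = client.ml.delete_forecast(
    job_id="total-requests",
    forecast_id="forecast-id-1",  # hypothetical single forecast ID
    allow_no_forecasts=False,     # error if the job has no forecasts
    timeout="30s",
)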

Explain data frame analytics config Generally available; Added in 7.3.0

POST /_ml/data_frame/analytics/{id}/_explain

All methods and paths for this operation:

GET /_ml/data_frame/analytics/_explain

POST /_ml/data_frame/analytics/_explain

GET /_ml/data_frame/analytics/{id}/_explain

POST /_ml/data_frame/analytics/{id}/_explain

This API provides explanations for a data frame analytics config that either exists already or one that has not been created yet. The following explanations are provided:

  • which fields are included or not in the analysis and why,
  • how much memory is estimated to be required. The estimate can be used when deciding the appropriate value for the model_memory_limit setting later on. If you have object fields or fields that are excluded via source filtering, they are not included in the explanation. An example call follows.
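For instance, a hedged Python sketch of explaining a config that has not been created yet (the source index is hypothetical, and outlier detection is left at its defaults):
resp = client.ml.explain_data_frame_analytics(
    source={"index": "my-source-index"},  # hypothetical index
    analysis={"outlier_detection": {}},
)
# the memory estimate can inform the model_memory_limit setting
print(resp["memory_estimation"])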

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • id string Required

    Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.

application/json

Body

  • source object

    The configuration of how to source the analysis data. It requires an index. Optionally, query and _source may be specified.

    Hide source attributes Show source attributes object
    • index string | array[string] Required

      Index or indices on which to perform the analysis. It can be a single index or index pattern as well as an array of indices or patterns. NOTE: If your source indices contain documents with the same IDs, only the document that is indexed last appears in the destination index.

    • runtime_mappings object

      Definitions of runtime fields that will become part of the mapping of the destination index.

      Hide runtime_mappings attribute Show runtime_mappings attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
        • fetch_fields array[object]

          For type lookup

        • format string

          A custom format for date type runtime fields.

        • input_field string

          For type lookup

        • target_field string

          For type lookup

        • target_index string

          For type lookup

        • script object

          Painless script executed at query time.

        • type string Required

          Field type, which can be: boolean, composite, date, double, geo_point, ip, keyword, long, or lookup.

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • _source object

      Specify includes and/or excludes patterns to select which fields will be present in the destination. Fields that are excluded cannot be included in the analysis.

      Hide _source attributes Show _source attributes object
      • includes array[string]

        An array of strings that defines the fields that will be included in the analysis.

      • excludes array[string]

        An array of strings that defines the fields that will be excluded from the analysis. You do not need to add fields with unsupported data types to excludes; these fields are excluded from the analysis automatically.

    • query object

      The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. By default, this property has the following value: {"match_all": {}}.

      Query DSL
  • dest object

    The destination configuration, consisting of index and optionally results_field (ml by default).

    Hide dest attributes Show dest attributes object
    • index string Required

      Defines the destination index to store the results of the data frame analytics job.

    • results_field string

      Defines the name of the field in which to store the results of the analysis. Defaults to ml.

  • analysis object

    The analysis configuration, which contains the information necessary to perform one of the following types of analysis: classification, outlier detection, or regression.

    Hide analysis attributes Show analysis attributes object
    • classification object

      The configuration information necessary to perform classification.

      Hide classification attributes Show classification attributes object
      • alpha number

        Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This parameter affects loss calculations by acting as a multiplier of the tree depth. Higher alpha values result in shallower trees and faster training times. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to zero.

      • dependent_variable string Required

        Defines which field of the document is to be predicted. It must match one of the fields in the index being used to train. If this field is missing from a document, then that document will not be used for training, but a prediction with the trained model will be generated for it. It is also known as continuous target variable. For classification analysis, the data type of the field must be numeric (integer, short, long, byte), categorical (ip or keyword), or boolean. There must be no more than 30 different values in this field. For regression analysis, the data type of the field must be numeric.

      • downsample_factor number

        Advanced configuration option. Controls the fraction of data that is used to compute the derivatives of the loss function for tree training. A small value results in the use of a small fraction of the data. If this value is set to be less than 1, accuracy typically improves. However, too small a value may result in poor convergence for the ensemble and so require more trees. By default, this value is calculated during hyperparameter optimization. It must be greater than zero and less than or equal to 1.

      • early_stopping_enabled boolean

        Advanced configuration option. Specifies whether the training process should finish if it is not finding any better performing models. If disabled, the training process can take significantly longer and the chance of finding a better performing model is unremarkable.

        Default value is true.

      • eta number

        Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, larger forests cause slower training. By default, this value is calculated during hyperparameter optimization. It must be a value between 0.001 and 1.

      • eta_growth_rate_per_tree number

        Advanced configuration option. Specifies the rate at which eta increases for each new tree that is added to the forest. For example, a rate of 1.05 increases eta by 5% for each extra tree. By default, this value is calculated during hyperparameter optimization. It must be between 0.5 and 2.

      • feature_bag_fraction number

        Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.

      • feature_processors array[object]

        Advanced configuration option. A collection of feature preprocessors that modify one or more included fields. The analysis uses the resulting one or more features instead of the original document field. However, these features are ephemeral; they are not stored in the destination index. Multiple feature_processors entries can refer to the same document fields. Automatic categorical feature encoding still occurs for the fields that are unprocessed by a custom processor or that have categorical values. Use this property only if you want to override the automatic feature encoding of the specified fields.

      • gamma number

        Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. A high gamma value causes training to prefer small trees. A small gamma value results in larger individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

      • lambda number

        Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. A high lambda value causes training to favor small leaf weights. This behavior makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. A small lambda value results in large individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

      • max_optimization_rounds_per_hyperparameter number

        Advanced configuration option. A multiplier responsible for determining the maximum number of hyperparameter optimization steps in the Bayesian optimization procedure. The maximum number of steps is determined based on the number of undefined hyperparameters times the maximum optimization rounds per hyperparameter. By default, this value is calculated during hyperparameter optimization.

      • max_trees number

        Advanced configuration option. Defines the maximum number of decision trees in the forest. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.

      • num_top_feature_importance_values number

        Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, no feature importance calculation occurs.

        Default value is 0.

      • prediction_field_name string

        Defines the name of the prediction field in the results. Defaults to <dependent_variable>_prediction.

      • randomize_seed number

        Defines the seed for the random generator that is used to pick training data. By default, it is randomly generated. Set it to a specific value to use the same training data each time you start a job (assuming other related parameters such as source and analyzed_fields are the same).

      • soft_tree_depth_limit number

        Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This soft limit combines with the soft_tree_depth_tolerance to penalize trees that exceed the specified depth; the regularized loss increases quickly beyond this depth. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.

      • soft_tree_depth_tolerance number

        Advanced configuration option. This option controls how quickly the regularized loss increases when the tree depth exceeds soft_tree_depth_limit. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.01.

      • training_percent number

        Defines what percentage of the eligible documents are used for training.

        Default value is 100.

      • class_assignment_objective string

        Defines the objective to optimize when assigning class labels: maximize_accuracy or maximize_minimum_recall.

        Default value is maximize_minimum_recall.
      • num_top_classes number

        Defines the number of categories for which the predicted probabilities are reported. It must be non-negative or -1. If it is -1 or greater than the total number of categories, probabilities are reported for all categories; if you have a large number of categories, there could be a significant effect on the size of your destination index. NOTE: To use the AUC ROC evaluation method, num_top_classes must be set to -1 or a value greater than or equal to the total number of categories.

        Default value is 2.

    • outlier_detection object

      The configuration information necessary to perform outlier detection. NOTE: Advanced parameters are for fine-tuning outlier detection. They are set automatically to give the minimum validation error. It is highly recommended to use the default values unless you fully understand the function of these parameters.

      Hide outlier_detection attributes Show outlier_detection attributes object
      • compute_feature_influence boolean

        Specifies whether the feature influence calculation is enabled.

        Default value is true.

      • feature_influence_threshold number

        The minimum outlier score that a document needs to have in order to calculate its feature influence score. Value range: 0-1.

        Default value is 0.1.

      • method string

        The method that outlier detection uses. Available methods are lof, ldof, distance_kth_nn, distance_knn, and ensemble. The default value is ensemble, which means that outlier detection uses an ensemble of different methods and normalises and combines their individual outlier scores to obtain the overall outlier score.

        Default value is ensemble.

      • n_neighbors number

        Defines the value for how many nearest neighbors each method of outlier detection uses to calculate its outlier score. When the value is not set, different values are used for different ensemble members. This default behavior helps improve the diversity in the ensemble; only override it if you are confident that the value you choose is appropriate for the data set.

      • outlier_fraction number

        The proportion of the data set that is assumed to be outlying prior to outlier detection. For example, 0.05 means it is assumed that 5% of values are real outliers and 95% are inliers.

      • standardization_enabled boolean

        If true, the following operation is performed on the columns before computing outlier scores: (x_i - mean(x_i)) / sd(x_i).

        Default value is true.

    • regression object

      The configuration information necessary to perform regression. NOTE: Advanced parameters are for fine-tuning regression analysis. They are set automatically by hyperparameter optimization to give the minimum validation error. It is highly recommended to use the default values unless you fully understand the function of these parameters.

      Hide regression attributes Show regression attributes object
      • alpha number

        Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This parameter affects loss calculations by acting as a multiplier of the tree depth. Higher alpha values result in shallower trees and faster training times. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to zero.

      • dependent_variable string Required

        Defines which field of the document is to be predicted. It must match one of the fields in the index being used to train. If this field is missing from a document, then that document will not be used for training, but a prediction with the trained model will be generated for it. It is also known as continuous target variable. For classification analysis, the data type of the field must be numeric (integer, short, long, byte), categorical (ip or keyword), or boolean. There must be no more than 30 different values in this field. For regression analysis, the data type of the field must be numeric.

      • downsample_factor number

        Advanced configuration option. Controls the fraction of data that is used to compute the derivatives of the loss function for tree training. A small value results in the use of a small fraction of the data. If this value is set to be less than 1, accuracy typically improves. However, too small a value may result in poor convergence for the ensemble and so require more trees. By default, this value is calculated during hyperparameter optimization. It must be greater than zero and less than or equal to 1.

      • early_stopping_enabled boolean

        Advanced configuration option. Specifies whether the training process should finish if it is not finding any better performing models. If disabled, the training process can take significantly longer and the chance of finding a better performing model is small.

        Default value is true.

      • eta number

        Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, larger forests cause slower training. By default, this value is calculated during hyperparameter optimization. It must be a value between 0.001 and 1.

      • eta_growth_rate_per_tree number

        Advanced configuration option. Specifies the rate at which eta increases for each new tree that is added to the forest. For example, a rate of 1.05 increases eta by 5% for each extra tree. By default, this value is calculated during hyperparameter optimization. It must be between 0.5 and 2.

      • feature_bag_fraction number

        Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.

      • feature_processors array[object]

        Advanced configuration option. A collection of feature preprocessors that modify one or more included fields. The analysis uses the resulting one or more features instead of the original document field. However, these features are ephemeral; they are not stored in the destination index. Multiple feature_processors entries can refer to the same document fields. Automatic categorical feature encoding still occurs for the fields that are unprocessed by a custom processor or that have categorical values. Use this property only if you want to override the automatic feature encoding of the specified fields.

      • gamma number

        Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. A high gamma value causes training to prefer small trees. A small gamma value results in larger individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

      • lambda number

        Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. A high lambda value causes training to favor small leaf weights. This behavior makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. A small lambda value results in large individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

      • max_optimization_rounds_per_hyperparameter number

        Advanced configuration option. A multiplier responsible for determining the maximum number of hyperparameter optimization steps in the Bayesian optimization procedure. The maximum number of steps is determined based on the number of undefined hyperparameters times the maximum optimization rounds per hyperparameter. By default, this value is calculated during hyperparameter optimization.

      • max_trees number

        Advanced configuration option. Defines the maximum number of decision trees in the forest. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.

      • num_top_feature_importance_values number

        Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, no feature importance calculation occurs.

        Default value is 0.

      • prediction_field_name string

        Defines the name of the prediction field in the results. Defaults to <dependent_variable>_prediction.

      • randomize_seed number

        Defines the seed for the random generator that is used to pick training data. By default, it is randomly generated. Set it to a specific value to use the same training data each time you start a job (assuming other related parameters such as source and analyzed_fields are the same).

      • soft_tree_depth_limit number

        Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This soft limit combines with the soft_tree_depth_tolerance to penalize trees that exceed the specified depth; the regularized loss increases quickly beyond this depth. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.

      • soft_tree_depth_tolerance number

        Advanced configuration option. This option controls how quickly the regularized loss increases when the tree depth exceeds soft_tree_depth_limit. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.01.

      • training_percent number

        Defines what percentage of the eligible documents will be used for training. Documents that are ignored by the analysis (for example those that contain arrays with more than one value) are not included in the calculation.

        Default value is 100.

      • loss_function string

        The loss function used during regression. Available options are mse (mean squared error), msle (mean squared logarithmic error), and huber (Pseudo-Huber loss).

        Default value is mse.

      • loss_function_parameter number

        A positive number that is used as a parameter to the loss_function.
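
To see how these options fit together, here is a hedged sketch of an _explain request that pins several of the advanced regression hyperparameters explicitly. In practice only dependent_variable is required and the remaining values are best left to hyperparameter optimization; the numbers below are illustrative assumptions.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local cluster

resp = client.ml.explain_data_frame_analytics(
    source={"index": "houses_sold_last_10_yrs"},
    analysis={
        "regression": {
            "dependent_variable": "price",   # required; the numeric field to predict
            "loss_function": "huber",        # instead of the default mse
            "loss_function_parameter": 1.0,  # positive parameter for the loss function (assumed value)
            "eta": 0.05,                     # shrinkage; must lie between 0.001 and 1
            "max_trees": 500,                # forest size cap (hard maximum is 2000)
            "training_percent": 80,          # hold back 20% of eligible documents for testing
            "randomize_seed": 42,            # reproducible training-data selection
        }
    },
)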

  • description string

    A description of the job.

  • model_memory_limit string

    The approximate maximum amount of memory resources that are permitted for analytical processing. If your elasticsearch.yml file contains an xpack.ml.max_model_memory_limit setting, an error occurs when you try to create data frame analytics jobs that have model_memory_limit values greater than that setting.

    Default value is 1gb.

  • max_num_threads number

    The maximum number of threads to be used by the analysis. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself.

    Default value is 1.

  • analyzed_fields object

    Specify includes and/or excludes patterns to select which fields will be included in the analysis. The patterns specified in excludes are applied last; therefore, excludes takes precedence. In other words, if the same field is specified in both includes and excludes, then the field will not be included in the analysis.

    Hide analyzed_fields attributes Show analyzed_fields attributes object
    • includes array[string]

      An array of strings that defines the fields that will be included in the analysis.

    • excludes array[string]

      An array of strings that defines the fields that will be excluded from the analysis. You do not need to add fields with unsupported data types to excludes; these fields are excluded from the analysis automatically.

  • allow_lazy_start boolean

    Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node.

    Default value is false.
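
Putting the job-level options together, the sketch below sends a fuller configuration to the _explain API via the Python client; the description, field patterns, and limits are illustrative assumptions.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local cluster

resp = client.ml.explain_data_frame_analytics(
    description="Price regression over the last decade of sales",  # free-text description
    source={"index": "houses_sold_last_10_yrs"},
    analysis={"regression": {"dependent_variable": "price"}},
    analyzed_fields={
        "includes": ["price", "number_of_bedrooms", "postcode*"],
        # excludes is applied last, so "postcode" stays out even though it
        # also matches the "postcode*" pattern in includes
        "excludes": ["postcode"],
    },
    model_memory_limit="1gb",  # the default; capped by xpack.ml.max_model_memory_limit if set
    max_num_threads=1,         # the default
    allow_lazy_start=False,    # fail fast when no ML node capacity is available
)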

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • field_selection array[object] Required

      An array of objects that explain selection for each field, sorted by the field names.

      Hide field_selection attributes Show field_selection attributes object
      • is_included boolean Required

        Whether the field is selected to be included in the analysis.

      • is_required boolean Required

        Whether the field is required.

      • feature_type string

        The feature type of this field for the analysis. May be categorical or numerical.

      • mapping_types array[string] Required

        The mapping types of the field.

      • name string Required

        The field name.

      • reason string

        The reason a field is not selected to be included in the analysis.

    • memory_estimation object Required

      An object containing the memory estimates for the data frame analytics job.

      Hide memory_estimation attributes Show memory_estimation attributes object
      • expected_memory_with_disk string Required

        Estimated memory usage under the assumption that overflowing to disk is allowed during data frame analytics. expected_memory_with_disk is usually smaller than expected_memory_without_disk because using the disk limits the amount of main memory needed for the analysis.

      • expected_memory_without_disk string Required

        Estimated memory usage under the assumption that the whole data frame analytics should happen in memory (i.e. without overflowing to disk).
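
As a usage sketch, the snippet below (reusing the Python client from the examples that follow) walks the response attributes documented above: it reports which fields were selected and why others were skipped, then prints both memory estimates.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local cluster

resp = client.ml.explain_data_frame_analytics(
    source={"index": "houses_sold_last_10_yrs"},
    analysis={"regression": {"dependent_variable": "price"}},
)

# field_selection is sorted by field name; reason is only set for fields
# that were not selected, feature_type only for fields that were.
for field in resp["field_selection"]:
    if field["is_included"]:
        print(f"included: {field['name']} ({field.get('feature_type', 'unknown')})")
    else:
        print(f"skipped:  {field['name']} - {field.get('reason', 'no reason given')}")

mem = resp["memory_estimation"]
print("estimate without disk:", mem["expected_memory_without_disk"])
print("estimate with disk:   ", mem["expected_memory_with_disk"])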

POST /_ml/data_frame/analytics/{id}/_explain
Console:
POST _ml/data_frame/analytics/_explain
{
  "source": {
    "index": "houses_sold_last_10_yrs"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price"
    }
  }
}
Python:
resp = client.ml.explain_data_frame_analytics(
    source={
        "index": "houses_sold_last_10_yrs"
    },
    analysis={
        "regression": {
            "dependent_variable": "price"
        }
    },
)
JavaScript:
const response = await client.ml.explainDataFrameAnalytics({
  source: {
    index: "houses_sold_last_10_yrs",
  },
  analysis: {
    regression: {
      dependent_variable: "price",
    },
  },
});
Ruby:
response = client.ml.explain_data_frame_analytics(
  body: {
    "source": {
      "index": "houses_sold_last_10_yrs"
    },
    "analysis": {
      "regression": {
        "dependent_variable": "price"
      }
    }
  }
)
PHP:
$resp = $client->ml()->explainDataFrameAnalytics([
    "body" => [
        "source" => [
            "index" => "houses_sold_last_10_yrs",
        ],
        "analysis" => [
            "regression" => [
                "dependent_variable" => "price",
            ],
        ],
    ],
]);
curl:
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"source":{"index":"houses_sold_last_10_yrs"},"analysis":{"regression":{"dependent_variable":"price"}}}' "$ELASTICSEARCH_URL/_ml/data_frame/analytics/_explain"
Request example
Run `POST _ml/data_frame/analytics/_explain` to explain a data frame analytics job configuration.
{
  "source": {
    "index": "houses_sold_last_10_yrs"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price"
    }
  }
}
Response examples (200)
A successful response for explaining a data frame analytics job configuration.
{
  "field_selection": [
    {
      "field": "number_of_bedrooms",
      "mappings_types": [
        "integer"
      ],
      "is_included": true,
      "is_required": false,
      "feature_type": "numerical"
    },
    {
      "field": "postcode",
      "mappings_types": [
        "text"
      ],
      "is_included": false,
      "is_required": false,
      "reason": "[postcode.keyword] is preferred because it is aggregatable"
    },
    {
      "field": "postcode.keyword",
      "mappings_types": [
        "keyword"
      ],
      "is_included": true,
      "is_required": false,
      "feature_type": "categorical"
    },
    {
      "field": "price",
      "mappings_types": [
        "float"
      ],
      "is_included": true,
      "is_required": true,
      "feature_type": "numerical"
    }
  ],
  "memory_estimation": {
    "expected_memory_without_disk": "128MB",
    "expected_memory_with_disk": "32MB"
  }
}