Ping the cluster Generally available

HEAD /

Get information about whether the cluster is running.

Responses

  • 200 application/json
HEAD /
curl \
 --request HEAD 'http://api.example.com/' \
 --header "Authorization: $API_KEY"
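
If you use the Python client, the same health check is available through the ping helper, which issues this HEAD request and returns a boolean instead of raising on connection failure. A minimal sketch; the URL and API key are placeholders:

from elasticsearch import Elasticsearch

# Placeholders: substitute your own endpoint and API key.
client = Elasticsearch("http://api.example.com", api_key="YOUR_API_KEY")

# ping() sends HEAD / and returns True if the cluster answered with 200.
if client.ping():
    print("cluster is up")
else:
    print("cluster is not reachable")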

Get all connectors Beta; Added in 8.12.0

GET /_connector

Get information about all connectors.

Query parameters

  • from number

    Starting offset (default: 0)

  • size number

    Specifies a max number of results to get

  • index_name string | array[string]

    A comma-separated list of connector index names to fetch connector documents for

  • connector_name string | array[string]

    A comma-separated list of connector names to fetch connector documents for

  • service_type string | array[string]

    A comma-separated list of connector service types to fetch connector documents for

  • query string

    A wildcard query string that filters connectors with matching name, description or index name

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • results array[object] Required
      Hide results attributes Show results attributes object
      • api_key_id string
      • api_key_secret_id string
      • configuration object Required
        Hide configuration attribute Show configuration attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • category string
          • depends_on array[object] Required
          • label string Required
          • options array[object] Required
          • order number
          • placeholder string
          • required boolean Required
          • sensitive boolean Required
          • tooltip
          • ui_restrictions array[string]
          • validations array[object]
          • value object Required
      • custom_scheduling object Required
        Hide custom_scheduling attribute Show custom_scheduling attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • enabled boolean Required
          • interval string Required
          • name string Required
      • description string
      • features object
        Hide features attributes Show features attributes object
        • document_level_security object

          Indicates whether document-level security is enabled.

        • incremental_sync object

          Indicates whether incremental syncs are enabled.

        • native_connector_api_keys object

          Indicates whether managed connector API keys are enabled.

        • sync_rules object
      • filtering array[object] Required
        Hide filtering attributes Show filtering attributes object
        • active object Required
        • domain string
        • draft object Required
      • id string
      • index_name string | null

      • is_native boolean Required
      • language string
      • last_access_control_sync_error string
      • last_access_control_sync_scheduled_at string | number

        One of:
      • last_access_control_sync_status string

        Values are canceling, canceled, completed, error, in_progress, pending, or suspended.

      • last_deleted_document_count number
      • last_incremental_sync_scheduled_at string | number

        One of:
      • last_indexed_document_count number
      • last_seen string | number

        One of:
      • last_sync_error string
      • last_sync_scheduled_at string | number

        One of:
      • last_sync_status string

        Values are canceling, canceled, completed, error, in_progress, pending, or suspended.

      • last_synced string | number

        One of:
      • name string
      • pipeline object
        Hide pipeline attributes Show pipeline attributes object
        • extract_binary_content boolean Required
        • name string Required
        • reduce_whitespace boolean Required
        • run_ml_inference boolean Required
      • scheduling object Required
        Hide scheduling attributes Show scheduling attributes object
        • access_control object
        • full object
        • incremental object
      • service_type string
      • status string Required

        Values are created, needs_configuration, configured, connected, or error.

      • sync_cursor object
      • sync_now boolean Required
GET /_connector
GET _connector
resp = client.connector.list()
const response = await client.connector.list();
response = client.connector.list
$resp = $client->connector()->list();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector"
client.connector().list(l -> l);
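
The query parameters above can be combined to page through and filter the results. A sketch with the Python client (index and service-type values are illustrative; the client uses from_ for the from parameter):

resp = client.connector.list(
    from_=0,
    size=20,
    index_name="search-my-index",      # illustrative index name
    service_type="sharepoint_online",  # illustrative service type
    query="sales*",                    # wildcard match on name, description, or index name
)
print(resp["count"])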

Update the connector pipeline Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_pipeline

Update the pipeline field in the connector document. When you create a new connector, the configuration of an ingest pipeline is populated with default settings.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • pipeline object Required
    Hide pipeline attributes Show pipeline attributes object
    • extract_binary_content boolean Required
    • name string Required
    • reduce_whitespace boolean Required
    • run_ml_inference boolean Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_pipeline
PUT _connector/my-connector/_pipeline
{
    "pipeline": {
        "extract_binary_content": true,
        "name": "my-connector-pipeline",
        "reduce_whitespace": true,
        "run_ml_inference": true
    }
}
resp = client.connector.update_pipeline(
    connector_id="my-connector",
    pipeline={
        "extract_binary_content": True,
        "name": "my-connector-pipeline",
        "reduce_whitespace": True,
        "run_ml_inference": True
    },
)
const response = await client.connector.updatePipeline({
  connector_id: "my-connector",
  pipeline: {
    extract_binary_content: true,
    name: "my-connector-pipeline",
    reduce_whitespace: true,
    run_ml_inference: true,
  },
});
response = client.connector.update_pipeline(
  connector_id: "my-connector",
  body: {
    "pipeline": {
      "extract_binary_content": true,
      "name": "my-connector-pipeline",
      "reduce_whitespace": true,
      "run_ml_inference": true
    }
  }
)
$resp = $client->connector()->updatePipeline([
    "connector_id" => "my-connector",
    "body" => [
        "pipeline" => [
            "extract_binary_content" => true,
            "name" => "my-connector-pipeline",
            "reduce_whitespace" => true,
            "run_ml_inference" => true,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"pipeline":{"extract_binary_content":true,"name":"my-connector-pipeline","reduce_whitespace":true,"run_ml_inference":true}}' "$ELASTICSEARCH_URL/_connector/my-connector/_pipeline"
client.connector().updatePipeline(u -> u
    .connectorId("my-connector")
    .pipeline(p -> p
        .extractBinaryContent(true)
        .name("my-connector-pipeline")
        .reduceWhitespace(true)
        .runMlInference(true)
    )
);
Request example
{
    "pipeline": {
        "extract_binary_content": true,
        "name": "my-connector-pipeline",
        "reduce_whitespace": true,
        "run_ml_inference": true
    }
}
Response examples (200)
{
  "result": "updated"
}

Unfollow an index Generally available; Added in 6.5.0

POST /{index}/_ccr/unfollow

Convert a cross-cluster replication follower index to a regular index. The API stops the following task associated with a follower index and removes index metadata and settings associated with cross-cluster replication. The follower index must be paused and closed before you call the unfollow API.


Currently cross-cluster replication does not support converting an existing regular index to a follower index. Converting a follower index to a regular index is an irreversible operation.

Required authorization

  • Index privileges: manage_follow_index
External documentation

Path parameters

  • index string Required

    The name of the follower index.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /{index}/_ccr/unfollow
POST /follower_index/_ccr/unfollow
resp = client.ccr.unfollow(
    index="follower_index",
)
const response = await client.ccr.unfollow({
  index: "follower_index",
});
response = client.ccr.unfollow(
  index: "follower_index"
)
$resp = $client->ccr()->unfollow([
    "index" => "follower_index",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/follower_index/_ccr/unfollow"
client.ccr().unfollow(u -> u
    .index("follower_index")
);
Response examples (200)
A successful response from `POST /follower_index/_ccr/unfollow`.
{
  "acknowledged" : true
}
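
Because the follower index must be paused and closed before it can be converted, a typical sequence with the Python client looks like the following sketch (the index name is illustrative):

# Pause replication, close the follower, convert it to a regular index, then reopen it.
client.ccr.pause_follow(index="follower_index")
client.indices.close(index="follower_index")
client.ccr.unfollow(index="follower_index")
client.indices.open(index="follower_index")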

Get async ES|QL query results Generally available; Added in 8.13.0

GET /_query/async/{id}

Get the current status and available results or stored results for an ES|QL asynchronous query. If the Elasticsearch security features are enabled, only the user who first submitted the ES|QL query can retrieve the results using this API.

External documentation

Path parameters

  • id string Required

    The unique identifier of the query. A query ID is provided in the ES|QL async query API response for a query that does not complete in the designated time. A query ID is also provided when the request was submitted with the keep_on_completion parameter set to true.

Query parameters

  • drop_null_columns boolean

    Indicates whether columns that are entirely null will be removed from the columns and values portion of the results. If true, the response will include an extra section under the name all_columns which has the name of all the columns.

  • format string

    A short version of the Accept header, for example json or yaml.

    Values are csv, json, tsv, txt, yaml, cbor, smile, or arrow.

  • keep_alive string

    The period for which the query and its results are stored in the cluster. When this period expires, the query and its results are deleted, even if the query is still ongoing.

    Values are -1 or 0.

  • wait_for_completion_timeout string

    The period to wait for the request to finish. By default, the request waits for complete query results. If the request completes during the period specified in this parameter, complete query results are returned. Otherwise, the response returns an is_running value of true and no results.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number

      The time it took to run the query, in milliseconds.

    • is_partial boolean
    • all_columns array[object]
      Hide all_columns attributes Show all_columns attributes object
      • name string Required
      • type string Required
    • columns array[object] Required
      Hide columns attributes Show columns attributes object
      • name string Required
      • type string Required
    • values array[array] Required
    • _clusters object

      Cross-cluster search information. Present if include_ccs_metadata was true in the request and a cross-cluster search was performed.

      Hide _clusters attributes Show _clusters attributes object
      • total number Required
      • successful number Required
      • running number Required
      • skipped number Required
      • partial number Required
      • failed number Required
      • details object Required
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • indices string Required
    • profile object

      Profiling information. Present if profile was true in the request. The contents of this field are currently unstable.

    • id string

      The ID of the async query, to be used in subsequent requests to check the status or retrieve results.

      Also available in the X-Elasticsearch-Async-Id HTTP header.

    • is_running boolean Required

      Indicates whether the async query is still running or has completed.

      Also available in the X-Elasticsearch-Async-Is-Running HTTP header.

GET /_query/async/{id}
GET /_query/async/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=?wait_for_completion_timeout=30s
resp = client.esql.async_query_get(
    id="FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
    wait_for_completion_timeout="30s",
)
const response = await client.esql.asyncQueryGet({
  id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
  wait_for_completion_timeout: "30s",
});
response = client.esql.async_query_get(
  id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
  wait_for_completion_timeout: "30s"
)
$resp = $client->esql()->asyncQueryGet([
    "id" => "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
    "wait_for_completion_timeout" => "30s",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_query/async/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=?wait_for_completion_timeout=30s"
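
Because the query may still be running when you first poll, a common pattern is to check is_running and retry until results are available. A sketch using the Python client; the query ID comes from the async query submission response:

import time

query_id = "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE="

while True:
    resp = client.esql.async_query_get(
        id=query_id,
        wait_for_completion_timeout="30s",
    )
    if not resp["is_running"]:
        break
    time.sleep(1)  # back off briefly before polling again

print(resp["columns"], resp["values"])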

Create or update an alias Generally available

POST /{index}/_aliases/{name}

All methods and paths for this operation:

PUT /{index}/_alias/{name}

POST /{index}/_alias/{name}
PUT /{index}/_aliases/{name}
POST /{index}/_aliases/{name}

Adds a data stream or index to an alias.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams or indices to add. Supports wildcards (*). Wildcard patterns that match both data streams and indices return an error.

  • name string Required

    Alias to update. If the alias doesn’t exist, the request creates it. Index alias names support date math.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

application/json

Body

  • filter object

    Query used to limit documents the alias can access.

    External documentation
  • index_routing string

    Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations. Data stream aliases don’t support this parameter.

  • is_write_index boolean

    If true, sets the write index or data stream for the alias. If an alias points to multiple indices or data streams and is_write_index isn’t set, the alias rejects write requests. If an index alias points to one index and is_write_index isn’t set, the index automatically acts as the write index. Data stream aliases don’t automatically set a write data stream, even if the alias points to one data stream.

  • routing string

    Value used to route indexing and search operations to a specific shard. Data stream aliases don’t support this parameter.

  • search_routing string

    Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations. Data stream aliases don’t support this parameter.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /{index}/_aliases/{name}
POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "my-data-stream",
        "alias": "my-alias"
      }
    }
  ]
}
resp = client.indices.update_aliases(
    actions=[
        {
            "add": {
                "index": "my-data-stream",
                "alias": "my-alias"
            }
        }
    ],
)
const response = await client.indices.updateAliases({
  actions: [
    {
      add: {
        index: "my-data-stream",
        alias: "my-alias",
      },
    },
  ],
});
response = client.indices.update_aliases(
  body: {
    "actions": [
      {
        "add": {
          "index": "my-data-stream",
          "alias": "my-alias"
        }
      }
    ]
  }
)
$resp = $client->indices()->updateAliases([
    "body" => [
        "actions" => array(
            [
                "add" => [
                    "index" => "my-data-stream",
                    "alias" => "my-alias",
                ],
            ],
        ),
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"actions":[{"add":{"index":"my-data-stream","alias":"my-alias"}}]}' "$ELASTICSEARCH_URL/_aliases"
client.indices().updateAliases(u -> u
    .actions(a -> a
        .add(ad -> ad
            .alias("my-alias")
            .index("my-data-stream")
        )
    )
);
Request example
{
  "actions": [
    {
      "add": {
        "index": "my-data-stream",
        "alias": "my-alias"
      }
    }
  ]
}
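
The body parameters above map onto the Python client's put_alias helper. For example, a sketch that adds a filtered alias and marks the index as the write index (index, alias, field, and routing values are illustrative):

resp = client.indices.put_alias(
    index="my-index-000001",
    name="my-filtered-alias",
    is_write_index=True,
    filter={"term": {"user.id": "kimchy"}},  # limits the documents the alias can access
    routing="shard-1",                       # routes indexing and search operations
)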

Get field usage stats Technical preview; Added in 7.15.0

GET /{index}/_field_usage_stats

Get field usage information for each shard and field of an index. Field usage statistics are automatically captured when queries are running on a cluster. A shard-level search request that accesses a given field, even if multiple times during that request, is counted as a single use.

The response body reports the per-shard usage count of the data structures that back the fields in the index. A given request will increment each count by a maximum value of 1, even if the request accesses the same field multiple times.

Required authorization

  • Index privileges: manage

Path parameters

  • index string | array[string] Required

    Comma-separated list or wildcard expression of index names used to limit the request.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If true, missing or closed indices are not included in the response.

  • fields string | array[string]

    Comma-separated list or wildcard expressions of fields to include in the statistics.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • _shards object Required
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
GET /{index}/_field_usage_stats
GET /my-index-000001/_field_usage_stats
resp = client.indices.field_usage_stats(
    index="my-index-000001",
)
const response = await client.indices.fieldUsageStats({
  index: "my-index-000001",
});
response = client.indices.field_usage_stats(
  index: "my-index-000001"
)
$resp = $client->indices()->fieldUsageStats([
    "index" => "my-index-000001",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_field_usage_stats"
client.indices().fieldUsageStats(f -> f
    .index("my-index-000001")
);
Response examples (200)
An abbreviated response from `GET /my-index-000001/_field_usage_stats`. The `all_fields` object reports the sums of the usage counts for all fields in the index (on the listed shard).
{
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "my-index-000001": {
    "shards": [
      {
        "tracking_id": "MpOl0QlTQ4SYYhEe6KgJoQ",
        "tracking_started_at_millis": 1625558985010,
        "routing": {
          "state": "STARTED",
          "primary": true,
          "node": "gA6KeeVzQkGURFCUyV-e8Q",
          "relocating_node": null
        },
        "stats": {
          "all_fields": {
            "any": 6,
            "inverted_index": {
              "terms": 1,
              "postings": 1,
              "proximity": 1,
              "positions": 0,
              "term_frequencies": 1,
              "offsets": 0,
              "payloads": 0
            },
            "stored_fields": 2,
            "doc_values": 1,
            "points": 0,
            "norms": 1,
            "term_vectors": 0,
            "knn_vectors": 0
          },
          "fields": {
            "_id": {
              "any": 1,
              "inverted_index": {
                "terms": 1,
                "postings": 1,
                "proximity": 1,
                "positions": 0,
                "term_frequencies": 1,
                "offsets": 0,
                "payloads": 0
              },
              "stored_fields": 1,
              "doc_values": 0,
              "points": 0,
              "norms": 0,
              "term_vectors": 0,
              "knn_vectors": 0
            },
            "_source": {},
            "context": {},
            "message.keyword": {}
          }
        }
      }
    ]
  }
}
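
To limit the statistics to particular fields, pass the fields parameter. A sketch with an illustrative field pattern:

resp = client.indices.field_usage_stats(
    index="my-index-000001",
    fields="message*,_id",  # comma-separated list or wildcard expression of fields
)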

Update field mappings Generally available

POST /{index}/_mapping

All methods and paths for this operation:

PUT /{index}/_mapping

POST /{index}/_mapping

Add new fields to an existing data stream or index. You can also use this API to change the search settings of existing fields and add new properties to existing object fields. For data streams, these changes are applied to all backing indices by default.

Add multi-fields to an existing field

Multi-fields let you index the same field in different ways. You can use this API to update the fields mapping parameter and enable multi-fields for an existing field. WARNING: If an index (or data stream) contains documents when you add a multi-field, those documents will not have values for the new multi-field. You can populate the new multi-field with the update by query API.
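
For example, the following sketch (field names are illustrative) adds a keyword multi-field to an existing city text field using the Python client:

resp = client.indices.put_mapping(
    index="my-index-000001",
    properties={
        "city": {
            "type": "text",
            "fields": {
                # New multi-field; existing documents need an update-by-query to populate it.
                "raw": {"type": "keyword"}
            },
        }
    },
)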

Change supported mapping parameters for an existing field

The documentation for each mapping parameter indicates whether you can update it for an existing field using this API. For example, you can use the update mapping API to update the ignore_above parameter.

Change the mapping of an existing field

Except for supported mapping parameters, you can't change the mapping or field type of an existing field. Changing an existing field could invalidate data that's already indexed.

If you need to change the mapping of a field in a data stream's backing indices, refer to documentation about modifying data streams. If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index.

Rename a field

Renaming a field would invalidate data already indexed under the old field name. Instead, add an alias field to create an alternate field name.
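
For example, the following sketch (field names are illustrative) registers user_identifier as an alias for an existing user_id field instead of renaming it:

resp = client.indices.put_mapping(
    index="my-index-000001",
    properties={
        "user_identifier": {
            "type": "alias",    # alias field type
            "path": "user_id",  # the existing field the alias points to
        }
    },
)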

Required authorization

  • Index privileges: manage
External documentation

Path parameters

  • index string | array[string] Required

    A comma-separated list of index names the mapping should be added to (supports wildcards); use _all or omit to add the mapping on all indices.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • write_index_only boolean

    If true, the mappings are applied only to the current write index for the target.

application/json

Body Required

  • date_detection boolean

    Controls whether dynamic date detection is enabled.

  • dynamic string

    Controls whether new fields are added dynamically.

    Values are strict, runtime, true, or false.

  • dynamic_date_formats array[string]

    If date detection is enabled, new string fields are checked against 'dynamic_date_formats'; if a value matches, a date field is added instead of a string field.

  • dynamic_templates array[object]

    Specify dynamic templates for the mapping.

  • _field_names object

    Control whether field names are enabled for the index.

    Hide _field_names attribute Show _field_names attribute object
    • enabled boolean Required
  • _meta object

    A mapping type can have custom metadata associated with it. It is not used at all by Elasticsearch, but it can be used to store application-specific metadata.

    Hide _meta attribute Show _meta attribute object
    • * object Additional properties
  • numeric_detection boolean

    Automatically map strings into numeric data types for all fields.

    Default value is false.

  • properties object

    Mapping for a field. For new fields, this mapping can include:

    • Field name
    • Field data type
    • Mapping parameters
  • _routing object

    Enable making a routing value required on indexed documents.

    Hide _routing attribute Show _routing attribute object
    • required boolean Required
  • _source object

    Control whether the _source field is enabled on the index.

    Hide _source attributes Show _source attributes object
    • compress boolean
    • compress_threshold string
    • enabled boolean
    • excludes array[string]
    • includes array[string]
    • mode string

      Supported values include:

      • disabled
      • stored
      • synthetic: Instead of storing source documents on disk exactly as you send them, Elasticsearch can reconstruct source content on the fly upon retrieval.

      Values are disabled, stored, or synthetic.

  • runtime object

    Mapping of runtime fields for the index.

    Hide runtime attribute Show runtime attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • fields object

        For type composite

        Hide fields attribute Show fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        Hide fetch_fields attributes Show fetch_fields attributes object
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • input_field string

        For type lookup

      • target_field string

        For type lookup

      • target_index string

        For type lookup

      • script object

        Painless script executed at query time.

        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang
        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • type string Required

        Field type, which can be: boolean, composite, date, double, geo_point, ip, keyword, long, or lookup.

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

    • _shards object
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required
        • shard number
        • status string
        • primary boolean
      • skipped number
POST /{index}/_mapping
PUT /my-index-000001/_mapping
{
  "properties": {
    "user": {
      "properties": {
        "name": {
          "type": "keyword"
        }
      }
    }
  }
}
resp = client.indices.put_mapping(
    index="my-index-000001",
    properties={
        "user": {
            "properties": {
                "name": {
                    "type": "keyword"
                }
            }
        }
    },
)
const response = await client.indices.putMapping({
  index: "my-index-000001",
  properties: {
    user: {
      properties: {
        name: {
          type: "keyword",
        },
      },
    },
  },
});
response = client.indices.put_mapping(
  index: "my-index-000001",
  body: {
    "properties": {
      "user": {
        "properties": {
          "name": {
            "type": "keyword"
          }
        }
      }
    }
  }
)
$resp = $client->indices()->putMapping([
    "index" => "my-index-000001",
    "body" => [
        "properties" => [
            "user" => [
                "properties" => [
                    "name" => [
                        "type" => "keyword",
                    ],
                ],
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"properties":{"user":{"properties":{"name":{"type":"keyword"}}}}}' "$ELASTICSEARCH_URL/my-index-000001/_mapping"
client.indices().putMapping(p -> p
    .index("my-index-000001")
    .properties("user", pr -> pr
        .object(o -> o
            .properties("name", pro -> pro
                .keyword(k -> k)
            )
        )
    )
);
Request example
The update mapping API can be applied to multiple data streams or indices with a single request. For example, run `PUT /my-index-000001,my-index-000002/_mapping` to update mappings for the `my-index-000001` and `my-index-000002` indices at the same time.
{
  "properties": {
    "user": {
      "properties": {
        "name": {
          "type": "keyword"
        }
      }
    }
  }
}

Create an inference endpoint Generally available; Added in 8.11.0

PUT /_inference/{task_type}/{inference_id}

All methods and paths for this operation:

PUT /_inference/{inference_id}

PUT /_inference/{task_type}/{inference_id}

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

The following integrations are available through the inference API. You can find the available task types next to the integration name:

  • AlibabaCloud AI Search (completion, rerank, sparse_embedding, text_embedding)
  • Amazon Bedrock (completion, text_embedding)
  • Amazon SageMaker (chat_completion, completion, rerank, sparse_embedding, text_embedding)
  • Anthropic (completion)
  • Azure AI Studio (completion, text_embedding)
  • Azure OpenAI (completion, text_embedding)
  • Cohere (completion, rerank, text_embedding)
  • DeepSeek (chat_completion, completion)
  • Elasticsearch (rerank, sparse_embedding, text_embedding - this service is for built-in models and models uploaded through Eland)
  • ELSER (sparse_embedding)
  • Google AI Studio (completion, text_embedding)
  • Google Vertex AI (chat_completion, completion, rerank, text_embedding)
  • Hugging Face (chat_completion, completion, rerank, text_embedding)
  • JinaAI (rerank, text_embedding)
  • Llama (chat_completion, completion, text_embedding)
  • Mistral (chat_completion, completion, text_embedding)
  • OpenAI (chat_completion, completion, text_embedding)
  • VoyageAI (rerank, text_embedding)
  • Watsonx inference integration (text_embedding)

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string Required

    The task type. Refer to the integration list in the API description for the available task types.

    Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

  • inference_id string Required

    The inference Id

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body Required

  • chunking_settings object

    Chunking configuration object

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The service type

  • service_settings object Required

    Settings specific to the service

  • task_settings object

    Task settings specific to the service and task type

Responses

  • 200 application/json
    Hide response attributes Show response attributes object

    Represents an inference endpoint as returned by the GET API

    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

PUT /_inference/{task_type}/{inference_id}
PUT _inference/rerank/my-rerank-model
{
 "service": "cohere",
 "service_settings": {
   "model_id": "rerank-english-v3.0",
   "api_key": "{{COHERE_API_KEY}}"
 }
}
resp = client.inference.put(
    task_type="rerank",
    inference_id="my-rerank-model",
    inference_config={
        "service": "cohere",
        "service_settings": {
            "model_id": "rerank-english-v3.0",
            "api_key": "{{COHERE_API_KEY}}"
        }
    },
)
const response = await client.inference.put({
  task_type: "rerank",
  inference_id: "my-rerank-model",
  inference_config: {
    service: "cohere",
    service_settings: {
      model_id: "rerank-english-v3.0",
      api_key: "{{COHERE_API_KEY}}",
    },
  },
});
response = client.inference.put(
  task_type: "rerank",
  inference_id: "my-rerank-model",
  body: {
    "service": "cohere",
    "service_settings": {
      "model_id": "rerank-english-v3.0",
      "api_key": "{{COHERE_API_KEY}}"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "rerank",
    "inference_id" => "my-rerank-model",
    "body" => [
        "service" => "cohere",
        "service_settings" => [
            "model_id" => "rerank-english-v3.0",
            "api_key" => "{{COHERE_API_KEY}}",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"cohere","service_settings":{"model_id":"rerank-english-v3.0","api_key":"{{COHERE_API_KEY}}"}}' "$ELASTICSEARCH_URL/_inference/rerank/my-rerank-model"
client.inference().put(p -> p
    .inferenceId("my-rerank-model")
    .taskType(TaskType.Rerank)
    .inferenceConfig(i -> i
        .service("cohere")
        .serviceSettings(JsonData.fromJson("{\"model_id\":\"rerank-english-v3.0\",\"api_key\":\"{{COHERE_API_KEY}}\"}"))
    )
);
Request example
An example body for a `PUT _inference/rerank/my-rerank-model` request.
{
 "service": "cohere",
 "service_settings": {
   "model_id": "rerank-english-v3.0",
   "api_key": "{{COHERE_API_KEY}}"
 }
}
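
The optional chunking_settings object documented above can be included in the same request body. A sketch of a sparse_embedding endpoint on the elasticsearch service that uses sentence chunking (the endpoint name is illustrative and the model ID assumes the built-in ELSER model):

resp = client.inference.put(
    task_type="sparse_embedding",
    inference_id="my-elser-endpoint",      # illustrative endpoint name
    inference_config={
        "service": "elasticsearch",
        "service_settings": {
            "model_id": ".elser_model_2",  # built-in ELSER model
            "num_allocations": 1,
            "num_threads": 1,
        },
        "chunking_settings": {
            "strategy": "sentence",
            "max_chunk_size": 250,
            "sentence_overlap": 1,
        },
    },
)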

Create an Azure OpenAI inference endpoint Generally available; Added in 8.14.0

PUT /_inference/{task_type}/{azureopenai_inference_id}

Create an inference endpoint to perform an inference task with the azureopenai service.

The lists of chat completion models and embeddings models that you can choose from in your Azure OpenAI deployment can be found in the Azure models documentation.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API.

    Values are completion or text_embedding.

  • azureopenai_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, azureopenai.

    Value is azureopenai.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the azureopenai service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string

      A valid API key for your Azure OpenAI account. You must specify either api_key or entra_id. If you do not provide either or you provide both, you will receive an error when you try to create your model.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • api_version string Required

      The Azure API version ID to use. It is recommended to use the latest supported non-preview version.

    • deployment_id string Required

      The deployment name of your deployed models. Your Azure OpenAI deployments can be found through the Azure OpenAI Studio portal that is linked to your subscription.

      External documentation
    • entra_id string

      A valid Microsoft Entra token. You must specify either api_key or entra_id. If you do not provide either or you provide both, you will receive an error when you try to create your model.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from Azure. The azureopenai service sets a default number of requests allowed per minute depending on the task type. For text_embedding, it is set to 1440. For completion, it is set to 120.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute.

    • resource_name string Required

      The name of your Azure OpenAI resource. You can find this from the list of resources in the Azure Portal for your subscription.

      External documentation
  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attribute Show task_settings attribute object
    • user string

      For a completion or text_embedding task, specify the user issuing the request. This information can be used for abuse detection.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are text_embedding or completion.

PUT /_inference/{task_type}/{azureopenai_inference_id}
PUT _inference/text_embedding/azure_openai_embeddings
{
    "service": "azureopenai",
    "service_settings": {
        "api_key": "Api-Key",
        "resource_name": "Resource-name",
        "deployment_id": "Deployment-id",
        "api_version": "2024-02-01"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="azure_openai_embeddings",
    inference_config={
        "service": "azureopenai",
        "service_settings": {
            "api_key": "Api-Key",
            "resource_name": "Resource-name",
            "deployment_id": "Deployment-id",
            "api_version": "2024-02-01"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "azure_openai_embeddings",
  inference_config: {
    service: "azureopenai",
    service_settings: {
      api_key: "Api-Key",
      resource_name: "Resource-name",
      deployment_id: "Deployment-id",
      api_version: "2024-02-01",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "azure_openai_embeddings",
  body: {
    "service": "azureopenai",
    "service_settings": {
      "api_key": "Api-Key",
      "resource_name": "Resource-name",
      "deployment_id": "Deployment-id",
      "api_version": "2024-02-01"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "azure_openai_embeddings",
    "body" => [
        "service" => "azureopenai",
        "service_settings" => [
            "api_key" => "Api-Key",
            "resource_name" => "Resource-name",
            "deployment_id" => "Deployment-id",
            "api_version" => "2024-02-01",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"azureopenai","service_settings":{"api_key":"Api-Key","resource_name":"Resource-name","deployment_id":"Deployment-id","api_version":"2024-02-01"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/azure_openai_embeddings"
client.inference().put(p -> p
    .inferenceId("azure_openai_embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("azureopenai")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Api-Key\",\"resource_name\":\"Resource-name\",\"deployment_id\":\"Deployment-id\",\"api_version\":\"2024-02-01\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/azure_openai_embeddings` to create an inference endpoint that performs a `text_embedding` task. You do not specify a model, as it is defined already in the Azure OpenAI deployment.
{
    "service": "azureopenai",
    "service_settings": {
        "api_key": "Api-Key",
        "resource_name": "Resource-name",
        "deployment_id": "Deployment-id",
        "api_version": "2024-02-01"
    }
}
Run `PUT _inference/completion/azure_openai_completion` to create an inference endpoint that performs a `completion` task.
{
    "service": "azureopenai",
    "service_settings": {
        "api_key": "Api-Key",
        "resource_name": "Resource-name",
        "deployment_id": "Deployment-id",
        "api_version": "2024-02-01"
    }
}
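
If you authenticate with Microsoft Entra instead of an API key, supply entra_id in place of api_key, as described in the service settings above. A sketch; all identifiers are placeholders:

resp = client.inference.put(
    task_type="text_embedding",
    inference_id="azure_openai_embeddings",
    inference_config={
        "service": "azureopenai",
        "service_settings": {
            "entra_id": "Entra-token",          # placeholder Microsoft Entra token
            "resource_name": "Resource-name",
            "deployment_id": "Deployment-id",
            "api_version": "2024-02-01",
        },
    },
)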

Simulate a pipeline Generally available; Added in 5.0.0

POST /_ingest/pipeline/{id}/_simulate

All methods and paths for this operation:

GET /_ingest/pipeline/_simulate

POST /_ingest/pipeline/_simulate
GET /_ingest/pipeline/{id}/_simulate
POST /_ingest/pipeline/{id}/_simulate

Run an ingest pipeline against a set of provided documents. You can either specify an existing pipeline to use with the provided documents or supply a pipeline definition in the body of the request.

Required authorization

  • Cluster privileges: read_pipeline

Path parameters

  • id string Required

    The pipeline to test. If you don't specify a pipeline in the request body, this parameter is required.

Query parameters

  • verbose boolean

    If true, the response includes output data for each processor in the executed pipeline.

application/json

Body Required

  • docs array[object] Required

    Sample documents to test in the pipeline.

    Hide docs attributes Show docs attributes object
    • _id string

      Unique identifier for the document. This ID must be unique within the _index.

    • _index string

      Name of the index containing the document.

    • _source object Required

      JSON body for the document.

  • pipeline object Additional properties

    The pipeline to test. If you don't specify the pipeline request path parameter, this parameter is required. If you specify both this and the request path parameter, the API only uses the request path parameter.

    Hide pipeline attributes Show pipeline attributes object
    • description string

      Description of the ingest pipeline.

    • on_failure array[object]

      Processors to run immediately after a processor failure.

      Hide on_failure attributes Show on_failure attributes object
      • append object
      • attachment object
      • bytes object
      • circle object
      • community_id object
      • convert object
      • csv object
      • date object
      • date_index_name object
      • dissect object
      • dot_expander object
      • drop object
      • enrich object
      • fail object
      • fingerprint object
      • foreach object
      • ip_location object
      • geo_grid object
      • geoip object
      • grok object
      • gsub object
      • html_strip object
      • inference object
      • join object
      • json object
      • kv object
      • lowercase object
      • network_direction object
      • pipeline object
      • redact object
      • registered_domain object
      • remove object
      • rename object
      • reroute object
      • script object
      • set object
      • set_security_user object
      • sort object
      • split object
      • terminate object
      • trim object
      • uppercase object
      • urldecode object
      • uri_parts object
      • user_agent object
    • processors array[object]

      Processors used to perform transformations on documents before indexing. Processors run sequentially in the order specified.

      Hide processors attributes Show processors attributes object
      • append object
      • attachment object
      • bytes object
      • circle object
      • community_id object
      • convert object
      • csv object
      • date object
      • date_index_name object
      • dissect object
      • dot_expander object
      • drop object
      • enrich object
      • fail object
      • fingerprint object
      • foreach object
      • ip_location object
      • geo_grid object
      • geoip object
      • grok object
      • gsub object
      • html_strip object
      • inference object
      • join object
      • json object
      • kv object
      • lowercase object
      • network_direction object
      • pipeline object
      • redact object
      • registered_domain object
      • remove object
      • rename object
      • reroute object
      • script object
      • set object
      • set_security_user object
      • sort object
      • split object
      • terminate object
      • trim object
      • uppercase object
      • urldecode object
      • uri_parts object
      • user_agent object
    • version number

      Version number used by external systems to track ingest pipelines.

    • deprecated boolean

      Marks this ingest pipeline as deprecated. When a deprecated ingest pipeline is referenced as the default or final pipeline when creating or updating a non-deprecated index template, Elasticsearch will emit a deprecation warning.

      Default value is false.

    • _meta object

      Arbitrary metadata about the ingest pipeline. This map is not automatically generated by Elasticsearch.

      Hide _meta attribute Show _meta attribute object
      • * object Additional properties

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • docs array[object] Required
      Hide docs attributes Show docs attributes object
      • doc object

        The simulated document, with optional metadata.

        Hide doc attributes Show doc attributes object
        • _id string Required

          Unique identifier for the document. This ID must be unique within the _index.

        • _index string Required

          Name of the index containing the document.

        • _ingest object Required
        • _routing string

          Value used to send the document to a specific primary shard.

        • _source object Required

          JSON body for the document.

          Hide _source attribute Show _source attribute object
          • * object Additional properties
        • _version
        • _version_type string

          Supported values include:

          • internal: Use internal versioning that starts at 1 and increments with each update or delete.
          • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
          • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
          • force: This option is deprecated because it can cause primary and replica shards to diverge.

          Values are internal, external, external_gte, or force.

      • error object

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide error attributes Show error attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • processor_results array[object]
        Hide processor_results attributes Show processor_results attributes object
        • doc object

          The simulated document, with optional metadata.

        • tag string
        • processor_type string
        • status string

          Values are success, error, error_ignored, skipped, or dropped.

        • description string
        • ignored_error object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • error object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

POST /_ingest/pipeline/{id}/_simulate
POST /_ingest/pipeline/_simulate
{
  "pipeline" :
  {
    "description": "_description",
    "processors": [
      {
        "set" : {
          "field" : "field2",
          "value" : "_value"
        }
      }
    ]
  },
  "docs": [
    {
      "_index": "index",
      "_id": "id",
      "_source": {
        "foo": "bar"
      }
    },
    {
      "_index": "index",
      "_id": "id",
      "_source": {
        "foo": "rab"
      }
    }
  ]
}
resp = client.ingest.simulate(
    pipeline={
        "description": "_description",
        "processors": [
            {
                "set": {
                    "field": "field2",
                    "value": "_value"
                }
            }
        ]
    },
    docs=[
        {
            "_index": "index",
            "_id": "id",
            "_source": {
                "foo": "bar"
            }
        },
        {
            "_index": "index",
            "_id": "id",
            "_source": {
                "foo": "rab"
            }
        }
    ],
)
const response = await client.ingest.simulate({
  pipeline: {
    description: "_description",
    processors: [
      {
        set: {
          field: "field2",
          value: "_value",
        },
      },
    ],
  },
  docs: [
    {
      _index: "index",
      _id: "id",
      _source: {
        foo: "bar",
      },
    },
    {
      _index: "index",
      _id: "id",
      _source: {
        foo: "rab",
      },
    },
  ],
});
response = client.ingest.simulate(
  body: {
    "pipeline": {
      "description": "_description",
      "processors": [
        {
          "set": {
            "field": "field2",
            "value": "_value"
          }
        }
      ]
    },
    "docs": [
      {
        "_index": "index",
        "_id": "id",
        "_source": {
          "foo": "bar"
        }
      },
      {
        "_index": "index",
        "_id": "id",
        "_source": {
          "foo": "rab"
        }
      }
    ]
  }
)
$resp = $client->ingest()->simulate([
    "body" => [
        "pipeline" => [
            "description" => "_description",
            "processors" => array(
                [
                    "set" => [
                        "field" => "field2",
                        "value" => "_value",
                    ],
                ],
            ),
        ],
        "docs" => array(
            [
                "_index" => "index",
                "_id" => "id",
                "_source" => [
                    "foo" => "bar",
                ],
            ],
            [
                "_index" => "index",
                "_id" => "id",
                "_source" => [
                    "foo" => "rab",
                ],
            ],
        ),
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"pipeline":{"description":"_description","processors":[{"set":{"field":"field2","value":"_value"}}]},"docs":[{"_index":"index","_id":"id","_source":{"foo":"bar"}},{"_index":"index","_id":"id","_source":{"foo":"rab"}}]}' "$ELASTICSEARCH_URL/_ingest/pipeline/_simulate"
client.ingest().simulate(s -> s
    .docs(List.of(Document.of(d -> d
            .id("id")
            .index("index")
            .source(JsonData.fromJson("{\"foo\":\"bar\"}"))),Document.of(d -> d
            .id("id")
            .index("index")
            .source(JsonData.fromJson("{\"foo\":\"rab\"}")))))
    .pipeline(p -> p
        .description("_description")
        .processors(pr -> pr
            .set(se -> se
                .field("field2")
                .value(JsonData.fromJson("\"_value\""))
            )
        )
    )
);
Request example
You can specify the pipeline to test either in the request body or as a path parameter; a sketch of the path-parameter form, using the Python client, follows the example below.
{
  "pipeline" :
  {
    "description": "_description",
    "processors": [
      {
        "set" : {
          "field" : "field2",
          "value" : "_value"
        }
      }
    ]
  },
  "docs": [
    {
      "_index": "index",
      "_id": "id",
      "_source": {
        "foo": "bar"
      }
    },
    {
      "_index": "index",
      "_id": "id",
      "_source": {
        "foo": "rab"
      }
    }
  ]
}
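The following is a minimal sketch of the path-parameter form with the Python client. The pipeline name my-pipeline is a hypothetical example and assumes a stored pipeline with that ID already exists, so only the documents are supplied in the body.
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # adjust connection details for your cluster

# Simulate a stored pipeline by ID instead of defining it inline.
# "my-pipeline" is a hypothetical name; the pipeline must already exist.
resp = client.ingest.simulate(
    id="my-pipeline",
    docs=[
        {"_index": "index", "_id": "id", "_source": {"foo": "bar"}}
    ],
)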
Response examples (200)
A successful response for running an ingest pipeline against a set of provided documents.
{
   "docs": [
      {
         "doc": {
            "_id": "id",
            "_index": "index",
            "_version": "-3",
            "_source": {
               "field2": "_value",
               "foo": "bar"
            },
            "_ingest": {
               "timestamp": "2017-05-04T22:30:03.187Z"
            }
         }
      },
      {
         "doc": {
            "_id": "id",
            "_index": "index",
            "_version": "-3",
            "_source": {
               "field2": "_value",
               "foo": "rab"
            },
            "_ingest": {
               "timestamp": "2017-05-04T22:30:03.188Z"
            }
         }
      }
   ]
}
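The processor_results array described in the response schema is only populated when the simulation runs in verbose mode. Below is a minimal sketch, assuming the Python client's verbose parameter maps to the verbose query parameter and reusing the client from the sketch above.
# Run the simulation in verbose mode to get per-processor results.
resp = client.ingest.simulate(
    verbose=True,
    pipeline={
        "processors": [
            {"set": {"field": "field2", "value": "_value"}}
        ]
    },
    docs=[
        {"_index": "index", "_id": "id", "_source": {"foo": "bar"}}
    ],
)

# Each processor_results entry reports the processor type and its status
# (success, error, error_ignored, skipped, or dropped).
for result in resp["docs"][0]["processor_results"]:
    print(result["processor_type"], result["status"])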

Create a datafeed Generally available; Added in 5.4.0

PUT /_ml/datafeeds/{datafeed_id}

Datafeeds retrieve data from Elasticsearch for analysis by an anomaly detection job. You can associate only one datafeed with each anomaly detection job. The datafeed contains a query that runs at a defined interval (`frequency`). If you are concerned about delayed data, you can add a delay (`query_delay`) at each interval. By default, the datafeed uses the following query: `{"match_all": {"boost": 1}}`.

When Elasticsearch security features are enabled, your datafeed remembers which roles the user who created it had at the time of creation and runs the query using those same roles. If you provide secondary authorization headers, those credentials are used instead. You must use Kibana, this API, or the create anomaly detection jobs API to create a datafeed. Do not add a datafeed directly to the .ml-config index. Do not give users write privileges on the .ml-config index.

Required authorization

  • Index privileges: read
  • Cluster privileges: manage_ml

Path parameters

  • datafeed_id string Required

    A unique identifier for the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.

Query parameters

  • allow_no_indices boolean

    If true, wildcard indices expressions that resolve into no concrete indices are ignored. This includes the _all string or when no indices are specified.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_throttled boolean Deprecated

    If true, concrete, expanded, or aliased indices are ignored when frozen.

  • ignore_unavailable boolean

    If true, unavailable indices (missing or closed) are ignored.

application/json

Body Required

  • aggregations object

    If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data.

  • chunking_config object

    Datafeeds might be required to search over long time periods, for several months or years. This search is split into time chunks in order to ensure the load on Elasticsearch is managed. Chunking configuration controls how the size of these time chunks is calculated; it is an advanced configuration option. A configuration sketch is shown after the request example at the end of this section.

    Hide chunking_config attributes Show chunking_config attributes object
    • mode string Required

      If the mode is auto, the chunk size is dynamically calculated; this is the recommended value when the datafeed does not use aggregations. If the mode is manual, chunking is applied according to the specified time_span; use this mode when the datafeed uses aggregations. If the mode is off, no chunking is applied.

      Values are auto, manual, or off.

    • time_span string

      The time span that each search will be querying. This setting is applicable only when the mode is set to manual.

  • delayed_data_check_config object

    Specifies whether the datafeed checks for missing data and the size of the window. The datafeed can optionally search over indices that have already been read in an effort to determine whether any data has subsequently been added to the index. If missing data is found, it is a good indication that the query_delay is set too low and the data is being indexed after the datafeed has passed that moment in time. This check runs only on real-time datafeeds.

    Hide delayed_data_check_config attributes Show delayed_data_check_config attributes object
    • check_window string

      The window of time that is searched for late data. This window of time ends with the latest finalized bucket. It defaults to null, which causes an appropriate check_window to be calculated when the real-time datafeed runs. In particular, the default check_window span calculation is based on the maximum of 2h or 8 * bucket_span.

    • enabled boolean Required

      Specifies whether the datafeed periodically checks for delayed data.

  • frequency string

    The interval at which scheduled queries are made while the datafeed runs in real time. The default value is either the bucket span for short bucket spans, or, for longer bucket spans, a sensible fraction of the bucket span. When frequency is shorter than the bucket span, interim results for the last (partial) bucket are written then eventually overwritten by the full bucket results. If the datafeed uses aggregations, this value must be divisible by the interval of the date histogram aggregation.

  • indices string | array[string]

    An array of index names. Wildcards are supported. If any of the indices are in remote clusters, the master nodes and the machine learning nodes must have the remote_cluster_client role.

  • indices_options object

    Specifies index expansion options that are used during search

    Hide indices_options attributes Show indices_options attributes object
    • allow_no_indices boolean

      If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

    • expand_wildcards string | array[string]

      Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

      Supported values include:

      • all: Match any data stream or index, including hidden ones.
      • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
      • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
      • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
      • none: Wildcard expressions are not accepted.
    • ignore_unavailable boolean

      If true, missing or closed indices are not included in the response.

      Default value is false.

    • ignore_throttled boolean

      If true, concrete, expanded or aliased indices are ignored when frozen.

      Default value is true.

  • job_id string

    Identifier for the anomaly detection job.

  • max_empty_searches number

    If a real-time datafeed has never seen any data (including during any initial training period), it automatically stops and closes the associated job after this many real-time searches return no documents. In other words, it stops after frequency times max_empty_searches of real-time operation. If not set, a datafeed with no end time that sees no data remains started until it is explicitly stopped. By default, it is not set.

  • query object

    The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch.

    External documentation
  • query_delay string

    The number of seconds behind real time that data is queried. For example, if data from 10:04 a.m. might not be searchable in Elasticsearch until 10:06 a.m., set this property to 120 seconds. The default value is randomly selected between 60s and 120s. This randomness improves the query performance when there are multiple jobs running on the same node.

  • runtime_mappings object

    Specifies runtime fields for the datafeed search.

    Hide runtime_mappings attribute Show runtime_mappings attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • fields object

        For type composite

        Hide fields attribute Show fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        Hide fetch_fields attributes Show fetch_fields attributes object
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • input_field string

        For type lookup

      • target_field string

        For type lookup

      • target_index string

        For type lookup

      • script object

        Painless script executed at query time.

        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang
        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • type string Required

        Field type, which can be: boolean, composite, date, double, geo_point, ip, keyword, long, or lookup.

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

  • script_fields object

    Specifies scripts that evaluate custom expressions and return script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields.

    Hide script_fields attribute Show script_fields attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • script object Required
        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang string

          Specifies the language the script is written in.

          Supported values include:

          • painless: Painless scripting language, purpose-built for Elasticsearch.
          • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
          • mustache: Mustache templating language, used for templates.
          • java: Expert Java API

          Values are painless, expression, mustache, or java.

        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • ignore_failure boolean
  • scroll_size number

    The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window, which is 10,000 by default.

    Default value is 1000.

  • headers object

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • aggregations object
    • authorization object
      Hide authorization attributes Show authorization attributes object
      • api_key object

        If an API key was used for the most recent update to the datafeed, its name and identifier are listed in the response.

        Hide api_key attributes Show api_key attributes object
        • id string Required

          The identifier for the API key.

        • name string Required

          The name of the API key.

      • roles array[string]

        If a user ID was used for the most recent update to the datafeed, its roles at the time of the update are listed in the response.

      • service_account string

        If a service account was used for the most recent update to the datafeed, the account name is listed in the response.

    • chunking_config object Required
      Hide chunking_config attributes Show chunking_config attributes object
      • mode string Required

        If the mode is auto, the chunk size is dynamically calculated; this is the recommended value when the datafeed does not use aggregations. If the mode is manual, chunking is applied according to the specified time_span; use this mode when the datafeed uses aggregations. If the mode is off, no chunking is applied.

        Values are auto, manual, or off.

      • time_span string

        The time span that each search will be querying. This setting is applicable only when the mode is set to manual.

    • delayed_data_check_config object
      Hide delayed_data_check_config attributes Show delayed_data_check_config attributes object
      • check_window string

        The window of time that is searched for late data. This window of time ends with the latest finalized bucket. It defaults to null, which causes an appropriate check_window to be calculated when the real-time datafeed runs. In particular, the default check_window span calculation is based on the maximum of 2h or 8 * bucket_span.

      • enabled boolean Required

        Specifies whether the datafeed periodically checks for delayed data.

    • datafeed_id string Required
    • frequency string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • indices array[string] Required
    • job_id string Required
    • indices_options object

      Controls how to deal with unavailable concrete indices (closed or missing), how wildcard expressions are expanded to actual indices (all, closed or open indices) and how to deal with wildcard expressions that resolve to no indices.

      Hide indices_options attributes Show indices_options attributes object
      • allow_no_indices boolean

        If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

      • expand_wildcards string | array[string]

        Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

        Supported values include:

        • all: Match any data stream or index, including hidden ones.
        • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
        • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
        • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
        • none: Wildcard expressions are not accepted.
      • ignore_unavailable boolean

        If true, missing or closed indices are not included in the response.

        Default value is false.

      • ignore_throttled boolean

        If true, concrete, expanded or aliased indices are ignored when frozen.

        Default value is true.

    • max_empty_searches number
    • query object Required

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • query_delay string Required

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • runtime_mappings object
      Hide runtime_mappings attribute Show runtime_mappings attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
        • fetch_fields array[object]

          For type lookup

          Hide fetch_fields attributes Show fetch_fields attributes object
          • field
          • format string
        • format string

          A custom format for date type runtime fields.

        • input_field string

          For type lookup

        • target_field string

          For type lookup

        • target_index string

          For type lookup

        • script object

          Painless script executed at query time.

          Hide script attributes Show script attributes object
          • source string

            The script source.

          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          • options object
        • type string Required

          Field type, which can be: boolean, composite, date, double, geo_point, ip, keyword, long, or lookup.

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • script_fields object
      Hide script_fields attribute Show script_fields attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • script object Required
          Hide script attributes Show script attributes object
          • source string

            The script source.

          • id string

            The id for a stored script.

          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            Hide params attribute Show params attribute object
            • * object Additional properties
          • lang
          • options object
            Hide options attribute Show options attribute object
            • * string Additional properties
        • ignore_failure boolean
    • scroll_size number Required
PUT /_ml/datafeeds/{datafeed_id}
PUT _ml/datafeeds/datafeed-test-job?pretty
{
  "indices": [
    "kibana_sample_data_logs"
  ],
  "query": {
    "bool": {
      "must": [
        {
          "match_all": {}
        }
      ]
    }
  },
  "job_id": "test-job"
}
resp = client.ml.put_datafeed(
    datafeed_id="datafeed-test-job",
    pretty=True,
    indices=[
        "kibana_sample_data_logs"
    ],
    query={
        "bool": {
            "must": [
                {
                    "match_all": {}
                }
            ]
        }
    },
    job_id="test-job",
)
const response = await client.ml.putDatafeed({
  datafeed_id: "datafeed-test-job",
  pretty: "true",
  indices: ["kibana_sample_data_logs"],
  query: {
    bool: {
      must: [
        {
          match_all: {},
        },
      ],
    },
  },
  job_id: "test-job",
});
response = client.ml.put_datafeed(
  datafeed_id: "datafeed-test-job",
  pretty: "true",
  body: {
    "indices": [
      "kibana_sample_data_logs"
    ],
    "query": {
      "bool": {
        "must": [
          {
            "match_all": {}
          }
        ]
      }
    },
    "job_id": "test-job"
  }
)
$resp = $client->ml()->putDatafeed([
    "datafeed_id" => "datafeed-test-job",
    "pretty" => "true",
    "body" => [
        "indices" => array(
            "kibana_sample_data_logs",
        ),
        "query" => [
            "bool" => [
                "must" => array(
                    [
                        "match_all" => new ArrayObject([]),
                    ],
                ),
            ],
        ],
        "job_id" => "test-job",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"indices":["kibana_sample_data_logs"],"query":{"bool":{"must":[{"match_all":{}}]}},"job_id":"test-job"}' "$ELASTICSEARCH_URL/_ml/datafeeds/datafeed-test-job?pretty"
Request example
An example body for a `PUT _ml/datafeeds/datafeed-test-job?pretty` request.
{
  "indices": [
    "kibana_sample_data_logs"
  ],
  "query": {
    "bool": {
      "must": [
        {
          "match_all": {}
        }
      ]
    }
  },
  "job_id": "test-job"
}
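As referenced in the chunking_config description above, the sketch below sets the chunking, delayed-data check, and query-delay options explicitly with the Python client. It assumes the client exposes these body fields as keyword arguments, as in the example above; the datafeed ID, job ID, and all interval values are illustrative assumptions, not recommendations.
# Illustrative datafeed with explicit chunking and delayed-data checking.
resp = client.ml.put_datafeed(
    datafeed_id="datafeed-test-job",   # hypothetical datafeed ID
    job_id="test-job",                 # hypothetical anomaly detection job
    indices=["kibana_sample_data_logs"],
    query={"match_all": {}},
    query_delay="120s",                # query two minutes behind real time
    chunking_config={
        "mode": "manual",              # manual mode requires a time_span
        "time_span": "3h"
    },
    delayed_data_check_config={
        "enabled": True,
        "check_window": "2h"           # window searched for late-arriving data
    },
    scroll_size=1000,
)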

Get deprecation information Generally available; Added in 6.1.0

GET /{index}/_migration/deprecations

All methods and paths for this operation:

GET /_migration/deprecations

GET /{index}/_migration/deprecations

Get information about different cluster, node, and index level settings that use deprecated features that will be removed or changed in the next major version.

TIP: This API is designed for indirect use by the Upgrade Assistant. We strongly recommend you use the Upgrade Assistant.

Required authorization

  • Cluster privileges: manage

Path parameters

  • index string Required

    Comma-separated list of data streams or indices to check. Wildcard (*) expressions are supported.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • cluster_settings array[object] Required

      Cluster-level deprecation warnings.

      Hide cluster_settings attributes Show cluster_settings attributes object
      • details string

        Optional details about the deprecation warning.

      • level string Required

        The level property describes the significance of the issue.

        Supported values include:

        • none
        • info
        • warning: You can upgrade directly, but you are using deprecated functionality which will not be available or behave differently in the next major version.
        • critical: You cannot upgrade without fixing this problem.

        Values are none, info, warning, or critical.

      • message string Required

        Descriptive information about the deprecation warning.

      • url string Required

        A link to the breaking change documentation, where you can find more information about this change.

      • resolve_during_rolling_upgrade boolean Required
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
    • index_settings object Required

      Index warnings are sectioned off per index and can be filtered using an index-pattern in the query. This section includes warnings for the backing indices of data streams specified in the request path.

      Hide index_settings attribute Show index_settings attribute object
      • * array[object] Additional properties
        Hide * attributes Show * attributes object
        • details string

          Optional details about the deprecation warning.

        • level string Required

          The level property describes the significance of the issue.

          Supported values include:

          • none
          • info
          • warning: You can upgrade directly, but you are using deprecated functionality which will not be available or behave differently in the next major version.
          • critical: You cannot upgrade without fixing this problem.

          Values are none, info, warning, or critical.

        • message string Required

          Descriptive information about the deprecation warning.

        • url string Required

          A link to the breaking change documentation, where you can find more information about this change.

        • resolve_during_rolling_upgrade boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
    • data_streams object Required
      Hide data_streams attribute Show data_streams attribute object
      • * array[object] Additional properties
        Hide * attributes Show * attributes object
        • details string

          Optional details about the deprecation warning.

        • level string Required

          The level property describes the significance of the issue.

          Supported values include:

          • none
          • info
          • warning: You can upgrade directly, but you are using deprecated functionality which will not be available or behave differently in the next major version.
          • critical: You cannot upgrade without fixing this problem.

          Values are none, info, warning, or critical.

        • message string Required

          Descriptive information about the deprecation warning.

        • url string Required

          A link to the breaking change documentation, where you can find more information about this change.

        • resolve_during_rolling_upgrade boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
    • node_settings array[object] Required

      Node-level deprecation warnings. Since only a subset of your nodes might incorporate these settings, it is important to read the details section for more information about which nodes are affected.

      Hide node_settings attributes Show node_settings attributes object
      • details string

        Optional details about the deprecation warning.

      • level string Required

        The level property describes the significance of the issue.

        Supported values include:

        • none
        • info
        • warning: You can upgrade directly, but you are using deprecated functionality which will not be available or behave differently in the next major version.
        • critical: You cannot upgrade without fixing this problem.

        Values are none, info, warning, or critical.

      • message string Required

        Descriptive information about the deprecation warning.

      • url string Required

        A link to the breaking change documentation, where you can find more information about this change.

      • resolve_during_rolling_upgrade boolean Required
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
    • ml_settings array[object] Required

      Machine learning-related deprecation warnings.

      Hide ml_settings attributes Show ml_settings attributes object
      • details string

        Optional details about the deprecation warning.

      • level string Required

        The level property describes the significance of the issue.

        Supported values include:

        • none
        • info
        • warning: You can upgrade directly, but you are using deprecated functionality which will not be available or behave differently in the next major version.
        • critical: You cannot upgrade without fixing this problem.

        Values are none, info, warning, or critical.

      • message string Required

        Descriptive information about the deprecation warning.

      • url string Required

        A link to the breaking change documentation, where you can find more information about this change.

      • resolve_during_rolling_upgrade boolean Required
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
    • templates object Required

      Template warnings are sectioned off per template and include deprecations for both component templates and index templates.

      Hide templates attribute Show templates attribute object
      • * array[object] Additional properties
        Hide * attributes Show * attributes object
        • details string

          Optional details about the deprecation warning.

        • level string Required

          The level property describes the significance of the issue.

          Supported values include:

          • none
          • info
          • warning: You can upgrade directly, but you are using deprecated functionality which will not be available or behave differently in the next major version.
          • critical: You cannot upgrade without fixing this problem.

          Values are none, info, warning, or critical.

        • message string Required

          Descriptive information about the deprecation warning.

        • url string Required

          A link to the breaking change documentation, where you can find more information about this change.

        • resolve_during_rolling_upgrade boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
    • ilm_policies object Required

      ILM policy warnings are sectioned off per policy.

      Hide ilm_policies attribute Show ilm_policies attribute object
      • * array[object] Additional properties
        Hide * attributes Show * attributes object
        • details string

          Optional details about the deprecation warning.

        • level string Required

          The level property describes the significance of the issue.

          Supported values include:

          • none
          • info
          • warning: You can upgrade directly, but you are using deprecated functionality which will not be available or behave differently in the next major version.
          • critical: You cannot upgrade without fixing this problem.

          Values are none, info, warning, or critical.

        • message string Required

          Descriptive information about the deprecation warning.

        • url string Required

          A link to the breaking change documentation, where you can find more information about this change.

        • resolve_during_rolling_upgrade boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
GET /{index}/_migration/deprecations
GET /_migration/deprecations
resp = client.migration.deprecations()
const response = await client.migration.deprecations();
response = client.migration.deprecations
$resp = $client->migration()->deprecations();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_migration/deprecations"
client.migration().deprecations(d -> d);
Response examples (200)
An abbreviated response from `GET /_migration/deprecations`.
{
  "cluster_settings": [
    {
      "level": "critical",
      "message": "Cluster name cannot contain ':'",
      "url": "https://www.elastic.co/guide/en/elasticsearch/reference/7.0/breaking-changes-7.0.html#_literal_literal_is_no_longer_allowed_in_cluster_name",
      "details": "This cluster is named [mycompany:logging], which contains the illegal character ':'."
    }
  ],
  "node_settings": [],
  "index_settings": {
    "logs:apache": [
      {
        "level": "warning",
        "message": "Index name cannot contain ':'",
        "url": "https://www.elastic.co/guide/en/elasticsearch/reference/7.0/breaking-changes-7.0.html#_literal_literal_is_no_longer_allowed_in_index_name",
        "details": "This index is named [logs:apache], which contains the illegal character ':'."
      }
    ]
  },
  "ml_settings": []
}
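A response like the one above can be post-processed client-side, for example to gate an upgrade on critical issues only. Below is a minimal sketch with the Python client; the index pattern is a hypothetical example, and client is an Elasticsearch client instance as in the other Python examples.
# Collect critical deprecation warnings across cluster, node, and index settings.
resp = client.migration.deprecations(index="logs-*")  # hypothetical index pattern

critical = [issue for issue in resp["cluster_settings"] if issue["level"] == "critical"]
critical += [issue for issue in resp["node_settings"] if issue["level"] == "critical"]
for issues in resp["index_settings"].values():
    critical += [issue for issue in issues if issue["level"] == "critical"]

if critical:
    raise SystemExit(f"{len(critical)} critical deprecation(s) must be resolved before upgrading")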

Get rollup job information Deprecated Technical preview; Added in 6.3.0

GET /_rollup/job/{id}

All methods and paths for this operation:

GET /_rollup/job

GET /_rollup/job/{id}

Get the configuration, stats, and status of rollup jobs.

NOTE: This API returns only active (both STARTED and STOPPED) jobs. If a job was created, ran for a while, then was deleted, the API does not return any details about it. For details about a historical rollup job, the rollup capabilities API may be more useful.

Required authorization

  • Cluster privileges: monitor_rollup

Path parameters

  • id string Required

    Identifier for the rollup job. If it is _all or omitted, the API returns all rollup jobs.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • jobs array[object] Required
      Hide jobs attributes Show jobs attributes object
      • config object Required

        The rollup job configuration.

        Hide config attributes Show config attributes object
        • cron string Required
        • groups object Required
        • id string Required
        • index_pattern string Required
        • metrics array[object] Required
        • page_size number Required
        • rollup_index string Required
        • timeout string Required

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • stats object Required

        Transient statistics about the rollup job, such as how many documents have been processed and how many rollup summary docs have been indexed. These stats are not persisted. If a node is restarted, these stats are reset.

        Hide stats attributes Show stats attributes object
        • documents_processed number Required
        • index_failures number Required
        • index_total number Required
        • pages_processed number Required
        • rollups_indexed number Required
        • search_failures number Required
        • search_total number Required
        • trigger_count number Required
        • processing_total number Required
      • status object Required

        The current status of the indexer for the rollup job.

        Hide status attributes Show status attributes object
        • current_position object
          Hide current_position attribute Show current_position attribute object
          • * object Additional properties
        • job_state string Required

          Values are started, indexing, stopping, stopped, or aborting.

        • upgraded_doc_id boolean
GET /_rollup/job/{id}
GET _rollup/job/sensor
resp = client.rollup.get_jobs(
    id="sensor",
)
const response = await client.rollup.getJobs({
  id: "sensor",
});
response = client.rollup.get_jobs(
  id: "sensor"
)
$resp = $client->rollup()->getJobs([
    "id" => "sensor",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_rollup/job/sensor"
client.rollup().getJobs(g -> g
    .id("sensor")
);
Response examples (200)
A successful response from `GET _rollup/job/sensor`.
{
  "jobs": [
    {
      "config": {
        "id": "sensor",
        "index_pattern": "sensor-*",
        "rollup_index": "sensor_rollup",
        "cron": "*/30 * * * * ?",
        "groups": {
          "date_histogram": {
            "fixed_interval": "1h",
            "delay": "7d",
            "field": "timestamp",
            "time_zone": "UTC"
          },
          "terms": {
            "fields": [
              "node"
            ]
          }
        },
        "metrics": [
          {
            "field": "temperature",
            "metrics": [
              "min",
              "max",
              "sum"
            ]
          },
          {
            "field": "voltage",
            "metrics": [
              "avg"
            ]
          }
        ],
        "timeout": "20s",
        "page_size": 1000
      },
      "status": {
        "job_state": "stopped"
      },
      "stats": {
        "pages_processed": 0,
        "documents_processed": 0,
        "rollups_indexed": 0,
        "trigger_count": 0,
        "index_failures": 0,
        "index_time_in_ms": 0,
        "index_total": 0,
        "search_failures": 0,
        "search_time_in_ms": 0,
        "search_total": 0,
        "processing_time_in_ms": 0,
        "processing_total": 0
      }
    }
  ]
}
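Since the API reports only active jobs, a common follow-up is to list every job with the _all identifier and inspect each indexer state. A short sketch with the Python client, reading only fields shown in the response schema above:
# List every active rollup job and report its indexer state and progress.
resp = client.rollup.get_jobs(id="_all")

for job in resp["jobs"]:
    print(
        f"{job['config']['id']}: "
        f"state={job['status']['job_state']}, "
        f"documents_processed={job['stats']['documents_processed']}"
    )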