Get behavioral analytics collections Deprecated Technical preview

GET /_application/analytics/{name}

Path parameters

  • name array[string] Required

    A list of analytics collections to limit the returned information

Responses

  • 200 application/json
    • * object Additional properties
      • event_data_stream object Required
        • name string Required
GET /_application/analytics/{name}
curl \
 --request GET 'http://api.example.com/_application/analytics/{name}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _application/analytics/my*`
{
  "my_analytics_collection": {
      "event_data_stream": {
          "name": "behavioral_analytics-events-my_analytics_collection"
      }
  },
  "my_analytics_collection2": {
      "event_data_stream": {
          "name": "behavioral_analytics-events-my_analytics_collection2"
      }
  }
}
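
For example, a single collection can be requested by name instead of a wildcard (the collection name my_analytics_collection is a placeholder):

GET /_application/analytics/my_analytics_collection
curl \
 --request GET 'http://api.example.com/_application/analytics/my_analytics_collection' \
 --header "Authorization: $API_KEY"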

Get a document count

GET /_cat/count/{index}

Get quick access to a document count for a data stream, an index, or an entire cluster. The document count only includes live documents, not deleted documents which have not yet been removed by the merge process.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the count API.

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. It supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
    • epoch number | string

      Some APIs return numeric values (notably epoch timestamps) as strings as well as numbers. This union type captures that behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • timestamp string

      Time of day, expressed as HH:MM:SS.

    • count string

      The document count.

GET /_cat/count/{index}
curl \
 --request GET 'http://api.example.com/_cat/count/{index}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/count/my-index-000001?v=true&format=json`. It retrieves the document count for the `my-index-000001` data stream or index.
[
  {
    "epoch": "1475868259",
    "timestamp": "15:24:20",
    "count": "120"
  }
]
A successful response from `GET /_cat/count?v=true&format=json`. It retrieves the document count for all data streams and indices in the cluster.
[
  {
    "epoch": "1475868259",
    "timestamp": "15:24:20",
    "count": "121"
  }
]
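
As an additional, illustrative example (the index name is a placeholder), the h parameter described above can limit the output to specific columns:

GET /_cat/count/my-index-000001?v=true&h=timestamp,count&format=json
curl \
 --request GET 'http://api.example.com/_cat/count/my-index-000001?v=true&h=timestamp,count&format=json' \
 --header "Authorization: $API_KEY"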

Get field data cache information

GET /_cat/fielddata

Get the amount of heap memory currently used by the field data cache on every data node in the cluster.

IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes stats API.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • fields string | array[string]

    Comma-separated list of fields used to limit returned information.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
GET /_cat/fielddata
curl \
 --request GET 'http://api.example.com/_cat/fielddata' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/fielddata?v=true&fields=body&format=json`. You can specify an individual field in the request body or URL path. This example retrieves heap memory size information for the `body` field.
[
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "body",
    "size": "544b"
  }
]
A successful response from `GET /_cat/fielddata/body,soul?v=true&format=json`. You can specify a comma-separated list of fields in the request body or URL path. This example retrieves heap memory size information for the `body` and `soul` fields. To get information for all fields, run `GET /_cat/fielddata?v=true`.
[
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "1127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "body",
    "size": "544b"
  },
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "soul",
    "size": "480b"
  }
]
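
As an illustrative sketch using the query parameters above, the output can be shown in kilobytes and sorted by size in descending order:

GET /_cat/fielddata?v=true&bytes=kb&s=size:desc&format=json
curl \
 --request GET 'http://api.example.com/_cat/fielddata?v=true&bytes=kb&s=size:desc&format=json' \
 --header "Authorization: $API_KEY"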

Get CAT help

GET /_cat

Get help for the CAT APIs. The response lists the available cat endpoints.

Responses

GET /_cat
curl \
 --request GET 'http://api.example.com/_cat' \
 --header "Authorization: $API_KEY"

Get index information

GET /_cat/indices/{index}

Get high-level information about indices in a cluster, including backing indices for data streams.

Use this request to get the following information for each index in a cluster:

  • shard count
  • document count
  • deleted document count
  • primary store size
  • total store size of all shards, including shard replicas

These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs.

CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • health string

    The health status used to limit returned indices. By default, the response includes indices of any health status.

    Supported values include:

    • green (or GREEN): All shards are assigned.
    • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
    • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.

    Values are green, GREEN, yellow, YELLOW, red, or RED.

  • include_unloaded_segments boolean

    If true, the response includes information from segments that are not loaded into memory.

  • pri boolean

    If true, the response only includes information from primary shards.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • master_timeout string

    The period to wait for a connection to the master node.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

GET /_cat/indices/{index}
curl \
 --request GET 'http://api.example.com/_cat/indices/{index}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/indices/my-index-*?v=true&s=index&format=json`.
[
  {
    "health": "yellow",
    "status": "open",
    "index": "my-index-000001",
    "uuid": "u8FNjxh8Rfy_awN11oDKYQ",
    "pri": "1",
    "rep": "1",
    "docs.count": "1200",
    "docs.deleted": "0",
    "store.size": "88.1kb",
    "pri.store.size": "88.1kb",
    "dataset.size": "88.1kb"
  },
  {
    "health": "green",
    "status": "open",
    "index": "my-index-000002",
    "uuid": "nYFWZEO7TUiOjLQXBaYJpA ",
    "pri": "1",
    "rep": "0",
    "docs.count": "0",
    "docs.deleted": "0",
    "store.size": "260b",
    "pri.store.size": "260b",
    "dataset.size": "260b"
  }
]
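
As a usage sketch combining the query parameters above, the following request restricts the output to indices with yellow health and sorts them by store size:

GET /_cat/indices?health=yellow&bytes=mb&s=store.size:desc&v=true&format=json
curl \
 --request GET 'http://api.example.com/_cat/indices?health=yellow&bytes=mb&s=store.size:desc&v=true&format=json' \
 --header "Authorization: $API_KEY"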




Get data frame analytics jobs Added in 7.7.0

GET /_cat/ml/data_frame/analytics

Get configuration and usage information about data frame analytics jobs.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.

Query parameters

  • allow_no_match boolean

    Whether to ignore if a wildcard expression matches no configs. (This includes the _all string or when no configs have been specified.)

  • bytes string

    The unit in which to display byte values

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • assignment_explanation (or ae): Contains messages relating to the selection of a node.
    • create_time (or ct, createTime): The time when the data frame analytics job was created.
    • description (or d): A description of a job.
    • dest_index (or di, destIndex): Name of the destination index.
    • failure_reason (or fr, failureReason): Contains messages about the reason why a data frame analytics job failed.
    • id: Identifier for the data frame analytics job.
    • model_memory_limit (or mml, modelMemoryLimit): The approximate maximum amount of memory resources that are permitted for the data frame analytics job.
    • node.address (or na, nodeAddress): The network address of the node that the data frame analytics job is assigned to.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that the data frame analytics job is assigned to.
    • node.id (or ni, nodeId): The unique identifier of the node that the data frame analytics job is assigned to.
    • node.name (or nn, nodeName): The name of the node that the data frame analytics job is assigned to.
    • progress (or p): The progress report of the data frame analytics job by phase.
    • source_index (or si, sourceIndex): Name of the source index.
    • state (or s): Current state of the data frame analytics job.
    • type (or t): The type of analysis that the data frame analytics job performs.
    • version (or v): The Elasticsearch version number in which the data frame analytics job was created.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • assignment_explanation (or ae): Contains messages relating to the selection of a node.
    • create_time (or ct, createTime): The time when the data frame analytics job was created.
    • description (or d): A description of a job.
    • dest_index (or di, destIndex): Name of the destination index.
    • failure_reason (or fr, failureReason): Contains messages about the reason why a data frame analytics job failed.
    • id: Identifier for the data frame analytics job.
    • model_memory_limit (or mml, modelMemoryLimit): The approximate maximum amount of memory resources that are permitted for the data frame analytics job.
    • node.address (or na, nodeAddress): The network address of the node that the data frame analytics job is assigned to.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that the data frame analytics job is assigned to.
    • node.id (or ni, nodeId): The unique identifier of the node that the data frame analytics job is assigned to.
    • node.name (or nn, nodeName): The name of the node that the data frame analytics job is assigned to.
    • progress (or p): The progress report of the data frame analytics job by phase.
    • source_index (or si, sourceIndex): Name of the source index.
    • state (or s): Current state of the data frame analytics job.
    • type (or t): The type of analysis that the data frame analytics job performs.
    • version (or v): The Elasticsearch version number in which the data frame analytics job was created.
  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

GET /_cat/ml/data_frame/analytics
curl \
 --request GET 'http://api.example.com/_cat/ml/data_frame/analytics' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/ml/data_frame/analytics?v=true&format=json`.
[
  {
    "id": "classifier_job_1",
    "type": "classification",
    "create_time": "2020-02-12T11:49:09.594Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_2",
    "type": "classification",
    "create_time": "2020-02-12T11:49:14.479Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_3",
    "type": "classification",
    "create_time": "2020-02-12T11:49:16.928Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_4",
    "type": "classification",
    "create_time": "2020-02-12T11:49:19.127Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_5",
    "type": "classification",
    "create_time": "2020-02-12T11:49:21.349Z",
    "state": "stopped"
  }
]
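
As an illustrative example, the h and s parameters described above accept the column names or their aliases:

GET /_cat/ml/data_frame/analytics?v=true&h=id,type,state,progress&s=create_time&format=json
curl \
 --request GET 'http://api.example.com/_cat/ml/data_frame/analytics?v=true&h=id,type,state,progress&s=create_time&format=json' \
 --header "Authorization: $API_KEY"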




Get datafeeds Added in 7.7.0

GET /_cat/ml/datafeeds

Get configuration and usage information about datafeeds. This API returns a maximum of 10,000 datafeeds. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no datafeeds that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.
  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    • id string

      The datafeed identifier.

    • state string

      Values are started, stopped, starting, or stopping.

    • assignment_explanation string

      For started datafeeds only, contains messages relating to the selection of a node.

    • buckets.count string

      The number of buckets processed.

    • search.count string

      The number of searches run by the datafeed.

    • search.time string

      The total time the datafeed spent searching, in milliseconds.

    • search.bucket_avg string

      The average search time per bucket, in milliseconds.

    • search.exp_avg_hour string

      The exponential average search time per hour, in milliseconds.

    • node.id string

      The unique identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.name string

      The name of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.ephemeral_id string

      The ephemeral identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.address string

      The network address of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

GET /_cat/ml/datafeeds
curl \
 --request GET 'http://api.example.com/_cat/ml/datafeeds' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/ml/datafeeds?v=true&format=json`.
[
  {
    "id": "datafeed-high_sum_total_sales",
    "state": "stopped",
    "buckets.count": "743",
    "search.count": "7"
  },
  {
    "id": "datafeed-low_request_rate",
    "state": "stopped",
    "buckets.count": "1457",
    "search.count": "3"
  },
  {
    "id": "datafeed-response_code_rates",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  },
  {
    "id": "datafeed-url_scanning",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  }
]
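
As an illustrative example, the h parameter described above can limit the output to specific columns or their aliases:

GET /_cat/ml/datafeeds?v=true&h=id,state,buckets.count,search.time&s=id&format=json
curl \
 --request GET 'http://api.example.com/_cat/ml/datafeeds?v=true&h=id,state,buckets.count,search.time&s=id&format=json' \
 --header "Authorization: $API_KEY"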

Get task information Technical preview

GET /_cat/tasks

Get information about tasks currently running in the cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the task management API.

Query parameters

  • actions array[string]

    The task action names, which are used to limit the response.

  • detailed boolean

    If true, the response includes detailed information about the running tasks.

  • nodes array[string]

    Unique node identifiers, which are used to limit the response.

  • parent_task_id string

    The parent task identifier, which is used to limit the response.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

  • wait_for_completion boolean

    If true, the request blocks until the task has completed.

Responses

GET /_cat/tasks
curl \
 --request GET 'http://api.example.com/_cat/tasks' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/tasks?v=true&format=json`.
[
  {
    "action": "cluster:monitor/tasks/lists[n]",
    "task_id": "oTUltX4IQMOUUVeiohTt8A:124",
    "parent_task_id": "oTUltX4IQMOUUVeiohTt8A:123",
    "type": "direct",
    "start_time": "1458585884904",
    "timestamp": "01:48:24",
    "running_time": "44.1micros",
    "ip": "127.0.0.1:9300",
    "node": "oTUltX4IQMOUUVeiohTt8A"
  },
  {
    "action": "cluster:monitor/tasks/lists",
    "task_id": "oTUltX4IQMOUUVeiohTt8A:123",
    "parent_task_id": "-",
    "type": "transport",
    "start_time": "1458585884904",
    "timestamp": "01:48:24",
    "running_time": "186.2micros",
    "ip": "127.0.0.1:9300",
    "node": "oTUltX4IQMOUUVeiohTt8A"
  }
]
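
As a usage sketch (the action pattern is illustrative), the actions and detailed parameters described above can narrow the listing to matching tasks:

GET /_cat/tasks?v=true&actions=*search*&detailed=true&format=json
curl \
 --request GET 'http://api.example.com/_cat/tasks?v=true&actions=*search*&detailed=true&format=json' \
 --header "Authorization: $API_KEY"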

Get index template information Added in 5.2.0

GET /_cat/templates

Get information about the index templates in a cluster. You can use index templates to apply index settings and field mappings to new indices at creation. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get index template API.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node sends requests for further information to each selected node.

  • master_timeout string

    The period to wait for a connection to the master node.

Responses

GET /_cat/templates
curl \
 --request GET 'http://api.example.com/_cat/templates' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/templates/my-template-*?v=true&s=name&format=json`.
[
  {
    "name": "my-template-0",
    "index_patterns": "[te*]",
    "order": "500",
    "version": null,
    "composed_of": "[]"
  },
  {
    "name": "my-template-1",
    "index_patterns": "[tea*]",
    "order": "501",
    "version": null,
    "composed_of": "[]"
  },
  {
    "name": "my-template-2",
    "index_patterns": "[teak*]",
    "order": "502",
    "version": "7",
    "composed_of": "[]"
  }
]
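
As an illustrative example using the h and s parameters described above:

GET /_cat/templates?v=true&h=name,index_patterns,version&s=name&format=json
curl \
 --request GET 'http://api.example.com/_cat/templates?v=true&h=name,index_patterns,version&s=name&format=json' \
 --header "Authorization: $API_KEY"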


Get transform information Added in 7.7.0

GET /_cat/transforms/{transform_id}

Get configuration and usage information about transforms.

CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get transform statistics API.

Path parameters

  • transform_id string Required

    A transform identifier or a wildcard expression. If you do not specify one of these options, the API returns information for all transforms.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request: contains wildcard expressions and there are no transforms that match; contains the _all string or no identifiers and there are no matches; or contains wildcard expressions and there are only partial matches. If true, it returns an empty transforms array when there are no matches and the subset of results when there are partial matches. If false, the request returns a 404 status code when there are no matches or only partial matches.

  • from number

    Skips the specified number of transforms.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • changes_last_detection_time (or cldt): The timestamp when changes were last detected in the source indices.
    • checkpoint (or cp): The sequence number for the checkpoint.
    • checkpoint_duration_time_exp_avg (or cdtea, checkpointTimeExpAvg): Exponential moving average of the duration of the checkpoint, in milliseconds.
    • checkpoint_progress (or c, checkpointProgress): The progress of the next checkpoint that is currently in progress.
    • create_time (or ct, createTime): The time the transform was created.
    • delete_time (or dtime): The amount of time spent deleting, in milliseconds.
    • description (or d): The description of the transform.
    • dest_index (or di, destIndex): The destination index for the transform. The mappings of the destination index are deduced based on the source fields when possible. If alternate mappings are required, use the Create index API prior to starting the transform.
    • documents_deleted (or docd): The number of documents that have been deleted from the destination index due to the retention policy for this transform.
    • documents_indexed (or doci): The number of documents that have been indexed into the destination index for the transform.
    • docs_per_second (or dps): Specifies a limit on the number of input documents per second. This setting throttles the transform by adding a wait time between search requests. The default value is null, which disables throttling.
    • documents_processed (or docp): The number of documents that have been processed from the source index of the transform.
    • frequency (or f): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.
    • id: Identifier for the transform.
    • index_failure (or if): The number of indexing failures.
    • index_time (or itime): The amount of time spent indexing, in milliseconds.
    • index_total (or it): The number of index operations.
    • indexed_documents_exp_avg (or idea): Exponential moving average of the number of new documents that have been indexed.
    • last_search_time (or lst, lastSearchTime): The timestamp of the last search in the source indices. This field is only shown if the transform is running.
    • max_page_search_size (or mpsz): Defines the initial page size to use for the composite aggregation for each checkpoint. If circuit breaker exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is 10 and the maximum is 65,536. The default value is 500.
    • pages_processed (or pp): The number of search or bulk index operations processed. Documents are processed in batches instead of individually.
    • pipeline (or p): The unique identifier for an ingest pipeline.
    • processed_documents_exp_avg (or pdea): Exponential moving average of the number of documents that have been processed.
    • processing_time (or pt): The amount of time spent processing results, in milliseconds.
    • reason (or r): If a transform has a failed state, this property provides details about the reason for the failure.
    • search_failure (or sf): The number of search failures.
    • search_time (or stime): The amount of time spent searching, in milliseconds.
    • search_total (or st): The number of search operations on the source index for the transform.
    • source_index (or si, sourceIndex): The source indices for the transform. It can be a single index, an index pattern (for example, "my-index-*"), an array of indices (for example, ["my-index-000001", "my-index-000002"]), or an array of index patterns (for example, ["my-index-*", "my-other-index-*"]). For remote indices, use the syntax "remote_name:index_name". If any indices are in remote clusters then the master node and at least one transform node must have the remote_cluster_client node role.
    • state (or s): The status of the transform, which can be one of the following values:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.
    • transform_type (or tt): Indicates the type of transform: batch or continuous.

    • trigger_count (or tc): The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • version (or v): The version of Elasticsearch that existed on the node when the transform was created.

  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • changes_last_detection_time (or cldt): The timestamp when changes were last detected in the source indices.
    • checkpoint (or cp): The sequence number for the checkpoint.
    • checkpoint_duration_time_exp_avg (or cdtea, checkpointTimeExpAvg): Exponential moving average of the duration of the checkpoint, in milliseconds.
    • checkpoint_progress (or c, checkpointProgress): The progress of the next checkpoint that is currently in progress.
    • create_time (or ct, createTime): The time the transform was created.
    • delete_time (or dtime): The amount of time spent deleting, in milliseconds.
    • description (or d): The description of the transform.
    • dest_index (or di, destIndex): The destination index for the transform. The mappings of the destination index are deduced based on the source fields when possible. If alternate mappings are required, use the Create index API prior to starting the transform.
    • documents_deleted (or docd): The number of documents that have been deleted from the destination index due to the retention policy for this transform.
    • documents_indexed (or doci): The number of documents that have been indexed into the destination index for the transform.
    • docs_per_second (or dps): Specifies a limit on the number of input documents per second. This setting throttles the transform by adding a wait time between search requests. The default value is null, which disables throttling.
    • documents_processed (or docp): The number of documents that have been processed from the source index of the transform.
    • frequency (or f): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.
    • id: Identifier for the transform.
    • index_failure (or if): The number of indexing failures.
    • index_time (or itime): The amount of time spent indexing, in milliseconds.
    • index_total (or it): The number of index operations.
    • indexed_documents_exp_avg (or idea): Exponential moving average of the number of new documents that have been indexed.
    • last_search_time (or lst, lastSearchTime): The timestamp of the last search in the source indices. This field is only shown if the transform is running.
    • max_page_search_size (or mpsz): Defines the initial page size to use for the composite aggregation for each checkpoint. If circuit breaker exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is 10 and the maximum is 65,536. The default value is 500.
    • pages_processed (or pp): The number of search or bulk index operations processed. Documents are processed in batches instead of individually.
    • pipeline (or p): The unique identifier for an ingest pipeline.
    • processed_documents_exp_avg (or pdea): Exponential moving average of the number of documents that have been processed.
    • processing_time (or pt): The amount of time spent processing results, in milliseconds.
    • reason (or r): If a transform has a failed state, this property provides details about the reason for the failure.
    • search_failure (or sf): The number of search failures.
    • search_time (or stime): The amount of time spent searching, in milliseconds.
    • search_total (or st): The number of search operations on the source index for the transform.
    • source_index (or si, sourceIndex): The source indices for the transform. It can be a single index, an index pattern (for example, "my-index-*"), an array of indices (for example, ["my-index-000001", "my-index-000002"]), or an array of index patterns (for example, ["my-index-*", "my-other-index-*"]). For remote indices, use the syntax "remote_name:index_name". If any indices are in remote clusters then the master node and at least one transform node must have the remote_cluster_client node role.
    • state (or s): The status of the transform, which can be one of the following values:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.
    • transform_type (or tt): Indicates the type of transform: batch or continuous.

    • trigger_count (or tc): The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • version (or v): The version of Elasticsearch that existed on the node when the transform was created.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • size number

    The maximum number of transforms to obtain.

Responses

  • 200 application/json
    • id string
    • state string

      The status of the transform. Returned values include: aborting: The transform is aborting. failed: The transform failed. For more information about the failure, check the reason field. indexing: The transform is actively processing data and creating new documents. started: The transform is running but not actively indexing data. stopped: The transform is stopped. stopping: The transform is stopping.

    • checkpoint string

      The sequence number for the checkpoint.

    • documents_processed string

      The number of documents that have been processed from the source index of the transform.

    • checkpoint_progress string | null

      The progress of the next checkpoint that is currently in progress.

    • last_search_time string | null

      The timestamp of the last search in the source indices. This field is shown only if the transform is running.

    • changes_last_detection_time string | null

      The timestamp when changes were last detected in the source indices.

    • create_time string

      The time the transform was created.

    • version string

    • source_index string

      The source indices for the transform.

    • dest_index string

      The destination index for the transform.

    • pipeline string

      The unique identifier for the ingest pipeline.

    • description string

      The description of the transform.

    • transform_type string

      The type of transform: batch or continuous.

    • frequency string

      The interval between checks for changes in the source indices when the transform is running continuously.

    • max_page_search_size string

      The initial page size that is used for the composite aggregation for each checkpoint.

    • docs_per_second string

      The number of input documents per second.

    • reason string

      If a transform has a failed state, these details describe the reason for failure.

    • search_total string

      The total number of search operations on the source index for the transform.

    • search_failure string

      The total number of search failures.

    • search_time string

      The total amount of search time, in milliseconds.

    • index_total string

      The total number of index operations done by the transform.

    • index_failure string

      The total number of indexing failures.

    • index_time string

      The total time spent indexing documents, in milliseconds.

    • documents_indexed string

      The number of documents that have been indexed into the destination index for the transform.

    • delete_time string

      The total time spent deleting documents, in milliseconds.

    • documents_deleted string

      The number of documents deleted from the destination index due to the retention policy for the transform.

    • trigger_count string

      The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • pages_processed string

      The number of search or bulk index operations processed. Documents are processed in batches instead of individually.

    • processing_time string

      The total time spent processing results, in milliseconds.

    • checkpoint_duration_time_exp_avg string

      The exponential moving average of the duration of the checkpoint, in milliseconds.

    • indexed_documents_exp_avg string

      The exponential moving average of the number of new documents that have been indexed.

    • processed_documents_exp_avg string

      The exponential moving average of the number of documents that have been processed.

GET /_cat/transforms/{transform_id}
curl \
 --request GET 'http://api.example.com/_cat/transforms/{transform_id}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/transforms?v=true&format=json`.
[
  {
    "id" : "ecommerce_transform",
    "state" : "started",
    "checkpoint" : "1",
    "documents_processed" : "705",
    "checkpoint_progress" : "100.00",
    "changes_last_detection_time" : null
  }
]
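
As an additional sketch, the column parameters and paging parameters described above can be combined:

GET /_cat/transforms?v=true&h=id,state,documents_processed,checkpoint_progress&s=id&from=0&size=100&format=json
curl \
 --request GET 'http://api.example.com/_cat/transforms?v=true&h=id,state,documents_processed,checkpoint_progress&s=id&from=0&size=100&format=json' \
 --header "Authorization: $API_KEY"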





Explain the shard allocations Added in 5.0.0

POST /_cluster/allocation/explain

Get explanations for shard allocations in the cluster. For unassigned shards, it provides an explanation for why the shard is unassigned. For assigned shards, it provides an explanation for why the shard is remaining on its current node and has not moved or rebalanced to another node. This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise.

Query parameters

application/json

Body

  • current_node string

    Specifies the node ID or the name of the node to only explain a shard that is currently located on the specified node.

  • index string
  • primary boolean

    If true, returns explanation for the primary shard for the given shard ID.

  • shard number

    Specifies the ID of the shard that you would like an explanation for.

Responses

POST /_cluster/allocation/explain
curl \
 --request POST 'http://api.example.com/_cluster/allocation/explain' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"index\": \"my-index-000001\",\n  \"shard\": 0,\n  \"primary\": false,\n  \"current_node\": \"my-node\"\n}"'
Request example
Run `GET _cluster/allocation/explain` to get an explanation for a shard's current allocation.
{
  "index": "my-index-000001",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
Response examples (200)
An example of an allocation explanation for an unassigned primary shard. In this example, a newly created index has an index setting that requires that it only be allocated to a node named `nonexistent_node`, which does not exist, so the index is unable to allocate.
{
  "index" : "my-index-000001",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "INDEX_CREATED",
    "at" : "2017-01-04T18:08:16.600Z",
    "last_allocation_status" : "no"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "Elasticsearch isn't allowed to allocate this shard to any of the nodes in the cluster. Choose a node to which you expect this shard to be allocated, find this node in the node-by-node explanation, and address the reasons which prevent Elasticsearch from allocating this shard there.",
  "node_allocation_decisions" : [
    {
      "node_id" : "8qt2rY-pT6KNZB3-hGfLnw",
      "node_name" : "node-0",
      "transport_address" : "127.0.0.1:9401",
      "roles" : ["data", "data_cold", "data_content", "data_frozen", "data_hot", "data_warm", "ingest", "master", "ml", "remote_cluster_client", "transform"],
      "node_attributes" : {},
      "node_decision" : "no",
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"nonexistent_node\"]"
        }
      ]
    }
  ]
}
An example of an allocation explanation for an unassigned primary shard that has reached the maximum number of allocation retry attempts. After the maximum number of retries is reached, Elasticsearch stops attempting to allocate the shard in order to prevent infinite retries which may impact cluster performance.
{
  "index" : "my-index-000001",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "at" : "2017-01-04T18:03:28.464Z",
    "failed shard on node [mEKjwwzLT1yJVb8UxT6anw]: failed recovery, failure RecoveryFailedException",
    "reason": "ALLOCATION_FAILED",
    "failed_allocation_attempts": 5,
    "last_allocation_status": "no",
  },
  "can_allocate": "no",
  "allocate_explanation": "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "3sULLVJrRneSg0EfBB-2Ew",
      "node_name" : "node_t0",
      "transport_address" : "127.0.0.1:9400",
      "roles" : ["data_content", "data_hot"],
      "node_decision" : "no",
      "store" : {
        "matching_size" : "4.2kb",
        "matching_size_in_bytes" : 4325
      },
      "deciders" : [
        {
          "decider": "max_retry",
          "decision" : "NO",
          "explanation": "shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [POST /_cluster/reroute?retry_failed] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2024-07-30T21:04:12.166Z], failed_attempts[5], failed_nodes[[mEKjwwzLT1yJVb8UxT6anw]], delayed=false, details[failed shard on node [mEKjwwzLT1yJVb8UxT6anw]: failed recovery, failure RecoveryFailedException], allocation_status[deciders_no]]]"
        }
      ]
    }
  ]
}
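
The request body is optional. As a minimal sketch, calling the API with no body asks Elasticsearch to explain an arbitrary unassigned shard, assuming at least one shard in the cluster is unassigned:

GET /_cluster/allocation/explain
curl \
 --request GET 'http://api.example.com/_cluster/allocation/explain' \
 --header "Authorization: $API_KEY"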

Get cluster info Added in 8.9.0

GET /_info/{target}

Returns basic information about the cluster.

Path parameters

  • target string | array[string] Required

    Limits the information returned to the specific target. Supports a comma-separated list, such as http,ingest.

    Supported values include: _all, http, ingest, thread_pool, script

Responses

  • 200 application/json
    • cluster_name string Required
    • http object
      • Current number of open HTTP connections for the node.

      • Total number of HTTP connections opened for the node.

      • clients array[object]

        Information on current and recently-closed HTTP client connections. Clients that have been closed longer than the http.client_stats.closed_channels.max_age setting will not be represented here.

        • id number

          Unique ID for the HTTP client.

        • agent string

          Reported agent for the HTTP client. If unavailable, this property is not included in the response.

        • Local address for the HTTP connection.

        • Remote address for the HTTP connection.

        • last_uri string

          The URI of the client’s most recent request.

        • Time at which the client opened the connection.

        • Time at which the client closed the connection if the connection is closed.

        • Time of the most recent request from this client.

        • Number of requests from this client.

        • Cumulative size in bytes of all requests from this client.

        • Value from the client’s x-opaque-id HTTP header. If unavailable, this property is not included in the response.

      • routes object Required Added in 8.12.0

        Detailed HTTP stats broken down by route

    • ingest object
      • pipelines object

        Contains statistics about ingest pipelines for the node.
        • * object Additional properties
          • count number Required

            Total number of documents ingested during the lifetime of this node.

          • current number Required

            Total number of documents currently being ingested.

          • failed number Required

            Total number of failed ingest operations during the lifetime of this node.

          • processors array[object] Required

            Total number of ingest processors.

            • * object Additional properties
          • Time unit for milliseconds

          • ingested_as_first_pipeline_in_bytes number Required Added in 8.15.0

            Total number of bytes of all documents ingested by the pipeline. This field is only present on pipelines which are the first to process a document. Thus, it is not present on pipelines which only serve as a final pipeline after a default pipeline, a pipeline run after a reroute processor, or pipelines in pipeline processors.

          • produced_as_first_pipeline_in_bytes number Required Added in 8.15.0

            Total number of bytes of all documents produced by the pipeline. This field is only present on pipelines which are the first to process a document. Thus, it is not present on pipelines which only serve as a final pipeline after a default pipeline, a pipeline run after a reroute processor, or pipelines in pipeline processors. In situations where there are subsequent pipelines, the value represents the size of the document after all pipelines have run.

      • total object
        • count number Required

          Total number of documents ingested during the lifetime of this node.

        • current number Required

          Total number of documents currently being ingested.

        • failed number Required

          Total number of failed ingest operations during the lifetime of this node.

        • Time unit for milliseconds

    • thread_pool object
      • * object Additional properties
        • active number

          Number of active threads in the thread pool.

        • Number of tasks completed by the thread pool executor.

        • largest number

          Highest number of active threads in the thread pool.

        • queue number

          Number of tasks in queue for the thread pool.

        • rejected number

          Number of tasks rejected by the thread pool executor.

        • threads number

          Number of threads in the thread pool.

    • script object
GET /_info/{target}
curl \
 --request GET 'http://api.example.com/_info/{target}' \
 --header "Authorization: $API_KEY"

Reload the keystore on nodes in the cluster Added in 6.5.0

POST /_nodes/{node_id}/reload_secure_settings

Secure settings are stored in an on-disk keystore. Certain of these settings are reloadable. That is, you can change them on disk and reload them without restarting any nodes in the cluster. When you have updated reloadable secure settings in your keystore, you can use this API to reload those settings on each node.

When the Elasticsearch keystore is password protected and not simply obfuscated, you must provide the password for the keystore when you reload the secure settings. Reloading the settings for the whole cluster assumes that the keystores for all nodes are protected with the same password; this method is allowed only when inter-node communications are encrypted. Alternatively, you can reload the secure settings on each node by locally accessing the API and passing the node-specific Elasticsearch keystore password.

Path parameters

  • node_id string | array[string] Required

    The names of particular nodes in the cluster to target.

Query parameters

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

application/json

Responses

  • 200 application/json
    • _nodes object
      • failures array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      • * object Additional properties
POST /_nodes/{node_id}/reload_secure_settings
curl \
 --request POST 'http://api.example.com/_nodes/{node_id}/reload_secure_settings' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"secure_settings_password\": \"keystore-password\"\n}"'
Request example
Run `POST _nodes/reload_secure_settings` to reload the keystore on nodes in the cluster.
{
  "secure_settings_password": "keystore-password"
}
Response examples (200)
A successful response when reloading keystore on nodes in your cluster.
{
  "_nodes": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "cluster_name": "my_cluster",
  "nodes": {
    "pQHNt5rXTTWNvUgOrdynKg": {
      "name": "node-0"
    }
  }
}
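
As an illustrative sketch (nodeId1 and nodeId2 are placeholders), the reload can be limited to specific nodes via the node_id path parameter:

POST /_nodes/nodeId1,nodeId2/reload_secure_settings
curl \
 --request POST 'http://api.example.com/_nodes/nodeId1,nodeId2/reload_secure_settings' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"secure_settings_password": "keystore-password"}'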

Get node statistics

GET /_nodes/{node_id}/stats/{metric}

Get statistics for nodes in a cluster. By default, all stats are returned. You can limit the returned information by using metrics.

Path parameters

  • node_id string | array[string] Required

    Comma-separated list of node IDs or names used to limit returned information.

  • metric string | array[string] Required

    Limit the information returned to the specified metrics

Query parameters

  • completion_fields string | array[string]

    Comma-separated list or wildcard expressions of fields to include in fielddata and suggest statistics.

  • fielddata_fields string | array[string]

    Comma-separated list or wildcard expressions of fields to include in fielddata statistics.

  • fields string | array[string]

    Comma-separated list or wildcard expressions of fields to include in the statistics.

  • groups boolean

    Comma-separated list of search groups to include in the search statistics.

  • include_segment_file_sizes boolean

    If true, the call reports the aggregated disk usage of each one of the Lucene index files (only applies if segment stats are requested).

  • level string

    Indicates whether statistics are aggregated at the cluster, index, or shard level.

    Values are cluster, indices, or shards.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

  • types array[string]

    A comma-separated list of document types for the indexing index metric.

  • include_unloaded_segments boolean

    If true, the response includes information from segments that are not loaded into memory.

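As a usage sketch (the node name and field name are placeholders), the metric path segment can be combined with the field-level query parameters described above:

GET /_nodes/node-1/stats/indices?fielddata_fields=my-field&level=shards
curl \
 --request GET 'http://api.example.com/_nodes/node-1/stats/indices?fielddata_fields=my-field&level=shards' \
 --header "Authorization: $API_KEY"
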
Responses

  • 200 application/json
    • _nodes object
      • failures array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • nodes object Required
      • * object Additional properties
        • adaptive_selection object

          Statistics about adaptive replica selection.
          • * object Additional properties
            • The exponentially weighted moving average queue size of search requests on the keyed node.

            • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

            • The exponentially weighted moving average response time, in nanoseconds, of search requests on the keyed node.

            • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

            • The exponentially weighted moving average service time, in nanoseconds, of search requests on the keyed node.

            • The number of outstanding search requests to the keyed node from the node these stats are for.

            • rank string

              The rank of this node; used for shard selection when routing search requests.

        • breakers object

          Statistics about the field data circuit breaker.

          • * object Additional properties
            • Estimated memory used for the operation.

            • Estimated memory used, in bytes, for the operation.

            • Memory limit for the circuit breaker.

            • Memory limit, in bytes, for the circuit breaker.

            • overhead number

              A constant that all estimates for the circuit breaker are multiplied with to calculate a final estimate.

            • tripped number

              Total number of times the circuit breaker has been triggered and prevented an out of memory error.

        • fs object
          Hide fs attributes Show fs attributes object
          • data array[object]

            List of all file stores.

          • Last time the file stores statistics were refreshed. Recorded in milliseconds since the Unix Epoch.

          • total object
            Hide total attributes Show total attributes object
            • Total disk space available to this Java virtual machine on all file stores. Depending on OS or process level restrictions, this might appear less than free. This is the actual amount of free disk space the Elasticsearch node can utilise.

            • Total number of bytes available to this Java virtual machine on all file stores. Depending on OS or process level restrictions, this might appear less than free_in_bytes. This is the actual amount of free disk space the Elasticsearch node can utilise.

            • free string

              Total unallocated disk space in all file stores.

            • Total number of unallocated bytes in all file stores.

            • total string

              Total size of all file stores.

            • Total size of all file stores in bytes.

          • io_stats object
            Hide io_stats attributes Show io_stats attributes object
            • devices array[object]

              Array of disk metrics for each device that is backing an Elasticsearch data path. These disk metrics are probed periodically and averages between the last probe and the current probe are computed.

            • total object
        • host string
        • http object
          Hide http attributes Show http attributes object
          • Current number of open HTTP connections for the node.

          • Total number of HTTP connections opened for the node.

          • clients array[object]

            Information on current and recently-closed HTTP client connections. Clients that have been closed longer than the http.client_stats.closed_channels.max_age setting will not be represented here.

          • routes object Required Added in 8.12.0

            Detailed HTTP stats broken down by route

            Hide routes attribute Show routes attribute object
            • * object Additional properties
        • ingest object
          Hide ingest attributes Show ingest attributes object
          • Contains statistics about ingest pipelines for the node.

            Hide pipelines attribute Show pipelines attribute object
            • * object Additional properties
          • total object
            Hide total attributes Show total attributes object
            • count number Required

              Total number of documents ingested during the lifetime of this node.

            • current number Required

              Total number of documents currently being ingested.

            • failed number Required

              Total number of failed ingest operations during the lifetime of this node.

        • ip string | array[string]

          IP address and port for the node.

        • jvm object
          Hide jvm attributes Show jvm attributes object
          • Contains statistics about JVM buffer pools for the node.

            Hide buffer_pools attribute Show buffer_pools attribute object
            • * object Additional properties
          • classes object
            Hide classes attributes Show classes attributes object
          • gc object
            Hide gc attribute Show gc attribute object
            • Contains statistics about JVM garbage collectors for the node.

          • mem object
            Hide mem attributes Show mem attributes object
          • threads object
            Hide threads attributes Show threads attributes object
            • count number

              Number of active threads in use by JVM.

            • Highest number of threads used by JVM.

          • Last time JVM statistics were refreshed.

          • uptime string

            Human-readable JVM uptime. Only returned if the human query parameter is true.

          • JVM uptime in milliseconds.

        • name string
        • os object
          Hide os attributes Show os attributes object
          • cpu object
            Hide cpu attributes Show cpu attributes object
            • percent number
            • sys string

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

            • total string

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

            • user string

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • swap object
            Hide swap attributes Show swap attributes object
          • cgroup object
            Hide cgroup attributes Show cgroup attributes object
        • process object
          Hide process attributes Show process attributes object
          • cpu object
            Hide cpu attributes Show cpu attributes object
            • percent number
            • sys string

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

            • total string

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

            • user string

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • mem object
            Hide mem attributes Show mem attributes object
          • Number of opened file descriptors associated with the current process, or -1 if not supported.

          • Maximum number of file descriptors allowed on the system, or -1 if not supported.

          • Last time the statistics were refreshed. Recorded in milliseconds since the Unix Epoch.

        • roles array[string]

          Values are master, data, data_cold, data_content, data_frozen, data_hot, data_warm, client, ingest, ml, voting_only, transform, remote_cluster_client, or coordinating_only.

        • script object
          Hide script attributes Show script attributes object
          • Total number of times the script cache has evicted old data.

          • Total number of inline script compilations performed by the node.

          • Contains the recent history of script compilations.

            Hide compilations_history attribute Show compilations_history attribute object
            • * number Additional properties
          • Total number of times the script compilation circuit breaker has limited inline script compilations.

          • contexts array[object]
        • Statistics about each thread pool, including current size, queue and rejected tasks.

          Hide thread_pool attribute Show thread_pool attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • active number

              Number of active threads in the thread pool.

            • Number of tasks completed by the thread pool executor.

            • largest number

              Highest number of active threads in the thread pool.

            • queue number

              Number of tasks in queue for the thread pool.

            • rejected number

              Number of tasks rejected by the thread pool executor.

            • threads number

              Number of threads in the thread pool.

        • Hide transport attributes Show transport attributes object
          • The distribution of the time spent handling each inbound message on a transport thread, represented as a histogram.

          • The distribution of the time spent sending each outbound transport message on a transport thread, represented as a histogram.

          • rx_count number

            Total number of RX (receive) packets received by the node during internal cluster communication.

          • rx_size string

            Size of RX packets received by the node during internal cluster communication.

          • Size, in bytes, of RX packets received by the node during internal cluster communication.

          • Current number of inbound TCP connections used for internal communication between nodes.

          • tx_count number

            Total number of TX (transmit) packets sent by the node during internal cluster communication.

          • tx_size string

            Size of TX packets sent by the node during internal cluster communication.

          • Size, in bytes, of TX packets sent by the node during internal cluster communication.

          • The cumulative number of outbound transport connections that this node has opened since it started. Each transport connection may comprise multiple TCP connections but is only counted once in this statistic. Transport connections are typically long-lived so this statistic should remain constant in a stable cluster.

        • Contains a list of attributes for the node.

          Hide attributes attribute Show attributes attribute object
          • * string Additional properties
        • Hide discovery attributes Show discovery attributes object
          • Hide cluster_state_queue attributes Show cluster_state_queue attributes object
            • total number

              Total number of cluster states in queue.

            • pending number

              Number of pending cluster states in queue.

            • Number of committed cluster states in queue.

          • Hide published_cluster_states attributes Show published_cluster_states attributes object
          • Contains low-level statistics about how long various activities took during cluster state updates while the node was the elected master. Omitted if the node is not master-eligible. Every field whose name ends in _time within this object is also represented as a raw number of milliseconds in a field whose name ends in _time_millis. The human-readable fields with a _time suffix are only returned if requested with the ?human=true query parameter.

            Hide cluster_state_update attribute Show cluster_state_update attribute object
            • * object Additional properties
          • Hide serialized_cluster_states attributes Show serialized_cluster_states attributes object
          • Hide cluster_applier_stats attribute Show cluster_applier_stats attribute object
        • Hide indexing_pressure attribute Show indexing_pressure attribute object
          • memory object
            Hide memory attributes Show memory attributes object
        • indices object
          Hide indices attributes Show indices attributes object
GET /_nodes/{node_id}/stats/{metric}
curl \
 --request GET 'http://api.example.com/_nodes/{node_id}/stats/{metric}' \
 --header "Authorization: $API_KEY"
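
For example, field data statistics can be limited to specific fields by combining the indices metric with the fielddata_fields query parameter (the _local node filter and the field names below are placeholders):

GET /_nodes/_local/stats/indices?fielddata_fields=field1,field2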

Get feature usage information Added in 6.0.0

GET /_nodes/{node_id}/usage/{metric}

Path parameters

  • node_id string | array[string] Required

    A comma-separated list of node IDs or names to limit the returned information; use _local to return information from the node you're connecting to, or leave empty to get information from all nodes.

  • metric string | array[string] Required

    Limits the information returned to the specific metrics. A comma-separated list of the following options: _all, rest_actions.

Query parameters

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object
      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]
        Hide failures attributes Show failures attributes object
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • rest_actions object Required
          Hide rest_actions attribute Show rest_actions attribute object
          • * number Additional properties
        • Time unit for milliseconds

        • Time unit for milliseconds

        • aggregations object Required
          Hide aggregations attribute Show aggregations attribute object
          • * object Additional properties
GET /_nodes/{node_id}/usage/{metric}
curl \
 --request GET 'http://api.example.com/_nodes/{node_id}/usage/{metric}' \
 --header "Authorization: $API_KEY"

Get the cluster health Added in 8.7.0

GET /_health_report

Get a report with the health status of an Elasticsearch cluster. The report contains a list of indicators that compose Elasticsearch functionality.

Each indicator has a health status of: green, unknown, yellow or red. The indicator will provide an explanation and metadata describing the reason for its current health status.

The cluster’s status is controlled by the worst indicator status.

In the event that an indicator’s status is non-green, a list of impacts may be present in the indicator result which detail the functionalities that are negatively affected by the health issue. Each impact carries with it a severity level, an area of the system that is affected, and a simple description of the impact on the system.

Some health indicators can determine the root cause of a health problem and prescribe a set of steps that can be performed in order to improve the health of the system. The root cause and remediation steps are encapsulated in a diagnosis. A diagnosis contains a cause detailing a root cause analysis, an action containing a brief description of the steps to take to fix the problem, the list of affected resources (if applicable), and a detailed step-by-step troubleshooting guide to fix the diagnosed problem.

NOTE: The health indicators perform root cause analysis of non-green health statuses. This can be computationally expensive when called frequently. When setting up automated polling of the API for health status, set verbose to false to disable the more expensive analysis logic.

Query parameters

  • timeout string

    Explicit operation timeout.

  • verbose boolean

    Opt-in for more information about the health of the system.

  • size number

    Limit the number of affected resources the health report API returns.

Responses

GET /_health_report
curl \
 --request GET 'http://api.example.com/_health_report' \
 --header "Authorization: $API_KEY"
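
For automated polling, the more expensive root cause analysis can be skipped by setting the verbose query parameter to false, as described in the note above (a minimal sketch):

GET /_health_report?verbose=false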

Update the connector API key ID Beta

PUT /_connector/{connector_id}/_api_key_id

Update the api_key_id and api_key_secret_id fields of a connector. You can specify the ID of the API key used for authorization and the ID of the connector secret where the API key is stored. The connector secret ID is required only for Elastic managed (native) connectors. Self-managed connectors (connector clients) do not use this field.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_api_key_id
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_api_key_id' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n    \"api_key_id\": \"my-api-key-id\",\n    \"api_key_secret_id\": \"my-connector-secret-id\"\n}"'
Request example
{
    "api_key_id": "my-api-key-id",
    "api_key_secret_id": "my-connector-secret-id"
}
Response examples (200)
{
  "result": "updated"
}

Delete auto-follow patterns Added in 6.5.0

DELETE /_ccr/auto_follow/{name}

Delete a collection of cross-cluster replication auto-follow patterns.

External documentation

Path parameters

  • name string Required

    The auto-follow pattern collection to delete.

Query parameters

  • The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ccr/auto_follow/{name}
curl \
 --request DELETE 'http://api.example.com/_ccr/auto_follow/{name}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `DELETE /_ccr/auto_follow/my_auto_follow_pattern`, which deletes an auto-follow pattern.
{
  "acknowledged" : true
}

Pause an auto-follow pattern Added in 7.5.0

POST /_ccr/auto_follow/{name}/pause

Pause a cross-cluster replication auto-follow pattern. When the API returns, the auto-follow pattern is inactive. New indices that are created on the remote cluster and match the auto-follow patterns are ignored.

You can resume auto-following with the resume auto-follow pattern API. When it resumes, the auto-follow pattern is active again and automatically configures follower indices for newly created indices on the remote cluster that match its patterns. Remote indices that were created while the pattern was paused will also be followed, unless they have been deleted or closed in the interim.
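
For example, a pattern paused with this API can later be reactivated with the resume auto-follow pattern API (the pattern name below is a placeholder):

POST /_ccr/auto_follow/my_auto_follow_pattern/resume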

External documentation

Path parameters

  • name string Required

    The name of the auto-follow pattern to pause.

Query parameters

  • The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_ccr/auto_follow/{name}/pause
curl \
 --request POST 'http://api.example.com/_ccr/auto_follow/{name}/pause' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `POST /_ccr/auto_follow/my_auto_follow_pattern/pause`, which pauses an auto-follow pattern.
{
  "acknowledged" : true
}

Bulk index or delete documents

POST /{index}/_bulk

Perform multiple index, create, delete, and update actions in a single request. This reduces overhead and can greatly increase indexing speed.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:

  • To use the create action, you must have the create_doc, create, index, or write index privilege. Data streams support only the create action.
  • To use the index action, you must have the create, index, or write index privilege.
  • To use the delete action, you must have the delete or write index privilege.
  • To use the update action, you must have the index or write index privilege.
  • To automatically create a data stream or index with a bulk API request, you must have the auto_configure, create_index, or manage index privilege.
  • To make the result of a bulk operation visible to search using the refresh parameter, you must have the maintenance or manage index privilege.

Automatic data stream creation requires a matching index template with data stream enabled.

The actions are specified in the request body using a newline delimited JSON (NDJSON) structure:

action_and_meta_data\n
optional_source\n
action_and_meta_data\n
optional_source\n
....
action_and_meta_data\n
optional_source\n

The index and create actions expect a source on the next line and have the same semantics as the op_type parameter in the standard index API. A create action fails if a document with the same ID already exists in the target. An index action adds or replaces a document as necessary.

NOTE: Data streams support only the create action. To update or delete a document in a data stream, you must target the backing index containing the document.

An update action expects the partial doc, upsert, or script and its options to be specified on the next line.

A delete action does not expect a source on the next line and has the same semantics as the standard delete API.

NOTE: The final line of data must end with a newline character (\n). Each newline character may be preceded by a carriage return (\r). When sending NDJSON data to the _bulk endpoint, use a Content-Type header of application/json or application/x-ndjson. Because this format uses literal newline characters (\n) as delimiters, make sure that the JSON actions and sources are not pretty printed.

If you provide a target in the request path, it is used for any actions that don't explicitly specify an _index argument.

A note on the format: the idea here is to make processing as fast as possible. As some of the actions are redirected to other shards on other nodes, only action_meta_data is parsed on the receiving node side.

Client libraries using this protocol should strive to do something similar on the client side and reduce buffering as much as possible.

There is no "correct" number of actions to perform in a single bulk request. Experiment with different settings to find the optimal size for your particular workload. Note that Elasticsearch limits the maximum size of an HTTP request to 100mb by default so clients must ensure that no request exceeds this size. It is not possible to index a single document that exceeds the size limit, so you must pre-process any such documents into smaller pieces before sending them to Elasticsearch. For instance, split documents into pages or chapters before indexing them, or store raw binary data in a system outside Elasticsearch and replace the raw data with a link to the external system in the documents that you send to Elasticsearch.

Client support for bulk requests

Some of the officially supported clients provide helpers to assist with bulk requests and reindexing:

  • Go: Check out esutil.BulkIndexer
  • Perl: Check out Search::Elasticsearch::Client::5_0::Bulk and Search::Elasticsearch::Client::5_0::Scroll
  • Python: Check out elasticsearch.helpers.*
  • JavaScript: Check out client.helpers.*
  • .NET: Check out BulkAllObservable
  • PHP: Check out bulk indexing.

Submitting bulk requests with cURL

If you're providing text file input to curl, you must use the --data-binary flag instead of plain -d. The latter doesn't preserve newlines. For example:

$ cat requests
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
$ curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --data-binary "@requests"; echo
{"took":7, "errors": false, "items":[{"index":{"_index":"test","_id":"1","_version":1,"result":"created","forced_refresh":false}}]}

Optimistic concurrency control

Each index and delete action within a bulk API call may include the if_seq_no and if_primary_term parameters in their respective action and meta data lines. The if_seq_no and if_primary_term parameters control how operations are run, based on the last modification to existing documents. See Optimistic concurrency control for more details.
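
For example, an index action can be made conditional on the document's current sequence number and primary term by adding both parameters to its metadata line (the values below are placeholders):

{ "index" : { "_index" : "test", "_id" : "1", "if_seq_no" : 10, "if_primary_term" : 2 } }
{ "field1" : "value1" }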

Versioning

Each bulk item can include the version value using the version field. It automatically follows the behavior of the index or delete operation based on the _version mapping. It also supports the version_type field.
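
For example, an index action can carry an externally maintained version in its metadata line (the values below are placeholders):

{ "index" : { "_index" : "test", "_id" : "1", "version" : 5, "version_type" : "external" } }
{ "field1" : "value1" }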

Routing

Each bulk item can include the routing value using the routing field. It automatically follows the behavior of the index or delete operation based on the _routing mapping.

NOTE: Data streams do not support custom routing unless they were created with the allow_custom_routing setting enabled in the template.
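
For example, a routing value can be attached to an individual action in its metadata line (the values below are placeholders):

{ "index" : { "_index" : "test", "_id" : "1", "routing" : "user1" } }
{ "field1" : "value1" }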

Wait for active shards

When making bulk calls, you can set the wait_for_active_shards parameter to require a minimum number of shard copies to be active before starting to process the bulk request.
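
For example, to require two active shard copies before the bulk request is processed (an illustrative value; see the wait_for_active_shards query parameter below):

POST /_bulk?wait_for_active_shards=2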

Refresh

Control when the changes made by this request are visible to search.

NOTE: Only the shards that receive the bulk request will be affected by refresh. Imagine a _bulk?refresh=wait_for request with three documents in it that happen to be routed to different shards in an index with five shards. The request will only wait for those three shards to refresh. The other two shards that make up the index do not participate in the _bulk request at all.
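
For example, to block the bulk request until its changes are visible to search (see the refresh query parameter below):

POST /_bulk?refresh=wait_for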

Path parameters

  • index string Required

    The name of the data stream, index, or index alias to perform bulk actions on.

Query parameters

  • If true, the document source is included in the error message in case of parsing errors.

  • If true, the response will include the ingest pipelines that were run for each index or create.

  • pipeline string

    The pipeline identifier to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.

  • refresh string

    If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, wait for a refresh to make this operation visible to search. If false, do nothing with refreshes. Valid values: true, false, wait_for.

    Values are true, false, or wait_for.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false), or a list of the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • timeout string

    The period each action waits for the following operations: automatic index creation, dynamic mapping updates, and waiting for active shards. The default is 1m (one minute), which guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default is 1, which waits for each primary shard to be active.

  • If true, the request's actions must target an index alias.

  • If true, the request's actions must target a data stream (existing or to be created).

application/json

Body object Required

One of:
  • index object
    Hide index attributes Show index attributes object
    • _id string
    • _index string
    • routing string
    • version number
    • Values are internal, external, external_gte, or force.

    • A map from the full name of fields to the name of dynamic templates. It defaults to an empty map. If a name matches a dynamic template, that template will be applied regardless of other match predicates defined in the template. If a field is already defined in the mapping, then this parameter won't be used.

      Hide dynamic_templates attribute Show dynamic_templates attribute object
      • * string Additional properties
    • pipeline string

      The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.

    • If true, the request's actions must target an index alias.

  • create object
    Hide create attributes Show create attributes object
    • _id string
    • _index string
    • routing string
    • version number
    • Values are internal, external, external_gte, or force.

    • A map from the full name of fields to the name of dynamic templates. It defaults to an empty map. If a name matches a dynamic template, that template will be applied regardless of other match predicates defined in the template. If a field is already defined in the mapping, then this parameter won't be used.

      Hide dynamic_templates attribute Show dynamic_templates attribute object
      • * string Additional properties
    • pipeline string

      The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.

    • If true, the request's actions must target an index alias.

  • update object
    Hide update attributes Show update attributes object
  • delete object
    Hide delete attributes Show delete attributes object

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • errors boolean Required

      If true, one or more of the operations in the bulk request did not complete successfully.

    • items array[object] Required

      The result of each operation in the bulk request, in the order they were submitted.

      Hide items attribute Show items attribute object
    • took number Required

      The length of time, in milliseconds, it took to process the bulk request.

POST /{index}/_bulk
curl \
 --request POST 'http://api.example.com/{index}/_bulk' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{ \"index\" : { \"_index\" : \"test\", \"_id\" : \"1\" } }\n{ \"field1\" : \"value1\" }\n{ \"delete\" : { \"_index\" : \"test\", \"_id\" : \"2\" } }\n{ \"create\" : { \"_index\" : \"test\", \"_id\" : \"3\" } }\n{ \"field1\" : \"value3\" }\n{ \"update\" : {\"_id\" : \"1\", \"_index\" : \"test\"} }\n{ \"doc\" : {\"field2\" : \"value2\"} }"'
Run `POST _bulk` to perform multiple operations.
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_id" : "2" } }
{ "create" : { "_index" : "test", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_index" : "test"} }
{ "doc" : {"field2" : "value2"} }
When you run `POST _bulk` and use the `update` action, you can use `retry_on_conflict` as a field in the action itself (not in the extra payload line) to specify how many times an update should be retried in the case of a version conflict.
{ "update" : {"_id" : "1", "_index" : "index1", "retry_on_conflict" : 3} }
{ "doc" : {"field" : "value"} }
{ "update" : { "_id" : "0", "_index" : "index1", "retry_on_conflict" : 3} }
{ "script" : { "source": "ctx._source.counter += params.param1", "lang" : "painless", "params" : {"param1" : 1}}, "upsert" : {"counter" : 1}}
{ "update" : {"_id" : "2", "_index" : "index1", "retry_on_conflict" : 3} }
{ "doc" : {"field" : "value"}, "doc_as_upsert" : true }
{ "update" : {"_id" : "3", "_index" : "index1", "_source" : true} }
{ "doc" : {"field" : "value"} }
{ "update" : {"_id" : "4", "_index" : "index1"} }
{ "doc" : {"field" : "value"}, "_source": true}
To return only information about failed operations, run `POST /_bulk?filter_path=items.*.error`.
{ "update": {"_id": "5", "_index": "index1"} }
{ "doc": {"my_field": "foo"} }
{ "update": {"_id": "6", "_index": "index1"} }
{ "doc": {"my_field": "foo"} }
{ "create": {"_id": "7", "_index": "index1"} }
{ "my_field": "foo" }
Run `POST /_bulk` to perform a bulk request that consists of index and create actions with the `dynamic_templates` parameter. The bulk request creates two new fields `work_location` and `home_location` with type `geo_point` according to the `dynamic_templates` parameter. However, the `raw_location` field is created using default dynamic mapping rules, as a text field in that case since it is supplied as a string in the JSON document.
{ "index" : { "_index" : "my_index", "_id" : "1", "dynamic_templates": {"work_location": "geo_point"}} }
{ "field" : "value1", "work_location": "41.12,-71.34", "raw_location": "41.12,-71.34"}
{ "create" : { "_index" : "my_index", "_id" : "2", "dynamic_templates": {"home_location": "geo_point"}} }
{ "field" : "value2", "home_location": "41.12,-71.34"}
Response examples (200)
{
   "took": 30,
   "errors": false,
   "items": [
      {
         "index": {
            "_index": "test",
            "_id": "1",
            "_version": 1,
            "result": "created",
            "_shards": {
               "total": 2,
               "successful": 1,
               "failed": 0
            },
            "status": 201,
            "_seq_no" : 0,
            "_primary_term": 1
         }
      },
      {
         "delete": {
            "_index": "test",
            "_id": "2",
            "_version": 1,
            "result": "not_found",
            "_shards": {
               "total": 2,
               "successful": 1,
               "failed": 0
            },
            "status": 404,
            "_seq_no" : 1,
            "_primary_term" : 2
         }
      },
      {
         "create": {
            "_index": "test",
            "_id": "3",
            "_version": 1,
            "result": "created",
            "_shards": {
               "total": 2,
               "successful": 1,
               "failed": 0
            },
            "status": 201,
            "_seq_no" : 2,
            "_primary_term" : 3
         }
      },
      {
         "update": {
            "_index": "test",
            "_id": "1",
            "_version": 2,
            "result": "updated",
            "_shards": {
                "total": 2,
                "successful": 1,
                "failed": 0
            },
            "status": 200,
            "_seq_no" : 3,
            "_primary_term" : 4
         }
      }
   ]
}
If you run `POST /_bulk` with operations that update non-existent documents, the operations cannot complete successfully. The API returns a response with an `errors` property value `true`. The response also includes an error object for any failed operations. The error object contains additional information about the failure, such as the error type and reason.
{
  "took": 486,
  "errors": true,
  "items": [
    {
      "update": {
        "_index": "index1",
        "_id": "5",
        "status": 404,
        "error": {
          "type": "document_missing_exception",
          "reason": "[5]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    },
    {
      "update": {
        "_index": "index1",
        "_id": "6",
        "status": 404,
        "error": {
          "type": "document_missing_exception",
          "reason": "[6]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    },
    {
      "create": {
        "_index": "index1",
        "_id": "7",
        "_version": 1,
        "result": "created",
        "_shards": {
          "total": 2,
          "successful": 1,
          "failed": 0
        },
        "_seq_no": 0,
        "_primary_term": 1,
        "status": 201
      }
    }
  ]
}
An example response from `POST /_bulk?filter_path=items.*.error`, which returns only information about failed operations.
{
  "items": [
    {
      "update": {
        "error": {
          "type": "document_missing_exception",
          "reason": "[5]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    },
    {
      "update": {
        "error": {
          "type": "document_missing_exception",
          "reason": "[6]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    }
  ]
}

Delete documents Added in 5.0.0

POST /{index}/_delete_by_query

Deletes documents that match the specified query.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias:

  • read
  • delete or write

You can specify the query criteria in the request URI or the request body using the same syntax as the search API. When you submit a delete by query request, Elasticsearch gets a snapshot of the data stream or index when it begins processing the request and deletes matching documents using internal versioning. If a document changes between the time that the snapshot is taken and the delete operation is processed, it results in a version conflict and the delete operation fails.

NOTE: Documents with a version equal to 0 cannot be deleted using delete by query because internal versioning does not support 0 as a valid version number.

While processing a delete by query request, Elasticsearch performs multiple search requests sequentially to find all of the matching documents to delete. A bulk delete request is performed for each batch of matching documents. If a search or bulk request is rejected, the requests are retried up to 10 times, with exponential back off. If the maximum retry limit is reached, processing halts and all failed requests are returned in the response. Any delete requests that completed successfully still stick; they are not rolled back.

You can opt to count version conflicts instead of halting and returning by setting conflicts to proceed. Note that if you opt to count version conflicts the operation could attempt to delete more documents from the source than max_docs until it has successfully deleted max_docs documents, or it has gone through every document in the source query.
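
For example, to count version conflicts instead of halting (the index name is a placeholder):

POST /my-index-000001/_delete_by_query?conflicts=proceed
{
  "query": {
    "match_all": {}
  }
}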

Throttling delete requests

To control the rate at which delete by query issues batches of delete operations, you can set requests_per_second to any positive decimal number. This pads each batch with a wait time to throttle the rate. Set requests_per_second to -1 to disable throttling.

Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account. The padding time is the difference between the batch size divided by the requests_per_second and the time spent writing. By default the batch size is 1000, so if requests_per_second is set to 500:

target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds

Since the batch is issued as a single _bulk request, large batch sizes cause Elasticsearch to create many requests and wait before starting the next set. This is "bursty" instead of "smooth".
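
For example, to throttle the operation to 500 delete sub-requests per second (the index name and rate are illustrative):

POST /my-index-000001/_delete_by_query?requests_per_second=500
{
  "query": {
    "match_all": {}
  }
}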

Slicing

Delete by query supports sliced scroll to parallelize the delete process. This can improve efficiency and provide a convenient way to break the request down into smaller parts.

Setting slices to auto lets Elasticsearch choose the number of slices to use. This setting will use one slice per shard, up to a certain limit. If there are multiple source data streams or indices, it will choose the number of slices based on the index or backing index with the smallest number of shards. Adding slices to the delete by query operation creates sub-requests which means it has some quirks:

  • You can see these requests in the tasks APIs. These sub-requests are "child" tasks of the task for the request with slices.
  • Fetching the status of the task for the request with slices only contains the status of completed slices.
  • These sub-requests are individually addressable for things like cancellation and rethrottling.
  • Rethrottling the request with slices will rethrottle the unfinished sub-request proportionally.
  • Canceling the request with slices will cancel each sub-request.
  • Due to the nature of slices each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
  • Parameters like requests_per_second and max_docs on a request with slices are distributed proportionally to each sub-request. Combine that with the earlier point about distribution being uneven and you should conclude that using max_docs with slices might not result in exactly max_docs documents being deleted.
  • Each sub-request gets a slightly different snapshot of the source data stream or index though these are all taken at approximately the same time.

If you're slicing manually or otherwise tuning automatic slicing, keep in mind that:

  • Query performance is most efficient when the number of slices is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number as too many slices hurts performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.
  • Delete performance scales linearly across available resources with the number of slices.

Whether query or delete performance dominates the runtime depends on the documents being deleted and cluster resources.

Cancel a delete by query operation

Any delete by query can be canceled using the task cancel API. For example:

POST _tasks/r1A2WoRbTwKZ516z6NEs5A:36619/_cancel

The task ID can be found by using the get tasks API.

Cancellation should happen quickly but might take a few seconds. The get task status API will continue to list the delete by query task until this task checks that it has been cancelled and terminates itself.
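
For example, running delete by query tasks (and their task IDs) can be listed with the get tasks API; the actions filter below assumes the task action name ends in delete/byquery:

GET /_tasks?detailed=true&actions=*/delete/byquery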

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all.

Query parameters

  • If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • analyzer string

    Analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.

  • If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.

  • What to do if delete by query hits version conflicts: abort or proceed.

    Supported values include:

    • abort: Stop the delete by query operation if there are conflicts.
    • proceed: Continue the delete by query operation even if there are conflicts.

    Values are abort or proceed.

  • The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.

    Values are and, AND, or, or OR.

  • df string

    The field to use as default where no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • from number

    Skips the specified number of documents.

  • If false, the request returns an error if it targets a missing or closed index.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.

  • max_docs number

    The maximum number of documents to process. Defaults to all documents. When set to a value less than or equal to scroll_size, a scroll will not be used to retrieve the results for the operation.

  • The node or shard the operation should be performed on. It is random by default.

  • refresh boolean

    If true, Elasticsearch refreshes all shards involved in the delete by query after the request completes. This is different than the delete API's refresh parameter, which causes just the shard that received the delete request to be refreshed. Unlike the delete API, it does not support wait_for.

  • If true, the request cache is used for this request. Defaults to the index-level setting.

  • The throttle for this request in sub-requests per second.

  • routing string

    A custom value used to route operations to a specific shard.

  • q string

    A query in the Lucene query string syntax.

  • scroll string

    The period to retain the search context for scrolling.

  • The size of the scroll request that powers the operation.

  • The explicit timeout for each search request. It defaults to no timeout.

  • The type of the search operation. Available options include query_then_fetch and dfs_query_then_fetch.

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • slices number | string

    The number of slices this task should be divided into.

  • sort array[string]

    A comma-separated list of <field>:<direction> pairs.

  • stats array[string]

    The specific tag of the request for logging and statistical purposes.

  • The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.

    Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.

  • timeout string

    The period each deletion request waits for active shards.

  • version boolean

    If true, returns the document version as part of a hit.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The timeout value controls how long each write request waits for unavailable shards to become available.

  • If true, the request blocks until the operation is complete. If false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at .tasks/task/${taskId}. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.

application/json

Body Required

  • max_docs number

    The maximum number of documents to delete.

  • query object

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    External documentation
  • slice object
    Hide slice attributes Show slice attributes object
    • field string

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • id string Required
    • max number Required

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • batches number

      The number of scroll responses pulled back by the delete by query.

    • deleted number

      The number of documents that were successfully deleted.

    • failures array[object]

      An array of failures if there were any unrecoverable errors during the process. If this array is not empty, the request ended abnormally because of those failures. Delete by query is implemented using batches and any failures cause the entire process to end but all failures in the current batch are collected into the array. You can use the conflicts option to prevent reindex from ending on version conflicts.

      Hide failures attributes Show failures attributes object
    • noops number

      This field is always equal to zero for delete by query. It exists only so that delete by query, update by query, and reindex APIs return responses with the same structure.

    • The number of requests per second effectively run during the delete by query.

    • retries object
      Hide retries attributes Show retries attributes object
      • bulk number Required

        The number of bulk actions retried.

    • slice_id number
    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • Time unit for milliseconds

    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • Time unit for milliseconds

    • timed_out boolean

      If true, some requests run during the delete by query operation timed out.

    • took number

      Time unit for milliseconds

    • total number

      The number of documents that were successfully processed.

    • The number of version conflicts that the delete by query hit.

POST /{index}/_delete_by_query
curl \
 --request POST 'http://api.example.com/{index}/_delete_by_query' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"query\": {\n    \"match_all\": {}\n  }\n}"'
Run `POST /my-index-000001,my-index-000002/_delete_by_query` to delete all documents from multiple data streams or indices.
{
  "query": {
    "match_all": {}
  }
}
Run `POST my-index-000001/_delete_by_query` to delete a document by using a unique attribute.
{
  "query": {
    "term": {
      "user.id": "kimchy"
    }
  },
  "max_docs": 1
}
Run `POST my-index-000001/_delete_by_query` to slice a delete by query manually. Provide a slice ID and total number of slices.
{
  "slice": {
    "id": 0,
    "max": 2
  },
  "query": {
    "range": {
      "http.response.bytes": {
        "lt": 2000000
      }
    }
  }
}
Run `POST my-index-000001/_delete_by_query?refresh&slices=5` to let delete by query automatically parallelize using sliced scroll to slice on `_id`. The `slices` query parameter value specifies the number of slices to use.
{
  "query": {
    "range": {
      "http.response.bytes": {
        "lt": 2000000
      }
    }
  }
}
Response examples (200)
A successful response from `POST /my-index-000001/_delete_by_query`.
{
  "took" : 147,
  "timed_out": false,
  "total": 119,
  "deleted": 119,
  "batches": 1,
  "version_conflicts": 0,
  "noops": 0,
  "retries": {
    "bulk": 0,
    "search": 0
  },
  "throttled_millis": 0,
  "requests_per_second": -1.0,
  "throttled_until_millis": 0,
  "failures" : [ ]
}

Get multiple documents Added in 1.3.0

POST /{index}/_mget

Get multiple JSON documents by ID from one or more indices. If you specify an index in the request URI, you only need to specify the document IDs in the request body. To ensure fast responses, this multi get (mget) API responds with partial results if one or more shards fail.

Filter source fields

By default, the _source field is returned for every document (if stored). Use the _source and _source_include or _source_exclude attributes to filter what fields are returned for a particular document. You can include the _source, _source_includes, and _source_excludes query parameters in the request URI to specify the defaults to use when there are no per-document instructions.

Get stored fields

Use the stored_fields attribute to specify the set of stored fields you want to retrieve. Any requested fields that are not stored are ignored. You can include the stored_fields query parameter in the request URI to specify the defaults to use when there are no per-document instructions.

Path parameters

  • index string Required

    Name of the index to retrieve documents from when ids are specified, or when a document in the docs array does not specify an index.

Query parameters

  • Specifies the node or shard the operation should be performed on. Random by default.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes relevant shards before retrieving documents.

  • routing string

    Custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    True or false to return the _source field or not, or a list of fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    If true, retrieves the document fields stored in the index rather than the document _source.

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • docs array[object] Required

      The response includes a docs array that contains the documents in the order specified in the request. The structure of the returned documents is similar to that returned by the get API. If there is a failure getting a particular document, the error is included in place of the document.

      One of:
      Hide attributes Show attributes
      • _index string Required
      • fields object

        If the stored_fields parameter is set to true and found is true, it contains the document fields stored in the index.

        Hide fields attribute Show fields attribute object
        • * object Additional properties
      • _ignored array[string]
      • found boolean Required

        Indicates whether the document exists.

      • _id string Required
      • _primary_term number

        The primary term assigned to the document for the indexing operation.

      • _routing string

        The explicit routing, if set.

      • _seq_no number
      • _source object

        If found is true, it contains the document data formatted in JSON. If the _source parameter is set to false or the stored_fields parameter is set to true, it is excluded.

      • _version number
POST /{index}/_mget
curl \
 --request POST 'http://api.example.com/{index}/_mget' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"docs\": [\n    {\n      \"_id\": \"1\"\n    },\n    {\n      \"_id\": \"2\"\n    }\n  ]\n}"'
Run `GET /my-index-000001/_mget`. When you specify an index in the request URI, only the document IDs are required in the request body.
{
  "docs": [
    {
      "_id": "1"
    },
    {
      "_id": "2"
    }
  ]
}
Run `GET /_mget`. This request sets `_source` to `false` for document 1 to exclude the source entirely. It retrieves `field3` and `field4` from document 2. It retrieves the `user` field from document 3 but filters out the `user.location` field.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "_source": false
    },
    {
      "_index": "test",
      "_id": "2",
      "_source": [ "field3", "field4" ]
    },
    {
      "_index": "test",
      "_id": "3",
      "_source": {
        "include": [ "user" ],
        "exclude": [ "user.location" ]
      }
    }
  ]
}
Run `GET /_mget`. This request retrieves `field1` and `field2` from document 1 and `field3` and `field4` from document 2.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "stored_fields": [ "field1", "field2" ]
    },
    {
      "_index": "test",
      "_id": "2",
      "stored_fields": [ "field3", "field4" ]
    }
  ]
}
Run `GET /_mget?routing=key1`. If routing is used during indexing, you need to specify the routing value to retrieve documents. This request fetches `test/_doc/2` from the shard corresponding to routing key `key1`. It fetches `test/_doc/1` from the shard corresponding to routing key `key2`.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "routing": "key2"
    },
    {
      "_index": "test",
      "_id": "2"
    }
  ]
}
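A response example is not shown above. As an illustrative sketch (the document contents and version numbers are hypothetical), a response to `GET /my-index-000001/_mget` for IDs `1` and `2` might look like the following, with each entry using the same structure as the get API:
{
  "docs": [
    {
      "_index": "my-index-000001",
      "_id": "1",
      "_version": 1,
      "_seq_no": 0,
      "_primary_term": 1,
      "found": true,
      "_source": {
        "message": "trying out Elasticsearch"
      }
    },
    {
      "_index": "my-index-000001",
      "_id": "2",
      "found": false
    }
  ]
}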
































Get term vector information

GET /{index}/_termvectors

Get information and statistics about terms in the fields of a particular document.

You can retrieve term vectors for documents stored in the index or for artificial documents passed in the body of the request. You can specify the fields you are interested in through the fields parameter or by adding the fields to the request body. For example:

GET /my-index-000001/_termvectors/1?fields=message

Fields can be specified using wildcards, similar to the multi match query.

Term vectors are real-time by default, not near real-time. This can be changed by setting the realtime parameter to false.

You can request three types of values: term information, term statistics, and field statistics. By default, all term information and field statistics are returned for all fields but term statistics are excluded.

Term information

  • term frequency in the field (always returned)
  • term positions (positions: true)
  • start and end offsets (offsets: true)
  • term payloads (payloads: true), as base64 encoded bytes

If the requested information wasn't stored in the index, it will be computed on the fly if possible. Additionally, term vectors can be computed for artificial documents, that is, for documents not present in the index but provided by the user in the request.


Start and end offsets assume UTF-16 encoding is being used. If you want to use these offsets in order to get the original text that produced this token, you should make sure that the string you are taking a sub-string of is also encoded using UTF-16.

Behaviour

The term and field statistics are not accurate. Deleted documents are not taken into account. The information is only retrieved for the shard the requested document resides in. The term and field statistics are therefore only useful as relative measures whereas the absolute numbers have no meaning in this context. By default, when requesting term vectors of artificial documents, a shard to get the statistics from is randomly selected. Use routing only to hit a particular shard.

Path parameters

  • index string Required

    The name of the index that contains the document.

Query parameters

  • fields string | array[string]

    A comma-separated list or wildcard expressions of fields to include in the statistics. It is used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.

  • field_statistics boolean

    If true, the response includes:

    • The document count (how many documents contain this field).
    • The sum of document frequencies (the sum of document frequencies for all terms in this field).
    • The sum of total term frequencies (the sum of total term frequencies of each term in this field).
  • offsets boolean

    If true, the response includes term offsets.

  • payloads boolean

    If true, the response includes term payloads.

  • positions boolean

    If true, the response includes term positions.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • term_statistics boolean

    If true, the response includes:

    • The total term frequency (how often a term occurs in all documents).
    • The document frequency (the number of documents containing the current term).

    By default these values are not returned since term statistics can have a serious performance impact.

  • version number

    If true, returns the document version as part of a hit.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

application/json

Body

  • doc object

    An artificial document (a document not present in the index) for which you want to retrieve term vectors.

  • filter object
    Hide filter attributes Show filter attributes object
    • max_doc_freq number

      Ignore words which occur in more than this many docs. Defaults to unbounded.

    • max_num_terms number

      The maximum number of terms that must be returned per field.

    • max_term_freq number

      Ignore words with more than this frequency in the source doc. It defaults to unbounded.

    • max_word_length number

      The maximum word length above which words will be ignored. Defaults to unbounded.

    • min_doc_freq number

      Ignore terms which do not occur in at least this many docs.

    • min_term_freq number

      Ignore words with less than this frequency in the source doc.

    • min_word_length number

      The minimum word length below which words will be ignored.

  • per_field_analyzer object

    Override the default per-field analyzer. This is useful in order to generate term vectors in any fashion, especially when using artificial documents. When providing an analyzer for a field that already stores term vectors, the term vectors will be regenerated.

    Hide per_field_analyzer attribute Show per_field_analyzer attribute object
    • * string Additional properties
  • fields string | array[string]
  • field_statistics boolean

    If true, the response includes:

    • The document count (how many documents contain this field).
    • The sum of document frequencies (the sum of document frequencies for all terms in this field).
    • The sum of total term frequencies (the sum of total term frequencies of each term in this field).
  • offsets boolean

    If true, the response includes term offsets.

  • payloads boolean

    If true, the response includes term payloads.

  • positions boolean

    If true, the response includes term positions.

  • term_statistics boolean

    If true, the response includes:

    • The total term frequency (how often a term occurs in all documents).
    • The document frequency (the number of documents containing the current term).

    By default these values are not returned since term statistics can have a serious performance impact.

  • routing string
  • version number
  • version_type string

    The version type. Values are internal, external, external_gte, or force.

Responses

GET /{index}/_termvectors
curl \
 --request GET 'http://api.example.com/{index}/_termvectors' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"fields\" : [\"text\"],\n  \"offsets\" : true,\n  \"payloads\" : true,\n  \"positions\" : true,\n  \"term_statistics\" : true,\n  \"field_statistics\" : true\n}"'
Run `GET /my-index-000001/_termvectors/1` to return all information and statistics for field `text` in document 1.
{
  "fields" : ["text"],
  "offsets" : true,
  "payloads" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
Run `GET /my-index-000001/_termvectors/1` to set per-field analyzers. A different analyzer than the one at the field may be provided by using the `per_field_analyzer` parameter.
{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "test test test"
  },
  "fields": ["fullname"],
  "per_field_analyzer" : {
    "fullname": "keyword"
  }
}
Run `GET /imdb/_termvectors` to filter the terms returned based on their tf-idf scores. It returns the three most "interesting" keywords from the artificial document having the given "plot" field value. Notice that the keyword "Tony" or any stop words are not part of the response, as their tf-idf must be too low.
{
  "doc": {
    "plot": "When wealthy industrialist Tony Stark is forced to build an armored suit after a life-threatening incident, he ultimately decides to use its technology to fight against evil."
  },
  "term_statistics": true,
  "field_statistics": true,
  "positions": false,
  "offsets": false,
  "filter": {
    "max_num_terms": 3,
    "min_term_freq": 1,
    "min_doc_freq": 1
  }
}
Run `GET /my-index-000001/_termvectors/1`. Term vectors which are not explicitly stored in the index are automatically computed on the fly. This request returns all information and statistics for the fields in document 1, even though the terms haven't been explicitly stored in the index. Note that for the field text, the terms are not regenerated.
{
  "fields" : ["text", "some_field_without_term_vectors"],
  "offsets" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
Run `GET /my-index-000001/_termvectors`. Term vectors can be generated for artificial documents, that is for documents not present in the index. If dynamic mapping is turned on (default), the document fields not in the original mapping will be dynamically created.
{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "test test test"
  }
}
Response examples (200)
A successful response from `GET /my-index-000001/_termvectors/1`.
{
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 1,
  "found": true,
  "took": 6,
  "term_vectors": {
    "text": {
      "field_statistics": {
        "sum_doc_freq": 4,
        "doc_count": 2,
        "sum_ttf": 6
      },
      "terms": {
        "test": {
          "doc_freq": 2,
          "ttf": 4,
          "term_freq": 3,
          "tokens": [
            {
              "position": 0,
              "start_offset": 0,
              "end_offset": 4,
              "payload": "d29yZA=="
            },
            {
              "position": 1,
              "start_offset": 5,
              "end_offset": 9,
              "payload": "d29yZA=="
            },
            {
              "position": 2,
              "start_offset": 10,
              "end_offset": 14,
              "payload": "d29yZA=="
            }
          ]
        }
      }
    }
  }
}
A successful response from `GET /my-index-000001/_termvectors` with `per_field_analyzer` in the request body.
{
  "_index": "my-index-000001",
  "_version": 0,
  "found": true,
  "took": 6,
  "term_vectors": {
    "fullname": {
      "field_statistics": {
          "sum_doc_freq": 2,
          "doc_count": 4,
          "sum_ttf": 4
      },
      "terms": {
          "John Doe": {
            "term_freq": 1,
            "tokens": [
                {
                  "position": 0,
                  "start_offset": 0,
                  "end_offset": 8
                }
            ]
          }
      }
    }
  }
}
A successful response from `GET /my-index-000001/_termvectors` with a `filter` in the request body.
{
  "_index": "imdb",
  "_version": 0,
  "found": true,
  "term_vectors": {
      "plot": {
        "field_statistics": {
            "sum_doc_freq": 3384269,
            "doc_count": 176214,
            "sum_ttf": 3753460
        },
        "terms": {
            "armored": {
              "doc_freq": 27,
              "ttf": 27,
              "term_freq": 1,
              "score": 9.74725
            },
            "industrialist": {
              "doc_freq": 88,
              "ttf": 88,
              "term_freq": 1,
              "score": 8.590818
            },
            "stark": {
              "doc_freq": 44,
              "ttf": 47,
              "term_freq": 1,
              "score": 9.272792
            }
        }
      }
  }
}








Update documents Added in 2.4.0

POST /{index}/_update_by_query

Updates documents that match the specified query. If no query is specified, performs an update on every document in the data stream or index without modifying the source, which is useful for picking up mapping changes.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias:

  • read
  • index or write

You can specify the query criteria in the request URI or the request body using the same syntax as the search API.

When you submit an update by query request, Elasticsearch gets a snapshot of the data stream or index when it begins processing the request and updates matching documents using internal versioning. When the versions match, the document is updated and the version number is incremented. If a document changes between the time that the snapshot is taken and the update operation is processed, it results in a version conflict and the operation fails. You can opt to count version conflicts instead of halting and returning by setting conflicts to proceed. Note that if you opt to count version conflicts, the operation could attempt to update more documents from the source than max_docs until it has successfully updated max_docs documents or it has gone through every document in the source query.

NOTE: Documents with a version equal to 0 cannot be updated using update by query because internal versioning does not support 0 as a valid version number.

While processing an update by query request, Elasticsearch performs multiple search requests sequentially to find all of the matching documents. A bulk update request is performed for each batch of matching documents. Any query or update failures cause the update by query request to fail and the failures are shown in the response. Any update requests that completed successfully still stick; they are not rolled back.

Throttling update requests

To control the rate at which update by query issues batches of update operations, you can set requests_per_second to any positive decimal number. This pads each batch with a wait time to throttle the rate. Set requests_per_second to -1 to turn off throttling.

Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account. The padding time is the difference between the batch size divided by the requests_per_second and the time spent writing. By default the batch size is 1000, so if requests_per_second is set to 500:

target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds

Since the batch is issued as a single _bulk request, large batch sizes cause Elasticsearch to create many requests and wait before starting the next set. This is "bursty" instead of "smooth".
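As a sketch of how throttling is typically adjusted (the task ID below is a placeholder returned by a `wait_for_completion=false` request, and the rethrottle endpoint is documented separately), you can start a throttled request and later change or disable the limit:
curl \
 --request POST 'http://api.example.com/my-index-000001/_update_by_query?requests_per_second=500&wait_for_completion=false' \
 --header "Authorization: $API_KEY"
curl \
 --request POST 'http://api.example.com/_update_by_query/<task_id>/_rethrottle?requests_per_second=-1' \
 --header "Authorization: $API_KEY"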

Slicing

Update by query supports sliced scroll to parallelize the update process. This can improve efficiency and provide a convenient way to break the request down into smaller parts.

Setting slices to auto chooses a reasonable number for most data streams and indices. This setting will use one slice per shard, up to a certain limit. If there are multiple source data streams or indices, it will choose the number of slices based on the index or backing index with the smallest number of shards.

Adding slices to _update_by_query just automates the manual process of creating sub-requests, which means it has some quirks:

  • You can see these requests in the tasks APIs. These sub-requests are "child" tasks of the task for the request with slices.
  • Fetching the status of the task for the request with slices only contains the status of completed slices.
  • These sub-requests are individually addressable for things like cancellation and rethrottling.
  • Rethrottling the request with slices will rethrottle the unfinished sub-request proportionally.
  • Canceling the request with slices will cancel each sub-request.
  • Due to the nature of slices each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
  • Parameters like requests_per_second and max_docs on a request with slices are distributed proportionally to each sub-request. Combine that with the point above about distribution being uneven and you should conclude that using max_docs with slices might not result in exactly max_docs documents being updated.
  • Each sub-request gets a slightly different snapshot of the source data stream or index though these are all taken at approximately the same time.

If you're slicing manually or otherwise tuning automatic slicing, keep in mind that:

  • Query performance is most efficient when the number of slices is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number as too many slices hurts performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.
  • Update performance scales linearly across available resources with the number of slices.

Whether query or update performance dominates the runtime depends on the documents being reindexed and cluster resources.
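For example (a sketch; the index name is illustrative), to let Elasticsearch choose the slice count automatically, pass `slices=auto` rather than a fixed number:
curl \
 --request POST 'http://api.example.com/my-index-000001/_update_by_query?refresh&slices=auto' \
 --header "Authorization: $API_KEY"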

Update the document source

Update by query supports scripts to update the document source. As with the update API, you can set ctx.op to change the operation that is performed.

Set ctx.op = "noop" if your script decides that it doesn't have to make any changes. The update by query operation skips updating the document and increments the noop counter.

Set ctx.op = "delete" if your script decides that the document should be deleted. The update by query operation deletes the document and increments the deleted counter.

Update by query supports only index, noop, and delete. Setting ctx.op to anything else is an error. Setting any other field in ctx is an error. This API enables you to only modify the source of matching documents; you cannot move them.

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • analyzer string

    The analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.

  • analyze_wildcard boolean

    If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.

  • conflicts string

    The preferred behavior when update by query hits version conflicts: abort or proceed.

    Supported values include:

    • abort: Stop reindexing if there are conflicts.
    • proceed: Continue reindexing even if there are conflicts.

    Values are abort or proceed.

  • default_operator string

    The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.

    Values are and, AND, or, or OR.

  • df string

    The field to use as default where no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • from number

    Skips the specified number of documents.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.

  • max_docs number

    The maximum number of documents to process. It defaults to all documents. When set to a value less than or equal to scroll_size, a scroll will not be used to retrieve the results for the operation.

  • pipeline string

    The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, then setting the value to _none disables the default ingest pipeline for this request. If a final pipeline is configured it will always run, regardless of the value of this parameter.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • q string

    A query in the Lucene query string syntax.

  • refresh boolean

    If true, Elasticsearch refreshes affected shards to make the operation visible to search after the request completes. This is different than the update API's refresh parameter, which causes just the shard that received the request to be refreshed.

  • request_cache boolean

    If true, the request cache is used for this request. It defaults to the index-level setting.

  • requests_per_second number

    The throttle for this request in sub-requests per second.

  • routing string

    A custom value used to route operations to a specific shard.

  • scroll string

    The period to retain the search context for scrolling.

  • scroll_size number

    The size of the scroll request that powers the operation.

  • search_timeout string

    An explicit timeout for each search request. By default, there is no timeout.

  • search_type string

    The type of the search operation. Available options include query_then_fetch and dfs_query_then_fetch.

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • slices number | string

    The number of slices this task should be divided into.

  • sort array[string]

    A comma-separated list of <field>:<direction> pairs.

  • stats array[string]

    The specific tag of the request for logging and statistical purposes.

  • terminate_after number

    The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.

    IMPORTANT: Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.

  • timeout string

    The period each update request waits for the following operations: dynamic mapping updates, waiting for active shards. By default, it is one minute. This guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur.

  • version boolean

    If true, returns the document version as part of a hit.

  • version_type boolean

    Should the document increment the version number (internal) on hit or not (reindex).

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The timeout parameter controls how long each write request waits for unavailable shards to become available. Both work exactly the way they work in the bulk API.

  • wait_for_completion boolean

    If true, the request blocks until the operation is complete. If false, Elasticsearch performs some preflight checks, launches the request, and returns a task ID that you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at .tasks/task/${taskId}.

application/json

Body

  • max_docs number

    The maximum number of documents to update.

  • query object

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    External documentation
  • script object
    Hide script attributes Show script attributes object
    • source string | object

      One of:
    • id string
    • params object

      Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

      Hide params attribute Show params attribute object
      • * object Additional properties
    • lang string

      Any of:

      Values are painless, expression, mustache, or java.

    • options object
      Hide options attribute Show options attribute object
      • * string Additional properties
  • slice object
    Hide slice attributes Show slice attributes object
    • field string

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • id string Required
    • max number Required
  • conflicts string

    Values are abort or proceed.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • batches number

      The number of scroll responses pulled back by the update by query.

    • failures array[object]

      Array of failures if there were any unrecoverable errors during the process. If this is non-empty then the request ended because of those failures. Update by query is implemented using batches. Any failure causes the entire process to end, but all failures in the current batch are collected into the array. You can use the conflicts option to prevent the update by query from ending when version conflicts occur.

      Hide failures attributes Show failures attributes object
    • noops number

      The number of documents that were ignored because the script used for the update by query returned a noop value for ctx.op.

    • deleted number

      The number of documents that were successfully deleted.

    • requests_per_second number

      The number of requests per second effectively run during the update by query.

    • retries object
      Hide retries attributes Show retries attributes object
      • bulk number Required

        The number of bulk actions retried.

    • timed_out boolean

      If true, some requests timed out during the update by query.

    • took number

      Time unit for milliseconds

    • total number

      The number of documents that were successfully processed.

    • updated number

      The number of documents that were successfully updated.

    • version_conflicts number

      The number of version conflicts that the update by query hit.

    • throttled string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • throttled_millis number

      Time unit for milliseconds

    • throttled_until string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • throttled_until_millis number

      Time unit for milliseconds

POST /{index}/_update_by_query
curl \
 --request POST 'http://api.example.com/{index}/_update_by_query' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"query\": { \n    \"term\": {\n      \"user.id\": \"kimchy\"\n    }\n  }\n}"'
Run `POST my-index-000001/_update_by_query?conflicts=proceed` to update documents that match a query.
{
  "query": { 
    "term": {
      "user.id": "kimchy"
    }
  }
}
Run `POST my-index-000001/_update_by_query` with a script to update the document source. It increments the `count` field for all documents with a `user.id` of `kimchy` in `my-index-000001`.
{
  "script": {
    "source": "ctx._source.count++",
    "lang": "painless"
  },
  "query": {
    "term": {
      "user.id": "kimchy"
    }
  }
}
Run `POST my-index-000001/_update_by_query` to slice an update by query manually. Provide a slice ID and total number of slices to each request.
{
  "slice": {
    "id": 0,
    "max": 2
  },
  "script": {
    "source": "ctx._source['extra'] = 'test'"
  }
}
Run `POST my-index-000001/_update_by_query?refresh&slices=5` to use automatic slicing. It automatically parallelizes using sliced scroll to slice on `_id`.
{
  "script": {
    "source": "ctx._source['extra'] = 'test'"
  }
}
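A response example is not shown above. As an illustrative sketch (the counts are hypothetical), a successful `_update_by_query` response mirrors the delete by query response, with `updated` reported instead of `deleted`:
{
  "took": 147,
  "timed_out": false,
  "total": 120,
  "updated": 120,
  "deleted": 0,
  "batches": 1,
  "version_conflicts": 0,
  "noops": 0,
  "retries": {
    "bulk": 0,
    "search": 0
  },
  "throttled_millis": 0,
  "requests_per_second": -1.0,
  "throttled_until_millis": 0,
  "failures": []
}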





























EQL

Event Query Language (EQL) is a query language for event-based time series data, such as logs, metrics, and traces.

Learn more about EQL search

Get async EQL search results Added in 7.9.0

GET /_eql/search/{id}

Get the current status and available results for an async EQL search or a stored synchronous EQL search.

Path parameters

  • id string Required

    Identifier for the search.

Query parameters

  • keep_alive string

    Period for which the search and its results are stored on the cluster. Defaults to the keep_alive value set by the search’s EQL search API request.

  • wait_for_completion_timeout string

    Timeout duration to wait for the request to finish. Defaults to no timeout, meaning the request waits for complete search results.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • is_partial boolean

      If true, the response does not contain complete search results.

    • is_running boolean

      If true, the search request is still executing.

    • took number

      Time unit for milliseconds

    • timed_out boolean

      If true, the request timed out before completion.

    • hits object Required
      Hide hits attributes Show hits attributes object
      • total object
        Hide total attributes Show total attributes object
      • events array[object]

        Contains events matching the query. Each object represents a matching event.

        Hide events attributes Show events attributes object
        • _index string Required
        • _id string Required
        • _source object Required

          Original JSON body passed for the event at index time.

        • missing boolean

          Set to true for events in a timespan-constrained sequence that do not meet a given condition.

        • fields object
          Hide fields attribute Show fields attribute object
          • * array[object] Additional properties
      • sequences array[object]

        Contains event sequences matching the query. Each object represents a matching sequence. This parameter is only returned for EQL queries containing a sequence.

        Hide sequences attributes Show sequences attributes object
        • events array[object] Required

          Contains events matching the query. Each object represents a matching event.

          Hide events attributes Show events attributes object
          • _index string Required
          • _id string Required
          • _source object Required

            Original JSON body passed for the event at index time.

          • missing boolean

            Set to true for events in a timespan-constrained sequence that do not meet a given condition.

          • fields object
        • join_keys array[object]

          Shared field values used to constrain matches in the sequence. These are defined using the by keyword in the EQL query syntax.

    • shard_failures array[object]

      Contains information about shard failures (if any), in case allow_partial_search_results=true

      Hide shard_failures attributes Show shard_failures attributes object
GET /_eql/search/{id}
curl \
 --request GET 'http://api.example.com/_eql/search/{id}' \
 --header "Authorization: $API_KEY"































































Run multiple Fleet searches Technical preview

POST /{index}/_fleet/_fleet_msearch

Run several Fleet searches with a single API request. The API follows the same structure as the multi search API. However, similar to the Fleet search API, it supports the wait_for_checkpoints parameter.

Path parameters

  • index string Required

    A single target to search. If the target is an index alias, it must resolve to a single index.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • ccs_minimize_roundtrips boolean

    If true, network roundtrips between the coordinating node and remote clusters are minimized for cross-cluster search requests.

  • expand_wildcards string | array[string]

    Type of index that wildcard expressions can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • ignore_throttled boolean

    If true, concrete, expanded or aliased indices are ignored when frozen.

  • ignore_unavailable boolean

    If true, missing or closed indices are not included in the response.

  • max_concurrent_searches number

    Maximum number of concurrent searches the multi search API can execute.

  • max_concurrent_shard_requests number

    Maximum number of concurrent shard requests that each sub-search request executes per node.

  • pre_filter_shard_size number

    Defines a threshold that enforces a pre-filter roundtrip to prefilter search shards based on query rewriting if the number of shards the search request expands to exceeds the threshold. This filter roundtrip can limit the number of shards significantly if, for instance, a shard cannot match any documents based on its rewrite method (for example, if date filters are mandatory to match but the shard bounds and the query are disjoint).

  • search_type string

    Indicates whether global term and document frequencies should be used when scoring returned documents.

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • rest_total_hits_as_int boolean

    If true, hits.total is returned as an integer in the response. Defaults to false, which returns an object.

  • typed_keys boolean

    Specifies whether aggregation and suggester names should be prefixed by their respective types in the response.

  • wait_for_checkpoints array[number]

    A comma-separated list of checkpoints. When configured, the search API will only be executed on a shard after the relevant checkpoint has become visible for search. Defaults to an empty list, which causes Elasticsearch to immediately execute the search.

  • allow_partial_search_results boolean

    If true, returns partial results if there are shard request timeouts or shard failures. If false, returns an error with no partial results. Defaults to the configured cluster setting search.default_allow_partial_results, which is true by default.

application/json

Body object Required

One of:

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
POST /{index}/_fleet/_fleet_msearch
curl \
 --request POST 'http://api.example.com/{index}/_fleet/_fleet_msearch' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '[{"allow_no_indices":true,"expand_wildcards":"string","ignore_unavailable":true,"index":"string","preference":"string","request_cache":true,"routing":"string","search_type":"query_then_fetch","ccs_minimize_roundtrips":true,"allow_partial_search_results":true,"ignore_throttled":true}]'








Graph explore

The graph explore API enables you to extract and summarize information about the documents and terms in an Elasticsearch data stream or index.

Get started with Graph





















Delete component templates Added in 7.8.0

DELETE /_component_template/{name}

Delete one or more component templates. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.

Path parameters

  • name string | array[string] Required

    Comma-separated list or wildcard expression of component template names used to limit the request.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_component_template/{name}
curl \
 --request DELETE 'http://api.example.com/_component_template/{name}' \
 --header "Authorization: $API_KEY"








Import a dangling index Added in 7.9.0

POST /_dangling/{index_uuid}

If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling. For example, this can happen if you delete more than cluster.indices.tombstones.size indices while an Elasticsearch node is offline.

Path parameters

  • index_uuid string Required

    The UUID of the index to import. Use the get dangling indices API to locate the UUID.

Query parameters

  • accept_data_loss boolean Required

    This parameter must be set to true to import a dangling index. Because Elasticsearch cannot know where the dangling index data came from or determine which shard copies are fresh and which are stale, it cannot guarantee that the imported data represents the latest state of the index when it was last in the cluster.

  • master_timeout string

    Specify the timeout for connection to the master node.

  • timeout string

    Explicit operation timeout

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_dangling/{index_uuid}
curl \
 --request POST 'http://api.example.com/_dangling/{index_uuid}?accept_data_loss=true' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `POST /_dangling/zmM4e0JtBkeUjiHD-MihPQ?accept_data_loss=true`.
{
  "acknowledged": true
}












Get tokens from text analysis

GET /_analyze

The analyze API performs analysis on a text string and returns the resulting tokens.

Generating an excessive amount of tokens may cause a node to run out of memory. The index.analyze.max_token_count setting enables you to limit the number of tokens that can be produced. If more than this limit of tokens is generated, an error occurs. The _analyze endpoint without a specified index will always use 10000 as its limit.
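For example (a sketch; the index name and value are illustrative), you can raise this limit for a specific index by defining the setting when the index is created:
curl \
 --request PUT 'http://api.example.com/analyze_sample' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"settings": {"index.analyze.max_token_count": 20000}}'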

External documentation

Query parameters

  • index string

    Index used to derive the analyzer. If specified, the analyzer or field parameter overrides this value. If no index is specified or the index does not have a default analyzer, the analyze API uses the standard analyzer.

application/json

Body

Responses

GET /_analyze
curl \
 --request GET 'http://api.example.com/_analyze' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"analyzer\": \"standard\",\n  \"text\": \"this is a test\"\n}"'
You can apply any of the built-in analyzers to the text string without specifying an index.
{
  "analyzer": "standard",
  "text": "this is a test"
}
If the text parameter is provided as an array of strings, it is analyzed as a multi-value field.
{
  "analyzer": "standard",
  "text": [
    "this is a test",
    "the second text"
  ]
}
You can test a custom transient analyzer built from tokenizers, token filters, and char filters. Token filters use the filter parameter.
{
  "tokenizer": "keyword",
  "filter": [
    "lowercase"
  ],
  "char_filter": [
    "html_strip"
  ],
  "text": "this is a <b>test</b>"
}
Custom tokenizers, token filters, and character filters can be specified in the request body.
{
  "tokenizer": "whitespace",
  "filter": [
    "lowercase",
    {
      "type": "stop",
      "stopwords": [
        "a",
        "is",
        "this"
      ]
    }
  ],
  "text": "this is a test"
}
Run `GET /analyze_sample/_analyze` to run an analysis on the text using the default index analyzer associated with the `analyze_sample` index. Alternatively, the analyzer can be derived based on a field mapping.
{
  "field": "obj1.field1",
  "text": "this is a test"
}
Run `GET /analyze_sample/_analyze` and supply a normalizer for a keyword field if there is a normalizer associated with the specified index.
{
  "normalizer": "my_normalizer",
  "text": "BaR"
}
If you want to get more advanced details, set `explain` to `true`. It will output all token attributes for each token. You can filter token attributes you want to output by setting the `attributes` option. NOTE: The format of the additional detail information is labelled as experimental in Lucene and it may change in the future.
{
  "tokenizer": "standard",
  "filter": [
    "snowball"
  ],
  "text": "detailed output",
  "explain": true,
  "attributes": [
    "keyword"
  ]
}
Response examples (200)
A successful response for an analysis with `explain` set to `true`.
{
  "detail": {
    "custom_analyzer": true,
    "charfilters": [],
    "tokenizer": {
      "name": "standard",
      "tokens": [
        {
          "token": "detailed",
          "start_offset": 0,
          "end_offset": 8,
          "type": "<ALPHANUM>",
          "position": 0
        },
        {
          "token": "output",
          "start_offset": 9,
          "end_offset": 15,
          "type": "<ALPHANUM>",
          "position": 1
        }
      ]
    },
    "tokenfilters": [
      {
        "name": "snowball",
        "tokens": [
          {
            "token": "detail",
            "start_offset": 0,
            "end_offset": 8,
            "type": "<ALPHANUM>",
            "position": 0,
            "keyword": false
          },
          {
            "token": "output",
            "start_offset": 9,
            "end_offset": 15,
            "type": "<ALPHANUM>",
            "position": 1,
            "keyword": false
          }
        ]
      }
    ]
  }
}












Clear the cache

POST /_cache/clear

Clear the cache of one or more indices. For data streams, the API clears the caches of the stream's backing indices.

By default, the clear cache API clears all caches. To clear only specific caches, use the fielddata, query, or request parameters. To clear the cache only of specific fields, use the fields parameter.

Query parameters

  • index string | array[string]

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • fielddata boolean

    If true, clears the fields cache. Use the fields parameter to clear the cache of specific fields only.

  • fields string | array[string]

    Comma-separated list of field names used to limit the fielddata parameter.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • query boolean

    If true, clears the query cache.

  • request boolean

    If true, clears the request cache.

Responses

POST /_cache/clear
curl \
 --request POST 'http://api.example.com/_cache/clear' \
 --header "Authorization: $API_KEY"




































Create or update an alias

PUT /{index}/_alias/{name}

Adds a data stream or index to an alias.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams or indices to add. Supports wildcards (*). Wildcard patterns that match both data streams and indices return an error.

  • name string Required

    Alias to update. If the alias doesn’t exist, the request creates it. Index alias names support date math.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

application/json

Body

  • filter object

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    External documentation
  • is_write_index boolean

    If true, sets the write index or data stream for the alias. If an alias points to multiple indices or data streams and is_write_index isn’t set, the alias rejects write requests. If an index alias points to one index and is_write_index isn’t set, the index automatically acts as the write index. Data stream aliases don’t automatically set a write data stream, even if the alias points to one data stream.

  • routing string

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /{index}/_alias/{name}
curl \
 --request PUT 'http://api.example.com/{index}/_alias/{name}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"actions\": [\n    {\n      \"add\": {\n        \"index\": \"my-data-stream\",\n        \"alias\": \"my-alias\"\n      }\n    }\n  ]\n}"'
Request example
{
  "actions": [
    {
      "add": {
        "index": "my-data-stream",
        "alias": "my-alias"
      }
    }
  ]
}
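The example above uses the update-aliases action syntax. As a sketch of a body that uses the parameters documented for this endpoint directly (the index, alias, field, and routing values are illustrative), you can create a filtered alias with custom routing like this:
curl \
 --request PUT 'http://api.example.com/my-index-000001/_alias/my-alias' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"filter": {"term": {"user.id": "kimchy"}}, "routing": "1"}'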




























Get index templates Added in 7.9.0

GET /_index_template/{name}

Get information about one or more index templates.

Path parameters

  • name string Required

    Comma-separated list of index template names used to limit the request. Wildcard (*) expressions are supported.

Query parameters

  • local boolean

    If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.

  • flat_settings boolean

    If true, returns settings in flat format.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • include_defaults boolean

    If true, returns all relevant default configurations for the index template.

Responses

GET /_index_template/{name}
curl \
 --request GET 'http://api.example.com/_index_template/{name}' \
 --header "Authorization: $API_KEY"
















Get legacy index templates Deprecated

GET /_template/{name}

Get information about one or more index templates.

IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

External documentation

Path parameters

  • name string | array[string] Required

    Comma-separated list of index template names used to limit the request. Wildcard (*) expressions are supported. To return all index templates, omit this parameter or use a value of _all or *.

Query parameters

  • flat_settings boolean

    If true, returns settings in flat format.

  • local boolean

    If true, the request retrieves information from the local node only.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

Responses

GET /_template/{name}
curl \
 --request GET 'http://api.example.com/_template/{name}' \
 --header "Authorization: $API_KEY"
































Flush data streams or indices

GET /_flush

Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.

After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.

It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • force boolean

    If true, the request forces a flush even if there are no changes to commit to the index.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • wait_if_ongoing boolean

    If true, the flush operation blocks until execution when another flush operation is running. If false, Elasticsearch returns an error if you request a flush when another flush operation is running.

Responses

GET /_flush
curl \
 --request GET 'http://api.example.com/_flush' \
 --header "Authorization: $API_KEY"








Flush data streams or indices

POST /{index}/_flush

Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.

After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.

It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases to flush. Supports wildcards (*). To flush all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • force boolean

    If true, the request forces a flush even if there are no changes to commit to the index.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • wait_if_ongoing boolean

    If true, the flush operation blocks until execution when another flush operation is running. If false, Elasticsearch returns an error if you request a flush when another flush operation is running.

Responses

POST /{index}/_flush
curl \
 --request POST 'http://api.example.com/{index}/_flush' \
 --header "Authorization: $API_KEY"
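As an illustrative sketch (the index name my-index-000001 is a placeholder), flushing a single index and a typical shards summary in a successful response might look like this:

POST /my-index-000001/_flush

{
  "_shards": {
    "total": 2,
    "successful": 2,
    "failed": 0
  }
}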


Refresh an index

GET /{index}/_refresh

A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.

By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the index.refresh_interval setting.

Refresh requests are synchronous and do not return a response until the refresh operation completes.

Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.

If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's refresh=wait_for query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

Responses

GET /{index}/_refresh
curl \
 --request GET 'http://api.example.com/{index}/_refresh' \
 --header "Authorization: $API_KEY"
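For illustration (my-index-000001 is a placeholder), the first request below refreshes a single index explicitly; the second shows the refresh=wait_for pattern recommended above, where the indexing call simply waits for the next periodic refresh instead of forcing one:

GET /my-index-000001/_refresh

PUT /my-index-000001/_doc/1?refresh=wait_for
{
  "title": "example document"
}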


Validate a query Added in 1.3.0

POST /_validate/query

Validates a query without running it.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • all_shards boolean

    If true, the validation is executed on all shards instead of one random shard per index.

  • analyzer string

    Analyzer to use for the query string. This parameter can only be used when the q query string parameter is specified.

  • analyze_wildcard boolean

    If true, wildcard and prefix queries are analyzed.

  • default_operator string

    The default operator for query string query: AND or OR.

    Values are and, AND, or, or OR.

  • df string

    Field to use as default where no field prefix is given in the query string. This parameter can only be used when the q query string parameter is specified.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • explain boolean

    If true, the response returns detailed information if an error has occurred.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored.

  • rewrite boolean

    If true, returns a more detailed explanation showing the actual Lucene query that will be executed.

  • q string

    Query in the Lucene query string syntax.

application/json

Body

  • query object

    The query definition to validate, specified using the Query DSL.

Responses

POST /_validate/query
curl \
 --request POST 'http://api.example.com/_validate/query' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"query":{}}'
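A hedged sketch of a more concrete call (user.id:kimchy is a placeholder query string): validating a Lucene query string via the q parameter; a passing response is along these lines:

POST /_validate/query?q=user.id:kimchy

{
  "valid": true,
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  }
}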


Start the ILM plugin Added in 6.6.0

POST /_ilm/start

Start the index lifecycle management plugin if it is currently stopped. ILM is started automatically when the cluster is formed. Restarting ILM is necessary only when it has been stopped using the stop ILM API.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_ilm/start
curl \
 --request POST 'http://api.example.com/_ilm/start' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response when starting the ILM plugin.
{
  "acknowledged": true
}


Create an Amazon Bedrock inference endpoint Added in 8.12.0

PUT /_inference/{task_type}/{amazonbedrock_inference_id}

Creates an inference endpoint to perform an inference task with the amazonbedrock service.


You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.

Path parameters

  • task_type string Required

    The type of the inference task that the model will perform.

    Values are completion or text_embedding.

  • amazonbedrock_inference_id string Required

    The unique identifier of the inference endpoint.

application/json

Body

  • Hide chunking_settings attributes Show chunking_settings attributes object
    • The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

    • The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

    • strategy string

      The chunking strategy: sentence or word.

  • service string Required

    Value is amazonbedrock.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • access_key string Required

      A valid AWS access key that has permissions to use Amazon Bedrock and access to models for inference requests.

    • model string Required

      The base model ID or an ARN to a custom model based on a foundational model. The base model IDs can be found in the Amazon Bedrock documentation. Note that the model ID must be available for the provider chosen and your IAM user must have access to the model.

      External documentation
    • provider string

      The model provider for your deployment. Note that some providers may support only certain task types. Supported providers include:

      • amazontitan - available for text_embedding and completion task types
      • anthropic - available for completion task type only
      • ai21labs - available for completion task type only
      • cohere - available for text_embedding and completion task types
      • meta - available for completion task type only
      • mistral - available for completion task type only
    • region string Required

      The region that your model or ARN is deployed in. The list of available regions per model can be found in the Amazon Bedrock documentation.

      External documentation
    • Hide rate_limit attribute Show rate_limit attribute object
    • secret_key string Required

      A valid AWS secret key that is paired with the access_key. For information about creating and managing access and secret keys, refer to the AWS documentation.

      External documentation
  • Hide task_settings attributes Show task_settings attributes object
    • For a completion task, it sets the maximum number for the output tokens to be generated.

    • For a completion task, it is a number between 0.0 and 1.0 that controls the apparent creativity of the results. At temperature 0.0 the model is most deterministic, at temperature 1.0 most random. It should not be used if top_p or top_k is specified.

    • top_k number

      For a completion task, it limits samples to the top-K most likely words, balancing coherence and variability. It is only available for anthropic, cohere, and mistral providers. It is an alternative to temperature; it should not be used if temperature is specified.

    • top_p number

      For a completion task, it is a number in the range of 0.0 to 1.0, to eliminate low-probability tokens. Top-p uses nucleus sampling to select top tokens whose sum of likelihoods does not exceed a certain value, ensuring both variety and coherence. It is an alternative to temperature; it should not be used if temperature is specified.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • Hide chunking_settings attributes Show chunking_settings attributes object
      • The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      • The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      • strategy string

        The chunking strategy: sentence or word.

    • service string Required

      The service type

    • service_settings object Required
    • inference_id string Required

      The inference Id

    • task_type string Required

      Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

PUT /_inference/{task_type}/{amazonbedrock_inference_id}
curl \
 --request PUT 'http://api.example.com/_inference/{task_type}/{amazonbedrock_inference_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n    \"service\": \"amazonbedrock\",\n    \"service_settings\": {\n        \"access_key\": \"AWS-access-key\",\n        \"secret_key\": \"AWS-secret-key\",\n        \"region\": \"us-east-1\",\n        \"provider\": \"amazontitan\",\n        \"model\": \"amazon.titan-embed-text-v2:0\"\n    }\n}"'
Request examples
Run `PUT _inference/text_embedding/amazon_bedrock_embeddings` to create an inference endpoint that performs a text embedding task.
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-embed-text-v2:0"
    }
}
Run `PUT _inference/completion/amazon_bedrock_completion` to create an inference endpoint that performs a completion task.
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-text-express-v1"
    }
}
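As a further hedged sketch, the completion task settings described above can also be supplied at creation time. This assumes the output-token limit attribute is named max_new_tokens and uses arbitrary values; temperature is set on its own because it should not be combined with top_p or top_k:

PUT _inference/completion/amazon_bedrock_completion
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-text-express-v1"
    },
    "task_settings": {
        "max_new_tokens": 256,
        "temperature": 0.2
    }
}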


Create a JinaAI inference endpoint Added in 8.18.0

PUT /_inference/{task_type}/{jinaai_inference_id}

Create an inference endpoint to perform an inference task with the jinaai service.

To review the available rerank models, refer to https://jina.ai/reranker. To review the available text_embedding models, refer to https://jina.ai/embeddings/.

Path parameters

  • task_type string Required

    The type of the inference task that the model will perform.

    Values are rerank or text_embedding.

  • jinaai_inference_id string Required

    The unique identifier of the inference endpoint.

application/json

Body

  • Hide chunking_settings attributes Show chunking_settings attributes object
    • The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

    • The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

    • strategy string

      The chunking strategy: sentence or word.

  • service string Required

    Value is jinaai.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key of your JinaAI account.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • model_id string

      The name of the model to use for the inference task. For a rerank task, it is required. For a text_embedding task, it is optional.

    • Hide rate_limit attribute Show rate_limit attribute object
    • Values are cosine, dot_product, or l2_norm.

  • Hide task_settings attributes Show task_settings attributes object
    • For a rerank task, return the doc text within the results.

    • task string

      Values are classification, clustering, ingest, or search.

    • top_n number

      For a rerank task, the number of most relevant documents to return. It defaults to the number of the documents. If this inference endpoint is used in a text_similarity_reranker retriever query and top_n is set, it must be greater than or equal to rank_window_size in the query.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • Hide chunking_settings attributes Show chunking_settings attributes object
      • The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      • The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      • strategy string

        The chunking strategy: sentence or word.

    • service string Required

      The service type

    • service_settings object Required
    • inference_id string Required

      The inference Id

    • task_type string Required

      Values are text_embedding or rerank.

PUT /_inference/{task_type}/{jinaai_inference_id}
curl \
 --request PUT 'http://api.example.com/_inference/{task_type}/{jinaai_inference_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n    \"service\": \"jinaai\",\n    \"service_settings\": {\n        \"model_id\": \"jina-embeddings-v3\",\n        \"api_key\": \"JinaAi-Api-key\"\n    }\n}"'
Request examples
Run `PUT _inference/text_embedding/jinaai-embeddings` to create an inference endpoint for text embedding tasks using the JinaAI service.
{
    "service": "jinaai",
    "service_settings": {
        "model_id": "jina-embeddings-v3",
        "api_key": "JinaAi-Api-key"
    }
}
Run `PUT _inference/rerank/jinaai-rerank` to create an inference endpoint for rerank tasks using the JinaAI service.
{
    "service": "jinaai",
    "service_settings": {
        "api_key": "JinaAI-Api-key",
        "model_id": "jina-reranker-v2-base-multilingual"
    },
    "task_settings": {
        "top_n": 10,
        "return_documents": true
    }
}
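To illustrate the top_n note above, a hedged sketch of a search that uses the jinaai-rerank endpoint through a text_similarity_reranker retriever (the index and field names are placeholders); rank_window_size stays at or below the endpoint's top_n of 10:

GET /my-index-000001/_search
{
    "retriever": {
        "text_similarity_reranker": {
            "retriever": {
                "standard": {
                    "query": { "match": { "text": "multilingual reranking" } }
                }
            },
            "field": "text",
            "inference_id": "jinaai-rerank",
            "inference_text": "multilingual reranking",
            "rank_window_size": 10
        }
    }
}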












Create a Watsonx inference endpoint Added in 8.16.0

PUT /_inference/{task_type}/{watsonx_inference_id}

Create an inference endpoint to perform an inference task with the watsonxai service. You need an IBM Cloud Databases for Elasticsearch deployment to use the watsonxai inference service. You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.

Path parameters

  • task_type string Required

    The task type. The only valid task type for the model to perform is text_embedding.

    Value is text_embedding.

  • watsonx_inference_id string Required

    The unique identifier of the inference endpoint.

application/json

Body

  • service string Required

    Value is watsonxai.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key of your Watsonx account. You can find your Watsonx API keys or you can create a new one on the API keys page.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • api_version string Required

      A version parameter that takes a version date in the format of YYYY-MM-DD. For the active version date parameters, refer to the Watsonx documentation.

      External documentation
    • model_id string Required

      The name of the model to use for the inference task. Refer to the IBM Embedding Models section in the Watsonx documentation for the list of available text embedding models.

      External documentation
    • project_id string Required

      The identifier of the IBM Cloud project to use for the inference task.

    • Hide rate_limit attribute Show rate_limit attribute object
    • url string Required

      The URL of the inference endpoint that you created on Watsonx.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • Hide chunking_settings attributes Show chunking_settings attributes object
      • The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      • The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      • strategy string

        The chunking strategy: sentence or word.

    • service string Required

      The service type

    • service_settings object Required
    • inference_id string Required

      The inference Id

    • task_type string Required

      Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

PUT /_inference/{task_type}/{watsonx_inference_id}
curl \
 --request PUT 'http://api.example.com/_inference/{task_type}/{watsonx_inference_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"service\": \"watsonxai\",\n  \"service_settings\": {\n      \"api_key\": \"Watsonx-API-Key\", \n      \"url\": \"Watsonx-URL\", \n      \"model_id\": \"ibm/slate-30m-english-rtrvr\",\n      \"project_id\": \"IBM-Cloud-ID\", \n      \"api_version\": \"2024-03-14\"\n  }\n}"'
Request example
Run `PUT _inference/text_embedding/watsonx-embeddings` to create a Watsonx inference endpoint that performs a text embedding task.
{
  "service": "watsonxai",
  "service_settings": {
      "api_key": "Watsonx-API-Key",
      "url": "Watsonx-URL",
      "model_id": "ibm/slate-30m-english-rtrvr",
      "project_id": "IBM-Cloud-ID",
      "api_version": "2024-03-14"
  }
}
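Once the endpoint exists, it can be exercised with the perform inference API; a minimal sketch with an arbitrary input string:

POST _inference/text_embedding/watsonx-embeddings
{
  "input": "Elasticsearch with Watsonx text embeddings"
}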


Simulate data ingestion Technical preview

GET /_ingest/{index}/_simulate

Run ingest pipelines against a set of provided documents, optionally with substitute pipeline definitions, to simulate ingesting data into an index.

This API is meant to be used for troubleshooting or pipeline development, as it does not actually index any data into Elasticsearch.

The API runs the default and final pipeline for that index against a set of documents provided in the body of the request. If a pipeline contains a reroute processor, it follows that reroute processor to the new index, running that index's pipelines as well, just as a non-simulated ingest would. No data is indexed into Elasticsearch. Instead, the transformed document is returned, along with the list of pipelines that have been run and the name of the index where the document would have been indexed if this were not a simulation. The transformed document is validated against the mappings that would apply to this index, and any validation error is reported in the result.

This API differs from the simulate pipeline API in that you specify a single pipeline for that API, and it runs only that one pipeline. The simulate pipeline API is more useful for developing a single pipeline, while the simulate ingest API is more useful for troubleshooting the interaction of the various pipelines that get applied when ingesting into an index.

By default, the pipeline definitions that are currently in the system are used. However, you can supply substitute pipeline definitions in the body of the request. These will be used in place of the pipeline definitions that are already in the system. This can be used to replace existing pipeline definitions or to create new ones. The pipeline substitutions are used only within this request.
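As a hedged sketch of the request shape (my-index, my-pipeline, and the sample document are placeholders; the endpoint accepts both GET and POST), the body supplies the documents to simulate and, optionally, substitute pipeline definitions:

POST /_ingest/my-index/_simulate
{
  "docs": [
    {
      "_id": "1",
      "_source": { "message": "hello world" }
    }
  ],
  "pipeline_substitutions": {
    "my-pipeline": {
      "processors": [
        { "uppercase": { "field": "message" } }
      ]
    }
  }
}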

Path parameters

  • index string Required

    The index to simulate ingesting into. This value can be overridden by specifying an index on each document. If you specify this parameter in the request path, it is used for any documents that do not explicitly specify an index argument.

Query parameters

  • pipeline string

    The pipeline to use as the default pipeline. This value can be used to override the default pipeline of the index.

application/json

Body Required

  • docs array[object] Required

    Sample documents to test in the pipeline.

    Hide docs attributes Show docs attributes object
  • A map of component template names to substitute component template definition objects.

    Hide component_template_substitutions attribute Show component_template_substitutions attribute object
  • A map of index template names to substitute index template definition objects.

    Hide index_template_substitutions attribute Show index_template_substitutions attribute object
  • Hide mapping_addition attributes Show mapping_addition attributes object
    • Hide all_field attributes Show all_field attributes object
    • dynamic string

      Values are strict, runtime, true, or false.

    • dynamic_templates array[object]
    • Hide _field_names attribute Show _field_names attribute object
    • Hide index_field attribute Show index_field attribute object
    • _meta object
      Hide _meta attribute Show _meta attribute object
      • * object Additional properties
    • _routing object
      Hide _routing attribute Show _routing attribute object
    • _size object
      Hide _size attribute Show _size attribute object
    • _source object
      Hide _source attributes Show _source attributes object
    • runtime object
      Hide runtime attribute Show runtime attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
            Hide * attribute Show * attribute object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • fetch_fields array[object]

          For type lookup

          Hide fetch_fields attributes Show fetch_fields attributes object
          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • format string
        • format string

          A custom format for date type runtime fields.

        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • script object
          Hide script attributes Show script attributes object
          • source string | object

            One of:
          • id string
          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            Hide params attribute Show params attribute object
            • * object Additional properties
          • lang string

            Any of:

            Values are painless, expression, mustache, or java.

          • options object
            Hide options attribute Show options attribute object
            • * string Additional properties
        • type string Required

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • enabled boolean
    • Values are true or false.

    • Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
  • Pipelines to test. If you don’t specify the pipeline request path parameter, this parameter is required. If you specify both this and the request path parameter, the API only uses the request path parameter.

    Hide pipeline_substitutions attribute Show pipeline_substitutions attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • Description of the ingest pipeline.

      • on_failure array[object]

        Processors to run immediately after a processor failure.

        Hide on_failure attributes Show on_failure attributes object
        • append object
          Hide append attributes Show append attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If false, the processor does not append values already present in the field.

        • Hide attachment attributes Show attachment attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • The number of chars being used for extraction to prevent huge fields. Use -1 for no limit.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • properties array[string]

            Array of properties to select to be stored. Can be content, title, name, author, keywords, date, content_type, content_length, language.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true, the binary field will be removed from the document

          • Field containing the name of the resource to decode. If specified, the processor passes this resource name to the underlying Tika library to enable Resource Name Based Detection.

        • bytes object
          Hide bytes attributes Show bytes attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • circle object
          Hide circle attributes Show circle attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • error_distance number Required

            The difference between the resulting inscribed distance from center to side and the circle’s radius (measured in meters for geo_shape, unit-less for shape).

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • shape_type string Required

            Values are geo_shape or shape.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide community_id attributes Show community_id attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • seed number

            Seed for the community ID hash. Must be between 0 and 65535 (inclusive). The seed can prevent hash collisions between network domains, such as a staging and production network that use the same addressing scheme.

          • If true and any required fields are missing, the processor quietly exits without modifying the document.

        • convert object
          Hide convert attributes Show convert attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • type string Required

            Values are integer, long, double, float, boolean, ip, string, or auto.

        • csv object
          Hide csv attributes Show csv attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • Value used to fill empty fields. Empty fields are skipped if this is not provided. An empty field is one with no value (2 consecutive separators) or empty quotes ("").

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • quote string

            Quote used in CSV, has to be a single-character string.

          • Separator used in CSV, has to be a single-character string.

          • target_fields string | array[string] Required
          • trim boolean

            Trim whitespaces in unquoted fields.

        • date object
          Hide date attributes Show date attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • formats array[string] Required

            An array of the expected date formats. Can be a java time pattern or one of the following formats: ISO8601, UNIX, UNIX_MS, or TAI64N.

          • locale string

            The locale to use when parsing the date, relevant when parsing month names or week days. Supports template snippets.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • timezone string

            The timezone to use when parsing the date. Supports template snippets.

          • The format to use when writing the date to target_field. Must be a valid java time pattern.

        • Hide date_index_name attributes Show date_index_name attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • date_formats array[string]

            An array of the expected date formats for parsing dates / timestamps in the document being preprocessed. Can be a java time pattern or one of the following formats: ISO8601, UNIX, UNIX_MS, or TAI64N.

          • date_rounding string Required

            How to round the date when formatting the date into the index name. Valid values are: y (year), M (month), w (week), d (day), h (hour), m (minute) and s (second). Supports template snippets.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • The format to be used when printing the parsed date into the index name. A valid java time pattern is expected here. Supports template snippets.

          • A prefix of the index name to be prepended before the printed date. Supports template snippets.

          • locale string

            The locale to use when parsing the date from the document being preprocessed, relevant when parsing month names or week days.

          • timezone string

            The timezone to use when parsing the date and when date math resolves expressions into concrete index names.

        • dissect object
          Hide dissect attributes Show dissect attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • The character(s) that separate the appended fields.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • pattern string Required

            The pattern to apply to the field.

        • Hide dot_expander attributes Show dot_expander attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • override boolean

            Controls the behavior when there is already an existing nested object that conflicts with the expanded field. When false, the processor will merge conflicts by combining the old and the new values into an array. When true, the value from the expanded field will overwrite the existing value.

          • path string

            The field that contains the field to expand. Only required if the field to expand is part of another object field, because the field option can only understand leaf fields.

        • drop object
          Hide drop attributes Show drop attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

        • enrich object
          Hide enrich attributes Show enrich attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • The maximum number of matched documents to include under the configured target field. The target_field will be turned into a json array if max_matches is higher than 1, otherwise target_field will become a json object. In order to avoid documents getting too large, the maximum allowed value is 128.

          • override boolean

            If true, the processor updates fields that already have a pre-existing non-null value. When set to false, such fields are not touched.

          • policy_name string Required

            The name of the enrich policy to use.

          • Values are intersects, disjoint, within, or contains.

          • target_field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • fail object
          Hide fail attributes Show fail attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • message string Required

            The error message thrown by the processor. Supports template snippets.

        • Hide fingerprint attributes Show fingerprint attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • fields string | array[string] Required
          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • salt string

            Salt value for the hash function.

          • method string

            Values are MD5, SHA-1, SHA-256, SHA-512, or MurmurHash3.

          • If true, the processor ignores any missing fields. If all fields are missing, the processor silently exits without modifying the document.

        • foreach object
          Hide foreach attributes Show foreach attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true, the processor silently exits without changing the document if the field is null or missing.

          • processor object Required
        • Hide ip_location attributes Show ip_location attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • The database filename referring to a database the module ships with (GeoLite2-City.mmdb, GeoLite2-Country.mmdb, or GeoLite2-ASN.mmdb) or a custom database in the ingest-geoip config directory.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • first_only boolean

            If true, only the first found IP location data will be returned, even if the field contains an array.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • properties array[string]

            Controls what properties are added to the target_field based on the IP location lookup.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true (and if ingest.geoip.downloader.eager.download is false), the missing database is downloaded when the pipeline is created. Otherwise, the download is triggered when the pipeline is used as the default_pipeline or final_pipeline in an index.

        • geo_grid object
          Hide geo_grid attributes Show geo_grid attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            The field to interpret as a geo-tile. The field format is determined by the tile_type.

          • tile_type string Required

            Values are geotile, geohex, or geohash.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • Values are geojson or wkt.

        • geoip object
          Hide geoip attributes Show geoip attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • The database filename referring to a database the module ships with (GeoLite2-City.mmdb, GeoLite2-Country.mmdb, or GeoLite2-ASN.mmdb) or a custom database in the ingest-geoip config directory.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • first_only boolean

            If true, only the first found geoip data will be returned, even if the field contains an array.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • properties array[string]

            Controls what properties are added to the target_field based on the geoip lookup.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true (and if ingest.geoip.downloader.eager.download is false), the missing database is downloaded when the pipeline is created. Otherwise, the download is triggered when the pipeline is used as the default_pipeline or final_pipeline in an index.

        • grok object
          Hide grok attributes Show grok attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • Must be disabled or v1. If v1, the processor uses patterns with Elastic Common Schema (ECS) field names.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • A map of pattern-name and pattern tuples defining custom patterns to be used by the current processor. Patterns matching existing names will override the pre-existing definition.

          • patterns array[string] Required

            An ordered list of grok expression to match and extract named captures with. Returns on the first expression in the list that matches.

          • When true, _ingest._grok_match_index will be inserted into your matched document’s metadata with the index into the pattern found in patterns that matched.

        • gsub object
          Hide gsub attributes Show gsub attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • pattern string Required

            The pattern to be replaced.

          • replacement string Required

            The string to replace the matching patterns with.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide html_strip attributes Show html_strip attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide inference attributes Show inference attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • model_id string Required
          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Maps the document field names to the known field names of the model. This mapping takes precedence over any default mappings provided in the model configuration.

          • If true and any of the input fields defined in input_output are missing, then those missing fields are quietly ignored; otherwise a missing field causes a failure. Only applies when using input_output configurations to explicitly list the input fields.

        • join object
          Hide join attributes Show join attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • separator string Required

            The separator character.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • json object
          Hide json attributes Show json attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • Flag that forces the parsed JSON to be added at the top level of the document. target_field must not be set when this option is chosen.

          • Values are replace or merge.

          • When set to true, the JSON parser will not fail if the JSON contains duplicate keys. Instead, the last encountered value for any duplicate key wins.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • kv object
          Hide kv attributes Show kv attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • exclude_keys array[string]

            List of keys to exclude from document.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • field_split string Required

            Regex pattern to use for splitting key-value pairs.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • include_keys array[string]

            List of keys to filter and insert into document. Defaults to including all keys.

          • prefix string

            Prefix to be added to extracted keys.

          • If true, strip brackets (), <>, [] as well as quotes ' and " from extracted values.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • trim_key string

            String of characters to trim from extracted keys.

          • String of characters to trim from extracted values.

          • value_split string Required

            Regex pattern to use for splitting the key from the value within a key-value pair.
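
          For instance, a kv processor that parses key=value pairs from a message field might be configured as follows (field names are illustrative):

          {
            "kv": {
              "field": "message",
              "field_split": " ",
              "value_split": "="
            }
          }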

        • Hide lowercase attributes Show lowercase attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide network_direction attributes Show network_direction attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • internal_networks array[string]

            List of internal networks. Supports IPv4 and IPv6 addresses and ranges in CIDR notation. Also supports the named ranges listed below. These may be constructed with template snippets. Must specify only one of internal_networks or internal_networks_field.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and any required fields are missing, the processor quietly exits without modifying the document.

        • pipeline object
          Hide pipeline attributes Show pipeline attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • name string Required
          • Whether to ignore missing pipelines instead of failing.

        • redact object
          Hide redact attributes Show redact attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • patterns array[string] Required

            A list of grok expressions to match and redact named captures with

          • prefix string

            Start a redacted section with this token

          • suffix string

            End a redacted section with this token

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • If true and the current license does not support running redact processors, then the processor quietly exits without modifying the document

          • If true then ingest metadata _ingest._redact._is_redacted is set to true if the document has been redacted
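
          As a hedged illustration (the field name and grok captures are placeholders), a redact configuration might look like:

          {
            "redact": {
              "field": "message",
              "patterns": [
                "%{IP:client}",
                "%{EMAILADDRESS:email}"
              ]
            }
          }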

        • Hide registered_domain attributes Show registered_domain attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and any required fields are missing, the processor quietly exits without modifying the document.

        • remove object
          Hide remove attributes Show remove attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string | array[string] Required
          • keep string | array[string]
          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

        • rename object
          Hide rename attributes Show rename attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • target_field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.
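
          For example, a minimal rename configuration might be (field names are illustrative):

          {
            "rename": {
              "field": "provider",
              "target_field": "cloud.provider"
            }
          }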

        • reroute object
          Hide reroute attributes Show reroute attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • A static value for the target. Can’t be set when the dataset or namespace option is set.

        • script object
          Hide script attributes Show script attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • id string
          • params object

            Object containing parameters for the script.

        • set object
          Hide set attributes Show set attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and value is a template snippet that evaluates to null or the empty string, the processor quietly exits without modifying the document.

          • The media type for encoding value. Applies only when value is a template snippet. Must be one of application/json, text/plain, or application/x-www-form-urlencoded.

          • override boolean

            If true, the processor updates fields that already have a non-null value. When set to false, such fields are left untouched.

          • value object

            The value to be set for the field. Supports template snippets. May specify only one of value or copy_from.
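
          A minimal set configuration might look like the following (the field name and value are placeholders):

          {
            "set": {
              "field": "environment",
              "value": "production"
            }
          }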

        • Hide set_security_user attributes Show set_security_user attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • properties array[string]

            Controls what user related properties are added to the field.

        • sort object
          Hide sort attributes Show sort attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • order string

            Values are asc or desc.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • split object
          Hide split attributes Show split attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • Preserves empty trailing fields, if any.

          • separator string Required

            A regex which matches the separator, for example, , or \s+.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.
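
          For instance, a split configuration using a whitespace regex separator might be (the field name is illustrative):

          {
            "split": {
              "field": "my_field",
              "separator": "\\s+"
            }
          }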

        • Hide terminate attributes Show terminate attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

        • trim object
          Hide trim attributes Show trim attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide uppercase attributes Show uppercase attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide urldecode attributes Show urldecode attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide uri_parts attributes Show uri_parts attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • If true, the processor copies the unparsed URI to <target_field>.original.

          • If true, the processor removes the field after parsing the URI string. If parsing fails, the processor does not remove the field.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.
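
          As an illustration (field names are placeholders), a uri_parts configuration might look like:

          {
            "uri_parts": {
              "field": "input_field",
              "target_field": "url",
              "keep_original": true
            }
          }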

        • Hide user_agent attributes Show user_agent attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • The name of the file in the config/ingest-user-agent directory containing the regular expressions for parsing the user agent string. Both the directory and the file have to be created before starting Elasticsearch. If not specified, ingest-user-agent will use the regexes.yaml from uap-core it ships with.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • properties array[string]

            Controls what properties are added to target_field.

            Values are name, os, device, original, or version.

          • Extracts device type from the user agent string on a best-effort basis.

      • processors array[object]

        Processors used to perform transformations on documents before indexing. Processors run sequentially in the order specified.

        Hide processors attributes Show processors attributes object
        • append object
          Hide append attributes Show append attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If false, the processor does not append values already present in the field.
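
          For example, a minimal append configuration might be (the field name and value are placeholders):

          {
            "append": {
              "field": "tags",
              "value": ["production"]
            }
          }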

        • Hide attachment attributes Show attachment attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • The number of characters used for extraction, to prevent huge fields. Use -1 for no limit.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • properties array[string]

            Array of properties to select to be stored. Can be content, title, name, author, keywords, date, content_type, content_length, language.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true, the binary field will be removed from the document.

          • Field containing the name of the resource to decode. If specified, the processor passes this resource name to the underlying Tika library to enable Resource Name Based Detection.

        • bytes object
          Hide bytes attributes Show bytes attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • circle object
          Hide circle attributes Show circle attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • error_distance number Required

            The difference between the resulting inscribed distance from center to side and the circle’s radius (measured in meters for geo_shape, unit-less for shape).

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • shape_type string Required

            Values are geo_shape or shape.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide community_id attributes Show community_id attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • seed number

            Seed for the community ID hash. Must be between 0 and 65535 (inclusive). The seed can prevent hash collisions between network domains, such as a staging and production network that use the same addressing scheme.

          • If true and any required fields are missing, the processor quietly exits without modifying the document.

        • convert object
          Hide convert attributes Show convert attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • type string Required

            Values are integer, long, double, float, boolean, ip, string, or auto.
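
          A minimal convert configuration might look like this (the field name is illustrative):

          {
            "convert": {
              "field": "response_code",
              "type": "integer"
            }
          }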

        • csv object
          Hide csv attributes Show csv attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • Value used to fill empty fields. Empty fields are skipped if this is not provided. An empty field is one with no value (2 consecutive separators) or empty quotes ("").

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • quote string

            Quote used in the CSV; must be a single-character string.

          • Separator used in the CSV; must be a single-character string.

          • target_fields string | array[string] Required
          • trim boolean

            Trim whitespaces in unquoted fields.
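
          As an illustration (field names are placeholders), a minimal csv configuration might be:

          {
            "csv": {
              "field": "my_field",
              "target_fields": ["field1", "field2"]
            }
          }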

        • date object
          Hide date attributes Show date attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • formats array[string] Required

            An array of the expected date formats. Can be a java time pattern or one of the following formats: ISO8601, UNIX, UNIX_MS, or TAI64N.

          • locale string

            The locale to use when parsing the date, relevant when parsing month names or week days. Supports template snippets.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • timezone string

            The timezone to use when parsing the date. Supports template snippets.

          • The format to use when writing the date to target_field. Must be a valid java time pattern.
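
          For example, a date configuration might look like the following (the field names, format, and timezone are illustrative):

          {
            "date": {
              "field": "initial_date",
              "target_field": "timestamp",
              "formats": ["dd/MM/yyyy HH:mm:ss"],
              "timezone": "Europe/Amsterdam"
            }
          }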

        • Hide date_index_name attributes Show date_index_name attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • date_formats array[string]

            An array of the expected date formats for parsing dates / timestamps in the document being preprocessed. Can be a java time pattern or one of the following formats: ISO8601, UNIX, UNIX_MS, or TAI64N.

          • date_rounding string Required

            How to round the date when formatting the date into the index name. Valid values are: y (year), M (month), w (week), d (day), h (hour), m (minute) and s (second). Supports template snippets.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • The format to be used when printing the parsed date into the index name. A valid java time pattern is expected here. Supports template snippets.

          • A prefix of the index name to be prepended before the printed date. Supports template snippets.

          • locale string

            The locale to use when parsing the date from the document being preprocessed, relevant when parsing month names or week days.

          • timezone string

            The timezone to use when parsing the date and when date math expressions are resolved into concrete index names.

        • dissect object
          Hide dissect attributes Show dissect attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • The character(s) that separate the appended fields.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • pattern string Required

            The pattern to apply to the field.
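
          A minimal dissect configuration might look like this (the field name and pattern are illustrative):

          {
            "dissect": {
              "field": "message",
              "pattern": "%{clientip} %{ident} %{auth}"
            }
          }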

        • Hide dot_expander attributes Show dot_expander attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • override boolean

            Controls the behavior when there is already an existing nested object that conflicts with the expanded field. When false, the processor will merge conflicts by combining the old and the new values into an array. When true, the value from the expanded field will overwrite the existing value.

          • path string

            The field that contains the field to expand. Only required if the field to expand is part of another object field, because the field option can only understand leaf fields.

        • drop object
          Hide drop attributes Show drop attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

        • enrich object
          Hide enrich attributes Show enrich attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • The maximum number of matched documents to include under the configured target field. The target_field will be turned into a json array if max_matches is higher than 1, otherwise target_field will become a json object. In order to avoid documents getting too large, the maximum allowed value is 128.

          • override boolean

            If true, the processor updates fields that already have a non-null value. When set to false, such fields are left untouched.

          • policy_name string Required

            The name of the enrich policy to use.

          • Values are intersects, disjoint, within, or contains.

          • target_field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.
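
          As an illustration (the policy and field names are placeholders), an enrich configuration might be:

          {
            "enrich": {
              "policy_name": "users-policy",
              "field": "email",
              "target_field": "user"
            }
          }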

        • fail object
          Hide fail attributes Show fail attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • message string Required

            The error message thrown by the processor. Supports template snippets.

        • Hide fingerprint attributes Show fingerprint attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • fields string | array[string] Required
          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • salt string

            Salt value for the hash function.

          • method string

            Values are MD5, SHA-1, SHA-256, SHA-512, or MurmurHash3.

          • If true, the processor ignores any missing fields. If all fields are missing, the processor silently exits without modifying the document.
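
          For example, a fingerprint configuration might look like this (field names are placeholders; SHA-256 is one of the listed methods):

          {
            "fingerprint": {
              "fields": ["user.name", "user.id"],
              "method": "SHA-256"
            }
          }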

        • foreach object
          Hide foreach attributes Show foreach attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true, the processor silently exits without changing the document if the field is null or missing.

          • processor object Required
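
          As an illustrative sketch (the field name is a placeholder; _ingest._value refers to the current array element), a foreach configuration might be:

          {
            "foreach": {
              "field": "values",
              "processor": {
                "uppercase": {
                  "field": "_ingest._value"
                }
              }
            }
          }
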
        • Hide ip_location attributes Show ip_location attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • The database filename referring to a database the module ships with (GeoLite2-City.mmdb, GeoLite2-Country.mmdb, or GeoLite2-ASN.mmdb) or a custom database in the ingest-geoip config directory.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • first_only boolean

            If true, only the first found IP location data will be returned, even if the field contains an array.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • properties array[string]

            Controls what properties are added to the target_field based on the IP location lookup.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true (and if ingest.geoip.downloader.eager.download is false), the missing database is downloaded when the pipeline is created. Otherwise, the download is triggered when the pipeline is used as the default_pipeline or final_pipeline in an index.

        • geo_grid object
          Hide geo_grid attributes Show geo_grid attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            The field to interpret as a geo-tile. The field format is determined by the tile_type.

          • tile_type string Required

            Values are geotile, geohex, or geohash.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • Values are geojson or wkt.

        • geoip object
          Hide geoip attributes Show geoip attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • The database filename referring to a database the module ships with (GeoLite2-City.mmdb, GeoLite2-Country.mmdb, or GeoLite2-ASN.mmdb) or a custom database in the ingest-geoip config directory.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • first_only boolean

            If true, only the first found geoip data will be returned, even if the field contains an array.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • properties array[string]

            Controls what properties are added to the target_field based on the geoip lookup.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true (and if ingest.geoip.downloader.eager.download is false), the missing database is downloaded when the pipeline is created. Otherwise, the download is triggered when the pipeline is used as the default_pipeline or final_pipeline in an index.
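
          A minimal geoip configuration might look like this (the field name is illustrative):

          {
            "geoip": {
              "field": "ip"
            }
          }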

        • grok object
          Hide grok attributes Show grok attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • Must be disabled or v1. If v1, the processor uses patterns with Elastic Common Schema (ECS) field names.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • A map of pattern-name and pattern tuples defining custom patterns to be used by the current processor. Patterns matching existing names will override the pre-existing definition.

          • patterns array[string] Required

            An ordered list of grok expressions to match and extract named captures with. Returns on the first expression in the list that matches.

          • When true, _ingest._grok_match_index is inserted into the matched document’s metadata, holding the index of the pattern in patterns that matched.
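
          For example, a grok configuration might be the following (the field name and pattern are illustrative):

          {
            "grok": {
              "field": "message",
              "patterns": ["%{IP:client} %{WORD:method} %{URIPATHPARAM:request}"]
            }
          }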

        • gsub object
          Hide gsub attributes Show gsub attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • pattern string Required

            The pattern to be replaced.

          • replacement string Required

            The string to replace the matching patterns with.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.
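
          As an illustration (the field name is a placeholder), a gsub configuration that replaces dots with dashes might be:

          {
            "gsub": {
              "field": "field1",
              "pattern": "\\.",
              "replacement": "-"
            }
          }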

        • Hide html_strip attributes Show html_strip attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide inference attributes Show inference attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • model_id string Required
          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Maps the document field names to the known field names of the model. This mapping takes precedence over any default mappings provided in the model configuration.

          • If true and any of the input fields defined in input_output are missing, those missing fields are quietly ignored; otherwise a missing field causes a failure. Only applies when using input_output configurations to explicitly list the input fields.

        • join object
          Hide join attributes Show join attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • separator string Required

            The separator character.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • json object
          Hide json attributes Show json attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • Flag that forces the parsed JSON to be added at the top level of the document. target_field must not be set when this option is chosen.

          • Values are replace or merge.

          • When set to true, the JSON parser will not fail if the JSON contains duplicate keys. Instead, the last encountered value for any duplicate key wins.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • kv object
          Hide kv attributes Show kv attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • exclude_keys array[string]

            List of keys to exclude from document.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • field_split string Required

            Regex pattern to use for splitting key-value pairs.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • include_keys array[string]

            List of keys to filter and insert into document. Defaults to including all keys.

          • prefix string

            Prefix to be added to extracted keys.

          • If true, strip brackets (), <>, [] as well as quotes ' and " from extracted values.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • trim_key string

            String of characters to trim from extracted keys.

          • String of characters to trim from extracted values.

          • value_split string Required

            Regex pattern to use for splitting the key from the value within a key-value pair.

        • Hide lowercase attributes Show lowercase attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide network_direction attributes Show network_direction attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • internal_networks array[string]

            List of internal networks. Supports IPv4 and IPv6 addresses and ranges in CIDR notation. Also supports the named ranges listed below. These may be constructed with template snippets. Must specify only one of internal_networks or internal_networks_field.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and any required fields are missing, the processor quietly exits without modifying the document.

        • pipeline object
          Hide pipeline attributes Show pipeline attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • name string Required
          • Whether to ignore missing pipelines instead of failing.

        • redact object
          Hide redact attributes Show redact attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • patterns array[string] Required

            A list of grok expressions to match and redact named captures with

          • prefix string

            Start a redacted section with this token

          • suffix string

            End a redacted section with this token

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • If true and the current license does not support running redact processors, then the processor quietly exits without modifying the document

          • If true then ingest metadata _ingest._redact._is_redacted is set to true if the document has been redacted

        • Hide registered_domain attributes Show registered_domain attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and any required fields are missing, the processor quietly exits without modifying the document.

        • remove object
          Hide remove attributes Show remove attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string | array[string] Required
          • keep string | array[string]
          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

        • rename object
          Hide rename attributes Show rename attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • target_field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • reroute object
          Hide reroute attributes Show reroute attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • A static value for the target. Can’t be set when the dataset or namespace option is set.

        • script object
          Hide script attributes Show script attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • id string
          • params object

            Object containing parameters for the script.

        • set object
          Hide set attributes Show set attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and value is a template snippet that evaluates to null or the empty string, the processor quietly exits without modifying the document.

          • The media type for encoding value. Applies only when value is a template snippet. Must be one of application/json, text/plain, or application/x-www-form-urlencoded.

          • override boolean

            If true, the processor updates fields that already have a non-null value. When set to false, such fields are left untouched.

          • value object

            The value to be set for the field. Supports template snippets. May specify only one of value or copy_from.

        • Hide set_security_user attributes Show set_security_user attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • properties array[string]

            Controls what user related properties are added to the field.

        • sort object
          Hide sort attributes Show sort attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • order string

            Values are asc or desc.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • split object
          Hide split attributes Show split attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • Preserves empty trailing fields, if any.

          • separator string Required

            A regex which matches the separator, for example a comma (,) or \s+.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide terminate attributes Show terminate attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

        • trim object
          Hide trim attributes Show trim attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide uppercase attributes Show uppercase attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide urldecode attributes Show urldecode attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist or is null, the processor quietly exits without modifying the document.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide uri_parts attributes Show uri_parts attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • If true, the processor copies the unparsed URI to <target_field>.original.

          • If true, the processor removes the field after parsing the URI string. If parsing fails, the processor does not remove the field.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Hide user_agent attributes Show user_agent attributes object
          • Description of the processor. Useful for describing the purpose of the processor or its configuration.

          • if object
          • Ignore failures for the processor.

          • on_failure array[object]

            Handle failures for the processor.

          • tag string

            Identifier for the processor. Useful for debugging and metrics.

          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • If true and field does not exist, the processor quietly exits without modifying the document.

          • The name of the file in the config/ingest-user-agent directory containing the regular expressions for parsing the user agent string. Both the directory and the file have to be created before starting Elasticsearch. If not specified, ingest-user-agent will use the regexes.yaml from uap-core it ships with.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • properties array[string]

            Controls what properties are added to target_field.

            Values are name, os, device, original, or version.

          • Extracts device type from the user agent string on a best-effort basis.

      • version number
      • deprecated boolean

        Marks this ingest pipeline as deprecated. When a deprecated ingest pipeline is referenced as the default or final pipeline when creating or updating a non-deprecated index template, Elasticsearch will emit a deprecation warning.

      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • docs array[object] Required
      Hide docs attribute Show docs attribute object
      • doc object
        Hide doc attributes Show doc attributes object
        • _id string Required
        • _index string Required
        • _source object Required

          JSON body for the document.

          Hide _source attribute Show _source attribute object
          • * object Additional properties
        • _version number | string Required

          Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

          Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • executed_pipelines array[string] Required

          A list of the names of the pipelines executed on this document.

        • ignored_fields array[object]

          A list of the fields that would be ignored at the indexing step. For example, a field whose value is larger than the allowed limit would make it through all of the pipelines, but would not be indexed into Elasticsearch.

          Hide ignored_fields attribute Show ignored_fields attribute object
          • * string Additional properties
        • error object
          Hide error attributes Show error attributes object
GET /_ingest/{index}/_simulate
curl \
 --request GET 'http://api.example.com/_ingest/{index}/_simulate' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"docs\": [\n    {\n      \"_id\": 123,\n      \"_index\": \"my-index\",\n      \"_source\": {\n        \"foo\": \"bar\"\n      }\n    },\n    {\n      \"_id\": 456,\n      \"_index\": \"my-index\",\n      \"_source\": {\n        \"foo\": \"rab\"\n      }\n    }\n  ]\n}"'
In this example the index `my-index` has a default pipeline called `my-pipeline` and a final pipeline called `my-final-pipeline`. Since both documents are being ingested into `my-index`, both pipelines are run using the pipeline definitions that are already in the system.
{
  "docs": [
    {
      "_id": 123,
      "_index": "my-index",
      "_source": {
        "foo": "bar"
      }
    },
    {
      "_id": 456,
      "_index": "my-index",
      "_source": {
        "foo": "rab"
      }
    }
  ]
}
In this example the index `my-index` has a default pipeline called `my-pipeline` and a final pipeline called `my-final-pipeline`. But a substitute definition of `my-pipeline` is provided in `pipeline_substitutions`. The substitute `my-pipeline` will be used in place of the `my-pipeline` that is in the system, and then the `my-final-pipeline` that is already defined in the system will run.
{
  "docs": [
    {
      "_index": "my-index",
      "_id": 123,
      "_source": {
        "foo": "bar"
      }
    },
    {
      "_index": "my-index",
      "_id": 456,
      "_source": {
        "foo": "rab"
      }
    }
  ],
  "pipeline_substitutions": {
    "my-pipeline": {
      "processors": [
        {
          "uppercase": {
            "field": "foo"
          }
        }
      ]
    }
  }
}
In this example, imagine that the index `my-index` has a strict mapping with only the `foo` keyword field defined. Say that field mapping came from a component template named `my-mappings_template`. You want to test adding a new field, `bar`. So a substitute definition of `my-mappings_template` is provided in `component_template_substitutions`. The substitute `my-mappings_template` will be used in place of the existing mapping for `my-index` and in place of the `my-mappings_template` that is in the system.
{
  "docs": [
    {
      "_index": "my-index",
      "_id": "123",
      "_source": {
        "foo": "foo"
      }
    },
    {
      "_index": "my-index",
      "_id": "456",
      "_source": {
        "bar": "rab"
      }
    }
  ],
  "component_template_substitutions": {
    "my-mappings_template": {
      "template": {
        "mappings": {
          "dynamic": "strict",
          "properties": {
            "foo": {
              "type": "keyword"
            },
            "bar": {
              "type": "keyword"
            }
          }
        }
      }
    }
  }
}
The pipeline, component template, and index template substitutions replace the existing pipeline details for the duration of this request.
{
  "docs": [
    {
      "_id": "id",
      "_index": "my-index",
      "_source": {
        "foo": "bar"
      }
    },
    {
      "_id": "id",
      "_index": "my-index",
      "_source": {
        "foo": "rab"
      }
    }
  ],
  "pipeline_substitutions": {
    "my-pipeline": {
      "processors": [
        {
          "set": {
            "field": "field3",
            "value": "value3"
          }
        }
      ]
    }
  },
  "component_template_substitutions": {
    "my-component-template": {
      "template": {
        "mappings": {
          "dynamic": true,
          "properties": {
            "field3": {
              "type": "keyword"
            }
          }
        },
        "settings": {
          "index": {
            "default_pipeline": "my-pipeline"
          }
        }
      }
    }
  },
  "index_template_substitutions": {
    "my-index-template": {
      "index_patterns": [
        "my-index-*"
      ],
      "composed_of": [
        "component_template_1",
        "component_template_2"
      ]
    }
  },
  "mapping_addition": {
    "dynamic": "strict",
    "properties": {
      "foo": {
        "type": "keyword"
      }
    }
  }
}
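As an additional sketch (not one of the examples above; the pipeline, index, and field names are hypothetical), a substitute pipeline can combine several of the processors documented in the request body, for instance `rename`, `trim`, and `set`:
{
  "docs": [
    {
      "_index": "my-index",
      "_id": "1",
      "_source": {
        "message": "  hello  ",
        "old_field": "value"
      }
    }
  ],
  "pipeline_substitutions": {
    "my-pipeline": {
      "processors": [
        { "rename": { "field": "old_field", "target_field": "new_field" } },
        { "trim": { "field": "message" } },
        { "set": { "field": "status", "value": "processed" } }
      ]
    }
  }
}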
A successful response when the simulation uses pipeline definitions that are already in the system.
{
  "docs": [
    {
      "doc": null,
      "_id": 123,
      "_index": "my-index",
      "_version": -3,
      "_source": {
        "field1": "value1",
        "field2": "value2",
        "foo": "bar"
      },
      "executed_pipelines": [
        "my-pipeline",
        "my-final-pipeline"
      ]
    },
    {
      "doc": null,
      "_id": 456,
      "_index": "my-index",
      "_version": "-3,",
      "_source": {
        "field1": "value1",
        "field2": "value2",
        "foo": "rab"
      },
      "executed_pipelines": [
        "my-pipeline",
        "my-final-pipeline"
      ]
    }
  ]
}
A successful response when the simulation uses pipeline substitutions.
{
  "docs": [
    {
      "doc": null,
      "_id": 123,
      "_index": "my-index",
      "_version": -3,
      "_source": {
        "field2": "value2",
        "foo": "BAR"
      },
      "executed_pipelines": [
        "my-pipeline",
        "my-final-pipeline"
      ]
    },
    {
      "doc": null,
      "_id": 456,
      "_index": "my-index",
      "_version": -3,
      "_source": {
        "field2": "value2",
        "foo": "RAB"
      },
      "executed_pipelines": [
        "my-pipeline",
        "my-final-pipeline"
      ]
    }
  ]
}
A successful response when the simulation uses component template substitutions.
{
  "docs": [
    {
      "doc": {
        "_id": "123",
        "_index": "my-index",
        "_version": -3,
        "_source": {
          "foo": "foo"
        },
        "executed_pipelines": []
      }
    },
    {
      "doc": {
        "_id": "456",
        "_index": "my-index",
        "_version": -3,
        "_source": {
          "bar": "rab"
        },
      "executed_pipelines": []
      }
    }
  ]
}












































































































Create a datafeed Added in 5.4.0

PUT /_ml/datafeeds/{datafeed_id}

Datafeeds retrieve data from Elasticsearch for analysis by an anomaly detection job. You can associate only one datafeed with each anomaly detection job. The datafeed contains a query that runs at a defined interval (`frequency`). If you are concerned about delayed data, you can add a delay (`query_delay`) at each interval. By default, the datafeed uses the following query: `{"match_all": {"boost": 1}}`.

When Elasticsearch security features are enabled, your datafeed remembers which roles the user who created it had at the time of creation and runs the query using those same roles. If you provide secondary authorization headers, those credentials are used instead. You must use Kibana, this API, or the create anomaly detection jobs API to create a datafeed. Do not add a datafeed directly to the .ml-config index. Do not give users write privileges on the .ml-config index.

Path parameters

  • datafeed_id string Required

    A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.

Query parameters

  • allow_no_indices boolean

    If true, wildcard indices expressions that resolve into no concrete indices are ignored. This includes the _all string or when no indices are specified.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • ignore_throttled boolean Deprecated

    If true, concrete, expanded, or aliased indices are ignored when frozen.

  • ignore_unavailable boolean

    If true, unavailable indices (missing or closed) are ignored.

application/json

Body Required

  • If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data.

  • Hide chunking_config attributes Show chunking_config attributes object
    • mode string Required

      Values are auto, manual, or off.

    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • Hide delayed_data_check_config attributes Show delayed_data_check_config attributes object
    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • enabled boolean Required

      Specifies whether the datafeed periodically checks for delayed data.

  • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • indices string | array[string]
  • Hide indices_options attributes Show indices_options attributes object
    • If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

    • expand_wildcards string | array[string]
    • If true, missing or closed indices are not included in the response.

    • If true, concrete, expanded or aliased indices are ignored when frozen.

  • job_id string
  • If a real-time datafeed has never seen any data (including during any initial training period), it automatically stops and closes the associated job after this many real-time searches return no documents. In other words, it stops after frequency times max_empty_searches of real-time operation. If not set, a datafeed with no end time that sees no data remains started until it is explicitly stopped. By default, it is not set.

  • query object

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    External documentation
  • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • Hide runtime_mappings attribute Show runtime_mappings attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • fields object

        For type composite

        Hide fields attribute Show fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        Hide fetch_fields attributes Show fetch_fields attributes object
        • field string Required

          Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

      • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

      • script object
        Hide script attributes Show script attributes object
        • source string | object

          One of:
        • id string
        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang string

          Any of:

          Values are painless, expression, mustache, or java.

        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • type string Required

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

  • Specifies scripts that evaluate custom expressions and return script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields.

    Hide script_fields attribute Show script_fields attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • script object Required
        Hide script attributes Show script attributes object
        • source string | object

          One of:
        • id string
        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang string

          Any of:

          Values are painless, expression, mustache, or java.

        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
  • The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window, which is 10,000 by default.

  • headers object

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • Hide authorization attributes Show authorization attributes object
      • api_key object
        Hide api_key attributes Show api_key attributes object
        • id string Required

          The identifier for the API key.

        • name string Required

          The name of the API key.

      • roles array[string]

        If a user ID was used for the most recent update to the datafeed, its roles at the time of the update are listed in the response.

      • If a service account was used for the most recent update to the datafeed, the account name is listed in the response.

    • chunking_config object Required
      Hide chunking_config attributes Show chunking_config attributes object
      • mode string Required

        Values are auto, manual, or off.

      • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • Hide delayed_data_check_config attributes Show delayed_data_check_config attributes object
      • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • enabled boolean Required

        Specifies whether the datafeed periodically checks for delayed data.

    • datafeed_id string Required
    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • indices array[string] Required
    • job_id string Required
    • Hide indices_options attributes Show indices_options attributes object
      • If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

      • expand_wildcards string | array[string]
      • If true, missing or closed indices are not included in the response.

      • If true, concrete, expanded or aliased indices are ignored when frozen.

    • query object Required

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • query_delay string Required

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • Hide runtime_mappings attribute Show runtime_mappings attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
            Hide * attribute Show * attribute object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • fetch_fields array[object]

          For type lookup

          Hide fetch_fields attributes Show fetch_fields attributes object
          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • format string
        • format string

          A custom format for date type runtime fields.

        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • script object
          Hide script attributes Show script attributes object
          • source string | object

            One of:
          • id string
          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            Hide params attribute Show params attribute object
            • * object Additional properties
          • lang string

            Any of:

            Values are painless, expression, mustache, or java.

          • options object
            Hide options attribute Show options attribute object
            • * string Additional properties
        • type string Required

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • Hide script_fields attribute Show script_fields attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • script object Required
          Hide script attributes Show script attributes object
          • source string | object

            One of:
          • id string
          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            Hide params attribute Show params attribute object
            • * object Additional properties
          • lang string

            Any of:

            Values are painless, expression, mustache, or java.

          • options object
            Hide options attribute Show options attribute object
            • * string Additional properties
    • scroll_size number Required
PUT /_ml/datafeeds/{datafeed_id}
curl \
 --request PUT 'http://api.example.com/_ml/datafeeds/{datafeed_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"aggregations":{},"chunking_config":{"mode":"auto","time_span":"string"},"delayed_data_check_config":{"check_window":"string","enabled":true},"frequency":"string","indices":"string","indices_options":{"allow_no_indices":true,"expand_wildcards":"string","ignore_unavailable":true,"ignore_throttled":true},"job_id":"string","max_empty_searches":42.0,"query":{},"query_delay":"string","runtime_mappings":{"additionalProperty1":{"fields":{"additionalProperty1":{"type":"boolean"},"additionalProperty2":{"type":"boolean"}},"fetch_fields":[{"field":"string","format":"string"}],"format":"string","input_field":"string","target_field":"string","target_index":"string","script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"type":"boolean"},"additionalProperty2":{"fields":{"additionalProperty1":{"type":"boolean"},"additionalProperty2":{"type":"boolean"}},"fetch_fields":[{"field":"string","format":"string"}],"format":"string","input_field":"string","target_field":"string","target_index":"string","script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"type":"boolean"}},"script_fields":{"additionalProperty1":{"script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"ignore_failure":true},"additionalProperty2":{"script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"ignore_failure":true}},"scroll_size":42.0,"headers":{}}'




































Get anomaly detection jobs configuration info Added in 5.5.0

GET /_ml/anomaly_detectors/{job_id}

You can get information for multiple anomaly detection jobs in a single API request by using a group name, a comma-separated list of jobs, or a wildcard expression. You can get information for all anomaly detection jobs by using _all, by specifying * as the <job_id>, or by omitting the <job_id>.

Path parameters

  • job_id string | array[string] Required

    Identifier for the anomaly detection job. It can be a job identifier, a group name, or a wildcard expression. If you do not specify one of these options, the API returns information for all anomaly detection jobs.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    1. Contains wildcard expressions and there are no jobs that match.
    2. Contains the _all string or no identifiers and there are no matches.
    3. Contains wildcard expressions and there are only partial matches.

    The default value is true, which returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.

  • exclude_generated boolean

    Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • jobs array[object] Required
      Hide jobs attributes Show jobs attributes object
      • allow_lazy_open boolean Required

        Advanced configuration option. Specifies whether this job can open when there is insufficient machine learning node capacity for it to be immediately assigned to a node.

      • analysis_config object Required
        Hide analysis_config attributes Show analysis_config attributes object
        • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • categorization_analyzer string | object

          One of:
        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • If categorization_field_name is specified, you can also define optional filters. This property expects an array of regular expressions. The expressions are used to filter out matching sequences from the categorization field values. You can use this functionality to fine tune the categorization by excluding sequences from consideration when categories are defined. For example, you can exclude SQL statements that appear in your log files. This property cannot be used at the same time as categorization_analyzer. If you only want to define simple regular expression filters that are applied prior to tokenization, setting this property is the easiest method. If you also want to customize the tokenizer or post-tokenization filtering, use the categorization_analyzer property instead and include the filters as pattern_replace character filters. The effect is exactly the same.

        • detectors array[object] Required

          Detector configuration objects specify which data fields a job analyzes. They also specify which analytical functions are used. You can specify multiple detectors for a job. If the detectors array does not contain at least one detector, no analysis can occur and an error is returned.

          Hide detectors attributes Show detectors attributes object
          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • custom_rules array[object]

            Custom rules enable you to customize the way detectors operate. For example, a rule may dictate conditions under which results should be skipped. Kibana refers to custom rules as job rules.

          • A description of the detector.

          • A unique identifier for the detector. This identifier is based on the order of the detectors in the analysis_config, starting at zero. If you specify a value for this property, it is ignored.

          • Values are all, none, by, or over.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • function string

            The analysis function that is used. For example, count, rare, mean, min, max, or sum.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • use_null boolean

            Defines whether a new series is used as the null series when there is no value for the by or partition fields.

        • influencers array[string]

          A comma separated list of influencer field names. Typically these can be the by, over, or partition fields that are used in the detector configuration. You might also want to use a field name that is not specifically named in a detector, but is available as part of the input data. When you use multiple detectors, the use of influencers is recommended as it aggregates results for each influencer entity.

        • latency string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • This functionality is reserved for internal use. It is not supported for use in customer environments and is not subject to the support SLA of official GA features. If set to true, the analysis will automatically find correlations between metrics for a given by field value and report anomalies when those correlations cease to hold. For example, suppose CPU and memory usage on host A is usually highly correlated with the same metrics on host B. Perhaps this correlation occurs because they are running a load-balanced application. If you enable this property, anomalies will be reported when, for example, CPU usage on host A is high and the value of CPU usage on host B is low. That is to say, you’ll see an anomaly when the CPU of host A is unusual given the CPU of host B. To use the multivariate_by_fields property, you must also specify by_field_name in your detector.

        • Hide per_partition_categorization attributes Show per_partition_categorization attributes object
          • enabled boolean

            To enable this setting, you must also set the partition_field_name property to the same value in every detector that uses the keyword mlcategory. Otherwise, job creation fails.

          • This setting can be set to true only if per-partition categorization is enabled. If true, both categorization and subsequent anomaly detection stops for partitions where the categorization status changes to warn. This setting makes it viable to have a job where it is expected that categorization works well for some partitions but not others; you do not pay the cost of bad categorization forever in the partitions where it works badly.

        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

      • Hide analysis_limits attributes Show analysis_limits attributes object
      • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • blocked object
        Hide blocked attributes Show blocked attributes object
      • create_time string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      • Custom metadata about the job

      • Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies a period of time (in days) after which only the first snapshot per day is retained. This period is relative to the timestamp of the most recent snapshot for this job. Valid values range from 0 to model_snapshot_retention_days.

      • data_description object Required
        Hide data_description attributes Show data_description attributes object
        • format string

          Only JSON format is supported at this time.

        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • The time format, which can be epoch, epoch_ms, or a custom pattern. The value epoch refers to UNIX or Epoch time (the number of seconds since 1 Jan 1970). The value epoch_ms indicates that time is measured in milliseconds since the epoch. The epoch and epoch_ms time formats accept either integer or real values. Custom patterns must conform to the Java DateTimeFormatter class. When you use date-time formatting patterns, it is recommended that you provide the full date, time and time zone. For example: yyyy-MM-dd'T'HH:mm:ssX. If the pattern that you specify is not sufficient to produce a complete timestamp, job creation fails.

      • Hide datafeed_config attributes Show datafeed_config attributes object
        • Hide authorization attributes Show authorization attributes object
          • api_key object
            Hide api_key attributes Show api_key attributes object
            • id string Required

              The identifier for the API key.

            • name string Required

              The name of the API key.

          • roles array[string]

            If a user ID was used for the most recent update to the datafeed, its roles at the time of the update are listed in the response.

          • If a service account was used for the most recent update to the datafeed, the account name is listed in the response.

        • Hide chunking_config attributes Show chunking_config attributes object
          • mode string Required

            Values are auto, manual, or off.

          • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • datafeed_id string Required
        • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • indices array[string] Required
        • indexes array[string]
        • job_id string Required
        • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • Hide script_fields attribute Show script_fields attribute object
        • Hide delayed_data_check_config attributes Show delayed_data_check_config attributes object
          • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • enabled boolean Required

            Specifies whether the datafeed periodically checks for delayed data.

        • Hide runtime_mappings attribute Show runtime_mappings attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

            • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

            • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

            • script object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • Hide indices_options attributes Show indices_options attributes object
          • If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

          • expand_wildcards string | array[string]
          • If true, missing or closed indices are not included in the response.

          • If true, concrete, expanded or aliased indices are ignored when frozen.

        • query object Required

          The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. By default, this property has the following value: {"match_all": {"boost": 1}}.

          Query DSL
      • deleting boolean

        Indicates that the process of deleting the job is in progress but not yet completed. It is only reported when true.

      • A description of the job.

      • finished_time string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      • groups array[string]

        A list of job groups. A job can belong to no groups or many.

      • job_id string Required
      • job_type string

        Reserved for future use, currently set to anomaly_detector.

      • Hide model_plot_config attributes Show model_plot_config attributes object
        • If true, enables calculation and storage of the model change annotations for each entity that is being analyzed.

        • enabled boolean

          If true, enables calculation and storage of the model bounds for each entity that is being analyzed.

        • terms string

          Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

      • Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies the maximum period of time (in days) that snapshots are retained. This period is relative to the timestamp of the most recent snapshot for this job. By default, snapshots ten days older than the newest snapshot are deleted.

      • Advanced configuration option. The period over which adjustments to the score are applied, as new data is seen. The default value is the longer of 30 days or 100 bucket_spans.

      • results_index_name string Required
      • Advanced configuration option. The period of time (in days) that results are retained. Age is calculated relative to the timestamp of the latest bucket result. If this property has a non-null value, once per day at 00:30 (server time), results that are the specified number of days older than the latest bucket result are deleted from Elasticsearch. The default value is null, which means all results are retained. Annotations generated by the system also count as results for retention purposes; they are deleted after the same number of days as results. Annotations added by users are retained forever.

GET /_ml/anomaly_detectors/{job_id}
curl \
 --request GET 'http://api.example.com/_ml/anomaly_detectors/{job_id}' \
 --header "Authorization: $API_KEY"








Get model snapshots info Added in 5.4.0

GET /_ml/anomaly_detectors/{job_id}/model_snapshots/{snapshot_id}

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job.

  • snapshot_id string Required

    A numerical character string that uniquely identifies the model snapshot. You can get information for multiple snapshots by using a comma-separated list or a wildcard expression. You can get all snapshots by using _all, by specifying * as the snapshot ID, or by omitting the snapshot ID.

Query parameters

  • desc boolean

    If true, the results are sorted in descending order.

  • end string | number

    Returns snapshots with timestamps earlier than this time.

  • from number

    Skips the specified number of snapshots.

  • size number

    Specifies the maximum number of snapshots to obtain.

  • sort string

    Specifies the sort field for the requested snapshots. By default, the snapshots are sorted by their timestamp.

  • start string | number

    Returns snapshots with timestamps after this time.

application/json

Body

  • desc boolean

    Refer to the description for the desc query parameter.

  • end string | number

    A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

  • page object
    Hide page attributes Show page attributes object
    • from number

      Skips the specified number of items.

    • size number

      Specifies the maximum number of items to obtain.

  • sort string

    Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

  • start string | number

    A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

Responses

GET /_ml/anomaly_detectors/{job_id}/model_snapshots/{snapshot_id}
curl \
 --request GET 'http://api.example.com/_ml/anomaly_detectors/{job_id}/model_snapshots/{snapshot_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"desc":true,"":"string","page":{"from":42.0,"size":42.0},"sort":"string"}'
































































Get datafeed stats Added in 5.5.0

GET /_ml/datafeeds/{datafeed_id}/_stats

You can get statistics for multiple datafeeds in a single API request by using a comma-separated list of datafeeds or a wildcard expression. You can get statistics for all datafeeds by using _all, by specifying * as the <feed_id>, or by omitting the <feed_id>. If the datafeed is stopped, the only information you receive is the datafeed_id and the state. This API returns a maximum of 10,000 datafeeds.

Path parameters

  • datafeed_id string | array[string] Required

    Identifier for the datafeed. It can be a datafeed identifier or a wildcard expression. If you do not specify one of these options, the API returns information about all datafeeds.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    1. Contains wildcard expressions and there are no datafeeds that match.
    2. Contains the _all string or no identifiers and there are no matches.
    3. Contains wildcard expressions and there are only partial matches.

    The default value is true, which returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
GET /_ml/datafeeds/{datafeed_id}/_stats
curl \
 --request GET 'http://api.example.com/_ml/datafeeds/{datafeed_id}/_stats' \
 --header "Authorization: $API_KEY"




































































Preview a datafeed Added in 5.4.0

GET /_ml/datafeeds/{datafeed_id}/_preview

This API returns the first "page" of search results from a datafeed. You can preview an existing datafeed or provide configuration details for a datafeed and anomaly detection job in the API. The preview shows the structure of the data that will be passed to the anomaly detection engine. IMPORTANT: When Elasticsearch security features are enabled, the preview uses the credentials of the user that called the API. However, when the datafeed starts it uses the roles of the last user that created or updated the datafeed. To get a preview that accurately reflects the behavior of the datafeed, use the appropriate credentials. You can also use secondary authorization headers to supply the credentials.

Path parameters

  • datafeed_id string Required

    A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. NOTE: If you use this path parameter, you cannot provide datafeed or anomaly detection job configuration details in the request body.

Query parameters

  • start string | number

    The start time from where the datafeed preview should begin

  • end string | number

    The end time when the datafeed preview should stop

application/json

Body

  • Hide datafeed_config attributes Show datafeed_config attributes object
    • If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data.

    • Hide chunking_config attributes Show chunking_config attributes object
      • mode string Required

        Values are auto, manual, or off.

      • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • Hide delayed_data_check_config attributes Show delayed_data_check_config attributes object
      • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • enabled boolean Required

        Specifies whether the datafeed periodically checks for delayed data.

    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • indices string | array[string]
    • Hide indices_options attributes Show indices_options attributes object
      • If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

      • expand_wildcards string | array[string]
      • If true, missing or closed indices are not included in the response.

      • If true, concrete, expanded or aliased indices are ignored when frozen.

    • job_id string
    • If a real-time datafeed has never seen any data (including during any initial training period), it will automatically stop itself and close its associated job after this many real-time searches that return no documents. In other words, it will stop after frequency times max_empty_searches of real-time operation. If not set, a datafeed with no end time that sees no data will remain started until it is explicitly stopped.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • Hide runtime_mappings attribute Show runtime_mappings attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
            Hide * attribute Show * attribute object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • fetch_fields array[object]

          For type lookup

          Hide fetch_fields attributes Show fetch_fields attributes object
          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • format string
        • format string

          A custom format for date type runtime fields.

        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • script object
          Hide script attributes Show script attributes object
          • source string | object

            One of:
          • id string
          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            Hide params attribute Show params attribute object
            • * object Additional properties
          • lang string

            Any of:

            Values are painless, expression, mustache, or java.

          • options object
            Hide options attribute Show options attribute object
            • * string Additional properties
        • type string Required

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • Specifies scripts that evaluate custom expressions and return script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields. A short sketch of the script_fields structure follows this parameter list.

      Hide script_fields attribute Show script_fields attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • script object Required
          Hide script attributes Show script attributes object
          • source string | object

            One of:
          • id string
          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            Hide params attribute Show params attribute object
            • * object Additional properties
          • lang string

            Any of:

            Values are painless, expression, mustache, or java.

          • options object
            Hide options attribute Show options attribute object
            • * string Additional properties
    • The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window, which is 10,000 by default.

  • Hide job_config attributes Show job_config attributes object
    • Advanced configuration option. Specifies whether this job can open when there is insufficient machine learning node capacity for it to be immediately assigned to a node.

    • analysis_config object Required
      Hide analysis_config attributes Show analysis_config attributes object
      • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • categorization_analyzer string | object

        One of:
      • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • If categorization_field_name is specified, you can also define optional filters. This property expects an array of regular expressions. The expressions are used to filter out matching sequences from the categorization field values. You can use this functionality to fine tune the categorization by excluding sequences from consideration when categories are defined. For example, you can exclude SQL statements that appear in your log files. This property cannot be used at the same time as categorization_analyzer. If you only want to define simple regular expression filters that are applied prior to tokenization, setting this property is the easiest method. If you also want to customize the tokenizer or post-tokenization filtering, use the categorization_analyzer property instead and include the filters as pattern_replace character filters. The effect is exactly the same.

      • detectors array[object] Required

        Detector configuration objects specify which data fields a job analyzes. They also specify which analytical functions are used. You can specify multiple detectors for a job. If the detectors array does not contain at least one detector, no analysis can occur and an error is returned.

        Hide detectors attributes Show detectors attributes object
        • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • custom_rules array[object]

          Custom rules enable you to customize the way detectors operate. For example, a rule may dictate conditions under which results should be skipped. Kibana refers to custom rules as job rules. A short sketch of a custom rule follows this parameter list.

          Hide custom_rules attributes Show custom_rules attributes object
          • actions array[string]

            The set of actions to be triggered when the rule applies. If more than one action is specified the effects of all actions are combined.

            Supported values include:

            • skip_result: The result will not be created. Unless you also specify skip_model_update, the model will be updated as usual with the corresponding series value.
            • skip_model_update: The value for that series will not be used to update the model. Unless you also specify skip_result, the results will be created as usual. This action is suitable when certain values are expected to be consistently anomalous and they affect the model in a way that negatively impacts the rest of the results.

            Values are skip_result or skip_model_update.

          • conditions array[object]

            An array of numeric conditions when the rule applies. A rule must either have a non-empty scope or at least one condition. Multiple conditions are combined together with a logical AND.

          • scope object

            A scope of series where the rule applies. A rule must either have a non-empty scope or at least one condition. By default, the scope includes all series. Scoping is allowed for any of the fields that are also specified in by_field_name, over_field_name, or partition_field_name.

        • A description of the detector.

        • A unique identifier for the detector. This identifier is based on the order of the detectors in the analysis_config, starting at zero. If you specify a value for this property, it is ignored.

        • Values are all, none, by, or over.

        • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • function string

          The analysis function that is used. For example, count, rare, mean, min, max, or sum.

        • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • use_null boolean

          Defines whether a new series is used as the null series when there is no value for the by or partition fields.

      • influencers array[string]

        A comma separated list of influencer field names. Typically these can be the by, over, or partition fields that are used in the detector configuration. You might also want to use a field name that is not specifically named in a detector, but is available as part of the input data. When you use multiple detectors, the use of influencers is recommended as it aggregates results for each influencer entity.

      • latency string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • This functionality is reserved for internal use. It is not supported for use in customer environments and is not subject to the support SLA of official GA features. If set to true, the analysis will automatically find correlations between metrics for a given by field value and report anomalies when those correlations cease to hold. For example, suppose CPU and memory usage on host A is usually highly correlated with the same metrics on host B. Perhaps this correlation occurs because they are running a load-balanced application. If you enable this property, anomalies will be reported when, for example, CPU usage on host A is high and the value of CPU usage on host B is low. That is to say, you’ll see an anomaly when the CPU of host A is unusual given the CPU of host B. To use the multivariate_by_fields property, you must also specify by_field_name in your detector.

      • Hide per_partition_categorization attributes Show per_partition_categorization attributes object
        • enabled boolean

          To enable this setting, you must also set the partition_field_name property to the same value in every detector that uses the keyword mlcategory. Otherwise, job creation fails.

        • This setting can be set to true only if per-partition categorization is enabled. If true, both categorization and subsequent anomaly detection stops for partitions where the categorization status changes to warn. This setting makes it viable to have a job where it is expected that categorization works well for some partitions but not others; you do not pay the cost of bad categorization forever in the partitions where it works badly.

      • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • Hide analysis_limits attributes Show analysis_limits attributes object
    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • Custom metadata about the job

    • Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies a period of time (in days) after which only the first snapshot per day is retained. This period is relative to the timestamp of the most recent snapshot for this job.

    • data_description object Required
      Hide data_description attributes Show data_description attributes object
      • format string

        Only JSON format is supported at this time.

      • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • The time format, which can be epoch, epoch_ms, or a custom pattern. The value epoch refers to UNIX or Epoch time (the number of seconds since 1 Jan 1970). The value epoch_ms indicates that time is measured in milliseconds since the epoch. The epoch and epoch_ms time formats accept either integer or real values. Custom patterns must conform to the Java DateTimeFormatter class. When you use date-time formatting patterns, it is recommended that you provide the full date, time and time zone. For example: yyyy-MM-dd'T'HH:mm:ssX. If the pattern that you specify is not sufficient to produce a complete timestamp, job creation fails.

    • Hide datafeed_config attributes Show datafeed_config attributes object
      • If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data.

      • Hide chunking_config attributes Show chunking_config attributes object
        • mode string Required

          Values are auto, manual, or off.

        • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • Hide delayed_data_check_config attributes Show delayed_data_check_config attributes object
        • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • enabled boolean Required

          Specifies whether the datafeed periodically checks for delayed data.

      • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • indices string | array[string]
      • Hide indices_options attributes Show indices_options attributes object
        • If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

        • expand_wildcards string | array[string]
        • If true, missing or closed indices are not included in the response.

        • If true, concrete, expanded or aliased indices are ignored when frozen.

      • job_id string
      • If a real-time datafeed has never seen any data (including during any initial training period) then it will automatically stop itself and close its associated job after this many real-time searches that return no documents. In other words, it will stop after frequency times max_empty_searches of real-time operation. If not set then a datafeed with no end time that sees no data will remain started until it is explicitly stopped.

      • query object

        An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

        External documentation
      • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • Hide runtime_mappings attribute Show runtime_mappings attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • fields object

            For type composite

            Hide fields attribute Show fields attribute object
            • * object Additional properties
              Hide * attribute Show * attribute object
              • type string Required

                Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

          • fetch_fields array[object]

            For type lookup

            Hide fetch_fields attributes Show fetch_fields attributes object
            • field string Required

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • format string
          • format string

            A custom format for date type runtime fields.

          • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • script object
            Hide script attributes Show script attributes object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • Specifies scripts that evaluate custom expressions and return script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields.

        Hide script_fields attribute Show script_fields attribute object
      • The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window, which is 10,000 by default.

    • A description of the job.

    • groups array[string]

      A list of job groups. A job can belong to no groups or many.

    • job_id string
    • job_type string

      Reserved for future use, currently set to anomaly_detector.

    • Hide model_plot_config attributes Show model_plot_config attributes object
      • If true, enables calculation and storage of the model change annotations for each entity that is being analyzed.

      • enabled boolean

        If true, enables calculation and storage of the model bounds for each entity that is being analyzed.

      • terms string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies the maximum period of time (in days) that snapshots are retained. This period is relative to the timestamp of the most recent snapshot for this job. The default value is 10, which means snapshots ten days older than the newest snapshot are deleted.

    • Advanced configuration option. The period over which adjustments to the score are applied, as new data is seen. The default value is the longer of 30 days or 100 bucket_spans.

    • Advanced configuration option. The period of time (in days) that results are retained. Age is calculated relative to the timestamp of the latest bucket result. If this property has a non-null value, once per day at 00:30 (server time), results that are the specified number of days older than the latest bucket result are deleted from Elasticsearch. The default value is null, which means all results are retained. Annotations generated by the system also count as results for retention purposes; they are deleted after the same number of days as results. Annotations added by users are retained forever.
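As an illustration of the custom_rules structure described above (a short sketch, not taken from a real job): this rule skips results whenever the actual value is below 10, but only for partition values matched by a filter. The partition field airline and the filter ID safe_airlines are hypothetical.

"custom_rules": [
  {
    "actions": [ "skip_result" ],
    "conditions": [
      { "applies_to": "actual", "operator": "lt", "value": 10 }
    ],
    "scope": {
      "airline": { "filter_id": "safe_airlines", "filter_type": "include" }
    }
  }
]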
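The runtime_mappings and script_fields objects in datafeed_config use the same structure as in a regular search request. A minimal sketch, assuming hypothetical source fields host.name, bytes_in, and bytes_out:

"runtime_mappings": {
  "host_lowercase": {
    "type": "keyword",
    "script": {
      "source": "emit(doc['host.name'].value.toLowerCase())"
    }
  }
},
"script_fields": {
  "total_bytes": {
    "script": {
      "source": "doc['bytes_in'].value + doc['bytes_out'].value"
    }
  }
}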

Responses

GET /_ml/datafeeds/{datafeed_id}/_preview
curl \
 --request GET 'http://api.example.com/_ml/datafeeds/{datafeed_id}/_preview' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"datafeed_config":{"aggregations":{},"chunking_config":{"mode":"auto","time_span":"string"},"datafeed_id":"string","delayed_data_check_config":{"check_window":"string","enabled":true},"frequency":"string","indices":"string","indices_options":{"allow_no_indices":true,"expand_wildcards":"string","ignore_unavailable":true,"ignore_throttled":true},"job_id":"string","max_empty_searches":42.0,"query":{},"query_delay":"string","runtime_mappings":{"additionalProperty1":{"fields":{"additionalProperty1":{"type":"boolean"},"additionalProperty2":{"type":"boolean"}},"fetch_fields":[{"field":"string","format":"string"}],"format":"string","input_field":"string","target_field":"string","target_index":"string","script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"type":"boolean"},"additionalProperty2":{"fields":{"additionalProperty1":{"type":"boolean"},"additionalProperty2":{"type":"boolean"}},"fetch_fields":[{"field":"string","format":"string"}],"format":"string","input_field":"string","target_field":"string","target_index":"string","script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"type":"boolean"}},"script_fields":{"additionalProperty1":{"script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"ignore_failure":true},"additionalProperty2":{"script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"ignore_failure":true}},"scroll_size":42.0},"job_config":{"allow_lazy_open":true,"analysis_config":{"bucket_span":"string","":"string","categorization_field_name":"string","categorization_filters":["string"],"detectors":[{"by_field_name":"string","custom_rules":[{"actions":["skip_result"],"conditions":[{}],"scope":{}}],"detector_description":"string","detector_index":42.0,"exclude_frequent":"all","field_name":"string","function":"string","over_field_name":"string","partition_field_name":"string","use_null":true}],"influencers":["string"],"latency":"string","model_prune_window":"string","multivariate_by_fields":true,"per_partition_categorization":{"enabled":true,"stop_on_warn":true},"summary_count_field_name":"string"},"analysis_limits":{"categorization_examples_limit":42.0,"":42.0},"background_persist_interval":"string","custom_settings":{},"daily_model_snapshot_retention_after_days":42.0,"data_description":{"format":"string","time_field":"string","time_format":"string","field_delimiter":"string"},"datafeed_config":{"aggregations":{},"chunking_config":{"mode":"auto","time_span":"string"},"datafeed_id":"string","delayed_data_check_config":{"check_window":"string","enabled":true},"frequency":"string","indices":"string","indices_options":{"allow_no_indices":true,"expand_wildcards":"string","ignore_unavailable":true,"ignore_throttled":true},"job_id":"string","max_empty_searches":42.0,"query":{},"query_delay":"string","runtime_mappings":{"additionalProperty1":{"fields":{"additionalProperty1":{"type":"boolean"},"additionalProperty2":{"type":"boolean"}},"fetch_fields":[{"field":"string","format":"string"}],"format":"string","input_field":"string","target_field":"string","target_index":"string","script":{"":"painless","id":"string","params":{
"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"type":"boolean"},"additionalProperty2":{"fields":{"additionalProperty1":{"type":"boolean"},"additionalProperty2":{"type":"boolean"}},"fetch_fields":[{"field":"string","format":"string"}],"format":"string","input_field":"string","target_field":"string","target_index":"string","script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"type":"boolean"}},"script_fields":{"additionalProperty1":{"script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"ignore_failure":true},"additionalProperty2":{"script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"ignore_failure":true}},"scroll_size":42.0},"description":"string","groups":["string"],"job_id":"string","job_type":"string","model_plot_config":{"annotations_enabled":true,"enabled":true,"terms":"string"},"model_snapshot_retention_days":42.0,"renormalization_window_days":42.0,"results_index_name":"string","results_retention_days":42.0}}'
































Update a filter Added in 6.4.0

POST /_ml/filters/{filter_id}/_update

Updates the description of a filter, adds items, or removes items from the list.

Path parameters

  • filter_id string Required

    A string that uniquely identifies a filter.

application/json

Body Required

Responses

POST /_ml/filters/{filter_id}/_update
curl \
 --request POST 'http://api.example.com/_ml/filters/{filter_id}/_update' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"add_items":["string"],"description":"string","remove_items":["string"]}'
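A more concrete sketch, assuming a filter with the ID safe_domains already exists; it updates the description, adds a wildcard item, and removes another:

# assumes an existing filter named safe_domains
curl \
 --request POST 'http://api.example.com/_ml/filters/safe_domains/_update' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"description":"Updated list of domains","add_items":["*.myorg.com"],"remove_items":["wikipedia.org"]}'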













Get data frame analytics job configuration info Added in 7.3.0

GET /_ml/data_frame/analytics/{id}

You can get information for multiple data frame analytics jobs in a single API request by using a comma-separated list of data frame analytics jobs or a wildcard expression.

Path parameters

  • id string Required

    Identifier for the data frame analytics job. If you do not specify this option, the API returns information for the first hundred data frame analytics jobs.

Query parameters

  • Specifies what to do when the request:

    1. Contains wildcard expressions and there are no data frame analytics jobs that match.
    2. Contains the _all string or no identifiers and there are no matches.
    3. Contains wildcard expressions and there are only partial matches.

    The default value returns an empty data_frame_analytics array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.

  • from number

    Skips the specified number of data frame analytics jobs.

  • size number

    Specifies the maximum number of data frame analytics jobs to obtain.

  • Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • data_frame_analytics array[object] Required

      An array of data frame analytics job resources, which are sorted by the id value in ascending order.

      Hide data_frame_analytics attributes Show data_frame_analytics attributes object
      • analysis object Required
        Hide analysis attributes Show analysis attributes object
        • Hide classification attributes Show classification attributes object
          • alpha number

            Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This parameter affects loss calculations by acting as a multiplier of the tree depth. Higher alpha values result in shallower trees and faster training times. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to zero.

          • dependent_variable string Required

            Defines which field of the document is to be predicted. It must match one of the fields in the index being used to train. If this field is missing from a document, then that document will not be used for training, but a prediction with the trained model will be generated for it. It is also known as continuous target variable. For classification analysis, the data type of the field must be numeric (integer, short, long, byte), categorical (ip or keyword), or boolean. There must be no more than 30 different values in this field. For regression analysis, the data type of the field must be numeric.

          • Advanced configuration option. Controls the fraction of data that is used to compute the derivatives of the loss function for tree training. A small value results in the use of a small fraction of the data. If this value is set to be less than 1, accuracy typically improves. However, too small a value may result in poor convergence for the ensemble and so require more trees. By default, this value is calculated during hyperparameter optimization. It must be greater than zero and less than or equal to 1.

          • Advanced configuration option. Specifies whether the training process should finish if it is not finding any better performing models. If disabled, the training process can take significantly longer and the chance of finding a better performing model is unremarkable.

          • eta number

            Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, larger forests cause slower training. By default, this value is calculated during hyperparameter optimization. It must be a value between 0.001 and 1.

          • Advanced configuration option. Specifies the rate at which eta increases for each new tree that is added to the forest. For example, a rate of 1.05 increases eta by 5% for each extra tree. By default, this value is calculated during hyperparameter optimization. It must be between 0.5 and 2.

          • Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.

          • feature_processors array[object]

            Advanced configuration option. A collection of feature preprocessors that modify one or more included fields. The analysis uses the resulting one or more features instead of the original document field. However, these features are ephemeral; they are not stored in the destination index. Multiple feature_processors entries can refer to the same document fields. Automatic categorical feature encoding still occurs for the fields that are unprocessed by a custom processor or that have categorical values. Use this property only if you want to override the automatic feature encoding of the specified fields.

          • gamma number

            Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. A high gamma value causes training to prefer small trees. A small gamma value results in larger individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

          • lambda number

            Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. A high lambda value causes training to favor small leaf weights. This behavior makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. A small lambda value results in large individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

          • Advanced configuration option. A multiplier responsible for determining the maximum number of hyperparameter optimization steps in the Bayesian optimization procedure. The maximum number of steps is determined based on the number of undefined hyperparameters times the maximum optimization rounds per hyperparameter. By default, this value is calculated during hyperparameter optimization.

          • Advanced configuration option. Defines the maximum number of decision trees in the forest. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.

          • Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, no feature importance calculation occurs.

          • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • Defines the seed for the random generator that is used to pick training data. By default, it is randomly generated. Set it to a specific value to use the same training data each time you start a job (assuming other related parameters such as source and analyzed_fields are the same).

          • Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This soft limit combines with the soft_tree_depth_tolerance to penalize trees that exceed the specified depth; the regularized loss increases quickly beyond this depth. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.

          • Advanced configuration option. This option controls how quickly the regularized loss increases when the tree depth exceeds soft_tree_depth_limit. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.01.

          • Defines the number of categories for which the predicted probabilities are reported. It must be non-negative or -1. If it is -1 or greater than the total number of categories, probabilities are reported for all categories; if you have a large number of categories, there could be a significant effect on the size of your destination index. NOTE: To use the AUC ROC evaluation method, num_top_classes must be set to -1 or a value greater than or equal to the total number of categories.

        • Hide outlier_detection attributes Show outlier_detection attributes object
          • Specifies whether the feature influence calculation is enabled.

          • The minimum outlier score that a document needs to have in order to calculate its feature influence score. Value range: 0-1.

          • method string

            The method that outlier detection uses. Available methods are lof, ldof, distance_kth_nn, distance_knn, and ensemble. The default value is ensemble, which means that outlier detection uses an ensemble of different methods and normalises and combines their individual outlier scores to obtain the overall outlier score.

          • Defines the value for how many nearest neighbors each method of outlier detection uses to calculate its outlier score. When the value is not set, different values are used for different ensemble members. This default behavior helps improve the diversity in the ensemble; only override it if you are confident that the value you choose is appropriate for the data set.

          • The proportion of the data set that is assumed to be outlying prior to outlier detection. For example, 0.05 means it is assumed that 5% of values are real outliers and 95% are inliers.

          • If true, the following operation is performed on the columns before computing outlier scores: (x_i - mean(x_i)) / sd(x_i).

        • Hide regression attributes Show regression attributes object
          • alpha number

            Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This parameter affects loss calculations by acting as a multiplier of the tree depth. Higher alpha values result in shallower trees and faster training times. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to zero.

          • dependent_variable string Required

            Defines which field of the document is to be predicted. It must match one of the fields in the index being used to train. If this field is missing from a document, then that document will not be used for training, but a prediction with the trained model will be generated for it. It is also known as continuous target variable. For classification analysis, the data type of the field must be numeric (integer, short, long, byte), categorical (ip or keyword), or boolean. There must be no more than 30 different values in this field. For regression analysis, the data type of the field must be numeric.

          • Advanced configuration option. Controls the fraction of data that is used to compute the derivatives of the loss function for tree training. A small value results in the use of a small fraction of the data. If this value is set to be less than 1, accuracy typically improves. However, too small a value may result in poor convergence for the ensemble and so require more trees. By default, this value is calculated during hyperparameter optimization. It must be greater than zero and less than or equal to 1.

          • Advanced configuration option. Specifies whether the training process should finish if it is not finding any better performing models. If disabled, the training process can take significantly longer and the chance of finding a better performing model is unremarkable.

          • eta number

            Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, larger forests cause slower training. By default, this value is calculated during hyperparameter optimization. It must be a value between 0.001 and 1.

          • Advanced configuration option. Specifies the rate at which eta increases for each new tree that is added to the forest. For example, a rate of 1.05 increases eta by 5% for each extra tree. By default, this value is calculated during hyperparameter optimization. It must be between 0.5 and 2.

          • Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.

          • feature_processors array[object]

            Advanced configuration option. A collection of feature preprocessors that modify one or more included fields. The analysis uses the resulting one or more features instead of the original document field. However, these features are ephemeral; they are not stored in the destination index. Multiple feature_processors entries can refer to the same document fields. Automatic categorical feature encoding still occurs for the fields that are unprocessed by a custom processor or that have categorical values. Use this property only if you want to override the automatic feature encoding of the specified fields.

          • gamma number

            Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. A high gamma value causes training to prefer small trees. A small gamma value results in larger individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

          • lambda number

            Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. A high lambda value causes training to favor small leaf weights. This behavior makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. A small lambda value results in large individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

          • Advanced configuration option. A multiplier responsible for determining the maximum number of hyperparameter optimization steps in the Bayesian optimization procedure. The maximum number of steps is determined based on the number of undefined hyperparameters times the maximum optimization rounds per hyperparameter. By default, this value is calculated during hyperparameter optimization.

          • Advanced configuration option. Defines the maximum number of decision trees in the forest. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.

          • Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, no feature importance calculation occurs.

          • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • Defines the seed for the random generator that is used to pick training data. By default, it is randomly generated. Set it to a specific value to use the same training data each time you start a job (assuming other related parameters such as source and analyzed_fields are the same).

          • Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This soft limit combines with the soft_tree_depth_tolerance to penalize trees that exceed the specified depth; the regularized loss increases quickly beyond this depth. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.

          • Advanced configuration option. This option controls how quickly the regularized loss increases when the tree depth exceeds soft_tree_depth_limit. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.01.

          • The loss function used during regression. Available options are mse (mean squared error), msle (mean squared logarithmic error), huber (Pseudo-Huber loss).

          • A positive number that is used as a parameter to the loss_function.

      • Hide analyzed_fields attributes Show analyzed_fields attributes object
        • includes array[string]

          An array of strings that defines the fields that will be included in the analysis.

        • excludes array[string]

          An array of strings that defines the fields that will be excluded from the analysis. You do not need to add fields with unsupported data types to excludes; these fields are excluded from the analysis automatically.

      • Hide authorization attributes Show authorization attributes object
        • api_key object
          Hide api_key attributes Show api_key attributes object
          • id string Required

            The identifier for the API key.

          • name string Required

            The name of the API key.

        • roles array[string]

          If a user ID was used for the most recent update to the job, its roles at the time of the update are listed in the response.

        • If a service account was used for the most recent update to the job, the account name is listed in the response.

      • Time unit for milliseconds

      • dest object Required
        Hide dest attributes Show dest attributes object
        • index string Required
        • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • id string Required
      • source object Required
        Hide source attributes Show source attributes object
        • index string | array[string] Required
        • Hide runtime_mappings attribute Show runtime_mappings attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

            • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • script object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • _source object
          Hide _source attributes Show _source attributes object
          • includes array[string]

            An array of strings that defines the fields that will be included in the analysis.

          • excludes array[string]

            An array of strings that defines the fields that will be excluded from the analysis. You do not need to add fields with unsupported data types to excludes; these fields are excluded from the analysis automatically.

        • query object

          The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. By default, this property has the following value: {"match_all": {}}.

          Query DSL
      • version string
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
GET /_ml/data_frame/analytics/{id}
curl \
 --request GET 'http://api.example.com/_ml/data_frame/analytics/{id}' \
 --header "Authorization: $API_KEY"
































Get data frame analytics job stats Added in 7.3.0

GET /_ml/data_frame/analytics/_stats

Get usage information for data frame analytics jobs.

Query parameters

  • Specifies what to do when the request:

    1. Contains wildcard expressions and there are no data frame analytics jobs that match.
    2. Contains the _all string or no identifiers and there are no matches.
    3. Contains wildcard expressions and there are only partial matches.

    The default value returns an empty data_frame_analytics array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.

  • from number

    Skips the specified number of data frame analytics jobs.

  • size number

    Specifies the maximum number of data frame analytics jobs to obtain.

  • verbose boolean

    Defines whether the stats response should be verbose.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • data_frame_analytics array[object] Required

      An array of objects that contain usage information for data frame analytics jobs, which are sorted by the id value in ascending order.

      Hide data_frame_analytics attributes Show data_frame_analytics attributes object
      • Hide analysis_stats attributes Show analysis_stats attributes object
        • Hide classification_stats attributes Show classification_stats attributes object
          • hyperparameters object Required
            Hide hyperparameters attributes Show hyperparameters attributes object
            • alpha number

              Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This parameter affects loss calculations by acting as a multiplier of the tree depth. Higher alpha values result in shallower trees and faster training times. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to zero.

            • lambda number

              Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. A high lambda value causes training to favor small leaf weights. This behavior makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. A small lambda value results in large individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

            • gamma number

              Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. A high gamma value causes training to prefer small trees. A small gamma value results in larger individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

            • eta number

              Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, larger forests cause slower training. By default, this value is calculated during hyperparameter optimization. It must be a value between 0.001 and 1.

            • Advanced configuration option. Specifies the rate at which eta increases for each new tree that is added to the forest. For example, a rate of 1.05 increases eta by 5% for each extra tree. By default, this value is calculated during hyperparameter optimization. It must be between 0.5 and 2.

            • Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.

            • Advanced configuration option. Controls the fraction of data that is used to compute the derivatives of the loss function for tree training. A small value results in the use of a small fraction of the data. If this value is set to be less than 1, accuracy typically improves. However, too small a value may result in poor convergence for the ensemble and so require more trees. By default, this value is calculated during hyperparameter optimization. It must be greater than zero and less than or equal to 1.

            • If the algorithm fails to determine a non-trivial tree (more than a single leaf), this parameter determines how many such consecutive failures are tolerated. Once the number of attempts exceeds the threshold, the forest training stops.

            • Advanced configuration option. A multiplier responsible for determining the maximum number of hyperparameter optimization steps in the Bayesian optimization procedure. The maximum number of steps is determined based on the number of undefined hyperparameters times the maximum optimization rounds per hyperparameter. By default, this value is calculated during hyperparameter optimization.

            • Advanced configuration option. Defines the maximum number of decision trees in the forest. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.

            • The maximum number of folds for the cross-validation procedure.

            • Determines the maximum number of splits for every feature that can occur in a decision tree when the tree is trained.

            • Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This soft limit combines with the soft_tree_depth_tolerance to penalize trees that exceed the specified depth; the regularized loss increases quickly beyond this depth. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.

            • Advanced configuration option. This option controls how quickly the regularized loss increases when the tree depth exceeds soft_tree_depth_limit. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.01.

          • iteration number Required

            The number of iterations on the analysis.

          • Time unit for milliseconds

          • timing_stats object Required
            Hide timing_stats attributes Show timing_stats attributes object
          • validation_loss object Required
            Hide validation_loss attributes Show validation_loss attributes object
            • fold_values array[string] Required

              Validation loss values for every added decision tree during the forest growing procedure.

            • loss_type string Required

              The type of the loss metric. For example, binomial_logistic.

        • Hide outlier_detection_stats attributes Show outlier_detection_stats attributes object
          • parameters object Required
            Hide parameters attributes Show parameters attributes object
            • Specifies whether the feature influence calculation is enabled.

            • The minimum outlier score that a document needs to have in order to calculate its feature influence score. Value range: 0-1

            • method string

              The method that outlier detection uses. Available methods are lof, ldof, distance_kth_nn, distance_knn, and ensemble. The default value is ensemble, which means that outlier detection uses an ensemble of different methods and normalises and combines their individual outlier scores to obtain the overall outlier score.

            • Defines the value for how many nearest neighbors each method of outlier detection uses to calculate its outlier score. When the value is not set, different values are used for different ensemble members. This default behavior helps improve the diversity in the ensemble; only override it if you are confident that the value you choose is appropriate for the data set.

            • The proportion of the data set that is assumed to be outlying prior to outlier detection. For example, 0.05 means it is assumed that 5% of values are real outliers and 95% are inliers.

            • If true, the following operation is performed on the columns before computing outlier scores: (x_i - mean(x_i)) / sd(x_i).

          • Time unit for milliseconds

          • timing_stats object Required
            Hide timing_stats attributes Show timing_stats attributes object
        • Hide regression_stats attributes Show regression_stats attributes object
          • hyperparameters object Required
            Hide hyperparameters attributes Show hyperparameters attributes object
            • alpha number

              Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This parameter affects loss calculations by acting as a multiplier of the tree depth. Higher alpha values result in shallower trees and faster training times. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to zero.

            • lambda number

              Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. A high lambda value causes training to favor small leaf weights. This behavior makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. A small lambda value results in large individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

            • gamma number

              Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. A high gamma value causes training to prefer small trees. A small gamma value results in larger individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

            • eta number

              Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, larger forests cause slower training. By default, this value is calculated during hyperparameter optimization. It must be a value between 0.001 and 1.

            • Advanced configuration option. Specifies the rate at which eta increases for each new tree that is added to the forest. For example, a rate of 1.05 increases eta by 5% for each extra tree. By default, this value is calculated during hyperparameter optimization. It must be between 0.5 and 2.

            • Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.

            • Advanced configuration option. Controls the fraction of data that is used to compute the derivatives of the loss function for tree training. A small value results in the use of a small fraction of the data. If this value is set to be less than 1, accuracy typically improves. However, too small a value may result in poor convergence for the ensemble and so require more trees. By default, this value is calculated during hyperparameter optimization. It must be greater than zero and less than or equal to 1.

            • If the algorithm fails to determine a non-trivial tree (more than a single leaf), this parameter determines how many such consecutive failures are tolerated. Once the number of attempts exceeds the threshold, the forest training stops.

            • Advanced configuration option. A multiplier responsible for determining the maximum number of hyperparameter optimization steps in the Bayesian optimization procedure. The maximum number of steps is determined based on the number of undefined hyperparameters times the maximum optimization rounds per hyperparameter. By default, this value is calculated during hyperparameter optimization.

            • Advanced configuration option. Defines the maximum number of decision trees in the forest. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.

            • The maximum number of folds for the cross-validation procedure.

            • Determines the maximum number of splits for every feature that can occur in a decision tree when the tree is trained.

            • Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This soft limit combines with the soft_tree_depth_tolerance to penalize trees that exceed the specified depth; the regularized loss increases quickly beyond this depth. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.

            • Advanced configuration option. This option controls how quickly the regularized loss increases when the tree depth exceeds soft_tree_depth_limit. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.01.

          • iteration number Required

            The number of iterations on the analysis.

          • Time unit for milliseconds

          • timing_stats object Required
            Hide timing_stats attributes Show timing_stats attributes object
          • validation_loss object Required
            Hide validation_loss attributes Show validation_loss attributes object
            • fold_values array[string] Required

              Validation loss values for every added decision tree during the forest growing procedure.

            • loss_type string Required

              The type of the loss metric. For example, binomial_logistic.

      • For running jobs only, contains messages relating to the selection of a node to run the job.

      • data_counts object Required
        Hide data_counts attributes Show data_counts attributes object
        • skipped_docs_count number Required

          The number of documents that are skipped during the analysis because they contained values that are not supported by the analysis. For example, outlier detection does not support missing fields so it skips documents with missing fields. Likewise, all types of analysis skip documents that contain arrays with more than one element.

        • test_docs_count number Required

          The number of documents that are not used for training the model and can be used for testing.

        • training_docs_count number Required

          The number of documents that are used for training the model.

      • id string Required
      • memory_usage object Required
        Hide memory_usage attributes Show memory_usage attributes object
        • This value is present when the status is hard_limit and it is a new estimate of how much memory the job needs.

        • peak_usage_bytes number Required

          The number of bytes used at the highest peak of memory usage.

        • status string Required

          The memory usage status.

        • Time unit for milliseconds

      • node object
        Hide node attributes Show node attributes object
      • progress array[object] Required

        The progress report of the data frame analytics job by phase.

        Hide progress attributes Show progress attributes object
        • phase string Required

          Defines the phase of the data frame analytics job.

        • progress_percent number Required

          The progress that the data frame analytics job has made expressed in percentage.

      • state string Required

        Values are started, stopped, starting, stopping, or failed.

GET /_ml/data_frame/analytics/_stats
curl \
 --request GET 'http://api.example.com/_ml/data_frame/analytics/_stats' \
 --header "Authorization: $API_KEY"

Clear trained model deployment cache Added in 8.5.0

POST /_ml/trained_models/{model_id}/deployment/cache/_clear

A trained model deployment may have an inference cache enabled. As requests are handled by each allocated node, their responses may be cached on that individual node. Calling this API clears the caches without restarting the deployment. The cache is cleared on all nodes where the trained model is assigned.

Path parameters

  • model_id string Required

    The unique identifier of the trained model.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
POST /_ml/trained_models/{model_id}/deployment/cache/_clear
curl \
 --request POST 'http://api.example.com/_ml/trained_models/{model_id}/deployment/cache/_clear' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response when clearing the inference cache.
{
  "cleared": true
}

Path parameters

  • model_id string | array[string] Required

    The unique identifier of the trained model or a model alias.

    You can get information for multiple trained models in a single API request by using a comma-separated list of model IDs or a wildcard expression.

Query parameters

  • Specifies what to do when the request:

    • Contains wildcard expressions and there are no models that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, it returns an empty array when there are no matches and the subset of results when there are partial matches.

  • Specifies whether the included model definition should be returned as a JSON map (true) or in a custom compressed format (false).

  • Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.

  • from number

    Skips the specified number of models.

  • include string

    A comma delimited string of optional fields to include in the response body.

    Supported values include:

    • definition: Includes the model definition.
    • feature_importance_baseline: Includes the baseline for feature importance values.
    • hyperparameters: Includes the information about hyperparameters used to train the model. This information consists of the value, the absolute and relative importance of the hyperparameter as well as an indicator of whether it was specified by the user or tuned during hyperparameter optimization.
    • total_feature_importance: Includes the total feature importance for the training data set. The baseline and total feature importance values are returned in the metadata field in the response body.
    • definition_status: Includes the model definition status.

    Values are definition, feature_importance_baseline, hyperparameters, total_feature_importance, or definition_status.

  • size number

    Specifies the maximum number of models to obtain.

  • tags string | array[string]

    A comma delimited string of tags. A trained model can have many tags, or none. When supplied, only trained models that contain all the supplied tags are returned.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • trained_model_configs array[object] Required

      An array of trained model resources, which are sorted by the model_id value in ascending order.

      Hide trained_model_configs attributes Show trained_model_configs attributes object
      • model_id string Required
      • Values are tree_ensemble, lang_ident, or pytorch.

      • tags array[string] Required

        A comma delimited string of tags. A trained model can have many tags, or none.

      • version string
      • Information on the creator of the trained model.

      • create_time string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      • Any field map described in the inference configuration takes precedence.

        Hide default_field_map attribute Show default_field_map attribute object
        • * string Additional properties
      • The free-text description of the trained model.

      • The estimated heap usage in bytes to keep the trained model in memory.

      • The estimated number of operations to use the trained model.

      • True if the full model definition is present.

      • Inference configuration provided when storing the model config

        Hide inference_config attributes Show inference_config attributes object
        • Hide regression attributes Show regression attributes object
        • Hide classification attributes Show classification attributes object
          • Specifies the number of top class predictions to return. Defaults to 0.

          • Specifies the maximum number of feature importance values per document.

          • Specifies the type of the predicted field to write. Acceptable values are: string, number, boolean. When boolean is provided, 1.0 is transformed to true and 0.0 to false.

          • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

          • Specifies the field to which the top classes are written. Defaults to top_classes.

        • Hide text_classification attributes Show text_classification attributes object
          • Specifies the number of top class predictions to return. Defaults to 0.

          • Tokenization options stored in inference configuration

            Hide tokenization attributes Show tokenization attributes object
          • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

          • Classification labels to apply other than the stored labels. Must have the same dimensions as the default configured labels.

          • Hide vocabulary attribute Show vocabulary attribute object
        • Hide zero_shot_classification attributes Show zero_shot_classification attributes object
          • Tokenization options stored in inference configuration

            Hide tokenization attributes Show tokenization attributes object
          • Hypothesis template used when tokenizing labels for prediction

          • classification_labels array[string] Required

            The zero-shot classification labels, indicating entailment, neutral, and contradiction. It must contain exactly and only entailment, neutral, and contradiction.

          • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

          • Indicates if more than one true label exists.

          • labels array[string]

            The labels to predict.

        • Hide fill_mask attributes Show fill_mask attributes object
          • The string/token which will be removed from incoming documents and replaced with the inference prediction(s). In a response, this field contains the mask token for the specified model/tokenizer. Each model and tokenizer has a predefined mask token which cannot be changed. Thus, it is recommended not to set this value in requests. However, if this field is present in a request, its value must match the predefined value for that model/tokenizer, otherwise the request will fail.

          • Specifies the number of top class predictions to return. Defaults to 0.

          • Tokenization options stored in inference configuration

            Hide tokenization attributes Show tokenization attributes object
          • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

          • vocabulary object Required
            Hide vocabulary attribute Show vocabulary attribute object
        • Hide learning_to_rank attributes Show learning_to_rank attributes object
        • ner object
          Hide ner attributes Show ner attributes object
        • Hide pass_through attributes Show pass_through attributes object
        • Hide text_embedding attributes Show text_embedding attributes object
        • Hide text_expansion attributes Show text_expansion attributes object
        • Hide question_answering attributes Show question_answering attributes object
      • input object Required
        Hide input attribute Show input attribute object
        • field_names array[string] Required

          An array of input field names for the model.

      • The license level of the trained model.

      • metadata object
        Hide metadata attributes Show metadata attributes object
        • model_aliases array[string]
        • An object that contains the baseline for feature importance values. For regression analysis, it is a single value. For classification analysis, there is a value for each class.

          Hide feature_importance_baseline attribute Show feature_importance_baseline attribute object
          • * string Additional properties
        • hyperparameters array[object]

          List of the available hyperparameters, either optimized during the fine_parameter_tuning phase or specified by the user.

          Hide hyperparameters attributes Show hyperparameters attributes object
          • A positive number showing how much the parameter influences the variation of the loss function. This applies only to hyperparameters whose values are not specified by the user but are tuned during hyperparameter optimization.

          • name string Required
          • A number between 0 and 1 showing the proportion of influence on the variation of the loss function among all tuned hyperparameters. This applies only to hyperparameters whose values are not specified by the user but are tuned during hyperparameter optimization.

          • supplied boolean Required

            Indicates if the hyperparameter is specified by the user (true) or optimized (false).

          • value number Required

            The value of the hyperparameter, either optimized or specified by the user.

        • An array of the total feature importance for each feature used from the training data set. This array of objects is returned if data frame analytics trained the model and the request includes total_feature_importance in the include request parameter.

          Hide total_feature_importance attributes Show total_feature_importance attributes object
          • feature_name string Required
          • importance array[object] Required

            A collection of feature importance statistics related to the training data set for this particular feature.

          • classes array[object] Required

            If the trained model is a classification model, feature importance statistics are gathered per target class value.

      • Hide model_package attributes Show model_package attributes object
      • location object
        Hide location attribute Show location attribute object
        • index object Required
          Hide index attribute Show index attribute object
      • Hide prefix_strings attributes Show prefix_strings attributes object
        • ingest string

          String prepended to input at ingest

GET /_ml/trained_models/{model_id}
curl \
 --request GET 'http://api.example.com/_ml/trained_models/{model_id}' \
 --header "Authorization: $API_KEY"




Delete an unreferenced trained model Added in 7.10.0

DELETE /_ml/trained_models/{model_id}

The request deletes a trained inference model that is not referenced by an ingest pipeline.

Path parameters

  • model_id string Required

    The unique identifier of the trained model.

Query parameters

  • force boolean

    Forcefully deletes a trained model that is referenced by ingest pipelines or has a started deployment.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ml/trained_models/{model_id}
curl \
 --request DELETE 'http://api.example.com/_ml/trained_models/{model_id}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response when deleting an existing trained inference model.
{
  "acknowledged": true
}








Query parameters

  • Specifies what to do when the request:

    • Contains wildcard expressions and there are no models that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, it returns an empty array when there are no matches and the subset of results when there are partial matches.

  • Specifies whether the included model definition should be returned as a JSON map (true) or in a custom compressed format (false).

  • Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.

  • from number

    Skips the specified number of models.

  • include string

    A comma delimited string of optional fields to include in the response body.

    Supported values include:

    • definition: Includes the model definition.
    • feature_importance_baseline: Includes the baseline for feature importance values.
    • hyperparameters: Includes the information about hyperparameters used to train the model. This information consists of the value, the absolute and relative importance of the hyperparameter as well as an indicator of whether it was specified by the user or tuned during hyperparameter optimization.
    • total_feature_importance: Includes the total feature importance for the training data set. The baseline and total feature importance values are returned in the metadata field in the response body.
    • definition_status: Includes the model definition status.

    Values are definition, feature_importance_baseline, hyperparameters, total_feature_importance, or definition_status.

  • size number

    Specifies the maximum number of models to obtain.

  • tags string | array[string]

    A comma delimited string of tags. A trained model can have many tags, or none. When supplied, only trained models that contain all the supplied tags are returned.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • trained_model_configs array[object] Required

      An array of trained model resources, which are sorted by the model_id value in ascending order.

      Hide trained_model_configs attributes Show trained_model_configs attributes object
      • model_id string Required
      • Values are tree_ensemble, lang_ident, or pytorch.

      • tags array[string] Required

        A comma delimited string of tags. A trained model can have many tags, or none.

      • version string
      • Information on the creator of the trained model.

      • create_time string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      • Any field map described in the inference configuration takes precedence.

        Hide default_field_map attribute Show default_field_map attribute object
        • * string Additional properties
      • The free-text description of the trained model.

      • The estimated heap usage in bytes to keep the trained model in memory.

      • The estimated number of operations to use the trained model.

      • True if the full model definition is present.

      • Inference configuration provided when storing the model config

        Hide inference_config attributes Show inference_config attributes object
        • Hide regression attributes Show regression attributes object
        • Hide classification attributes Show classification attributes object
          • Specifies the number of top class predictions to return. Defaults to 0.

          • Specifies the maximum number of feature importance values per document.

          • Specifies the type of the predicted field to write. Acceptable values are: string, number, boolean. When boolean is provided, 1.0 is transformed to true and 0.0 to false.

          • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

          • Specifies the field to which the top classes are written. Defaults to top_classes.

        • Hide text_classification attributes Show text_classification attributes object
          • Specifies the number of top class predictions to return. Defaults to 0.

          • Tokenization options stored in inference configuration

            Hide tokenization attributes Show tokenization attributes object
          • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

          • Classification labels to apply other than the stored labels. Must have the same dimensions as the default configured labels.

          • Hide vocabulary attribute Show vocabulary attribute object
        • Hide zero_shot_classification attributes Show zero_shot_classification attributes object
          • Tokenization options stored in inference configuration

            Hide tokenization attributes Show tokenization attributes object
          • Hypothesis template used when tokenizing labels for prediction

          • classification_labels array[string] Required

            The zero-shot classification labels, indicating entailment, neutral, and contradiction. It must contain exactly and only entailment, neutral, and contradiction.

          • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

          • Indicates if more than one true label exists.

          • labels array[string]

            The labels to predict.

        • Hide fill_mask attributes Show fill_mask attributes object
          • The string/token which will be removed from incoming documents and replaced with the inference prediction(s). In a response, this field contains the mask token for the specified model/tokenizer. Each model and tokenizer has a predefined mask token which cannot be changed. Thus, it is recommended not to set this value in requests. However, if this field is present in a request, its value must match the predefined value for that model/tokenizer, otherwise the request will fail.

          • Specifies the number of top class predictions to return. Defaults to 0.

          • Tokenization options stored in inference configuration

            Hide tokenization attributes Show tokenization attributes object
          • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

          • vocabulary object Required
            Hide vocabulary attribute Show vocabulary attribute object
        • Hide learning_to_rank attributes Show learning_to_rank attributes object
        • ner object
          Hide ner attributes Show ner attributes object
        • Hide pass_through attributes Show pass_through attributes object
        • Hide text_embedding attributes Show text_embedding attributes object
        • Hide text_expansion attributes Show text_expansion attributes object
        • Hide question_answering attributes Show question_answering attributes object
      • input object Required
        Hide input attribute Show input attribute object
        • field_names array[string] Required

          An array of input field names for the model.

      • The license level of the trained model.

      • metadata object
        Hide metadata attributes Show metadata attributes object
        • model_aliases array[string]
        • An object that contains the baseline for feature importance values. For regression analysis, it is a single value. For classification analysis, there is a value for each class.

          Hide feature_importance_baseline attribute Show feature_importance_baseline attribute object
          • * string Additional properties
        • hyperparameters array[object]

          List of the available hyperparameters, either optimized during the fine_parameter_tuning phase or specified by the user.

          Hide hyperparameters attributes Show hyperparameters attributes object
          • A positive number showing how much the parameter influences the variation of the loss function. This applies only to hyperparameters whose values are not specified by the user but are tuned during hyperparameter optimization.

          • name string Required
          • A number between 0 and 1 showing the proportion of influence on the variation of the loss function among all tuned hyperparameters. This applies only to hyperparameters whose values are not specified by the user but are tuned during hyperparameter optimization.

          • supplied boolean Required

            Indicates if the hyperparameter is specified by the user (true) or optimized (false).

          • value number Required

            The value of the hyperparameter, either optimized or specified by the user.

        • An array of the total feature importance for each feature used from the training data set. This array of objects is returned if data frame analytics trained the model and the request includes total_feature_importance in the include request parameter.

          Hide total_feature_importance attributes Show total_feature_importance attributes object
          • feature_name string Required
          • importance array[object] Required

            A collection of feature importance statistics related to the training data set for this particular feature.

          • classes array[object] Required

            If the trained model is a classification model, feature importance statistics are gathered per target class value.

      • Hide model_package attributes Show model_package attributes object
      • location object
        Hide location attribute Show location attribute object
        • index object Required
          Hide index attribute Show index attribute object
      • Hide prefix_strings attributes Show prefix_strings attributes object
        • ingest string

          String prepended to input at ingest

GET /_ml/trained_models
curl \
 --request GET 'http://api.example.com/_ml/trained_models' \
 --header "Authorization: $API_KEY"








Evaluate a trained model Added in 8.3.0

POST /_ml/trained_models/{model_id}/_infer

Path parameters

  • model_id string Required

    The unique identifier of the trained model.

Query parameters

  • timeout string

    Controls the amount of time to wait for inference results.

application/json

Body Required

  • docs array[object] Required

    An array of objects to pass to the model for inference. The objects should contain the fields matching your configured trained model input. Typically, for NLP models, the field name is text_field. Currently, for NLP models, only a single value is allowed.

    Hide docs attribute Show docs attribute object
    • * object Additional properties
  • Hide inference_config attributes Show inference_config attributes object
    • Hide regression attributes Show regression attributes object
    • Hide classification attributes Show classification attributes object
      • Specifies the number of top class predictions to return. Defaults to 0.

      • Specifies the maximum number of feature importance values per document.

      • Specifies the type of the predicted field to write. Acceptable values are: string, number, boolean. When boolean is provided, 1.0 is transformed to true and 0.0 to false.

      • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • Specifies the field to which the top classes are written. Defaults to top_classes.

    • Hide text_classification attributes Show text_classification attributes object
      • Specifies the number of top class predictions to return. Defaults to 0.

      • Hide tokenization attributes Show tokenization attributes object
        • truncate string

          Values are first, second, or none.

        • span number

          Span options to apply

      • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • Classification labels to apply other than the stored labels. Must have the same dimensions as the default configured labels.

    • Hide zero_shot_classification attributes Show zero_shot_classification attributes object
      • Hide tokenization attributes Show tokenization attributes object
        • truncate string

          Values are first, second, or none.

        • span number

          Span options to apply

      • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • Update the configured multi label option. Indicates if more than one true label exists. Defaults to the configured value.

      • labels array[string] Required

        The labels to predict.

    • Hide fill_mask attributes Show fill_mask attributes object
      • Specifies the number of top class predictions to return. Defaults to 0.

      • Hide tokenization attributes Show tokenization attributes object
        • truncate string

          Values are first, second, or none.

        • span number

          Span options to apply

      • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

    • ner object
      Hide ner attributes Show ner attributes object
      • Hide tokenization attributes Show tokenization attributes object
        • truncate string

          Values are first, second, or none.

        • span number

          Span options to apply

      • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

    • Hide pass_through attributes Show pass_through attributes object
      • Hide tokenization attributes Show tokenization attributes object
        • truncate string

          Values are first, second, or none.

        • span number

          Span options to apply

      • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

    • Hide text_embedding attributes Show text_embedding attributes object
      • Hide tokenization attributes Show tokenization attributes object
        • truncate string

          Values are first, second, or none.

        • span number

          Span options to apply

      • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

    • Hide text_expansion attributes Show text_expansion attributes object
      • Hide tokenization attributes Show tokenization attributes object
        • truncate string

          Values are first, second, or none.

        • span number

          Span options to apply

      • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

    • Hide question_answering attributes Show question_answering attributes object
      • question string Required

        The question to answer given the inference context

      • Specifies the number of top class predictions to return. Defaults to 0.

      • Hide tokenization attributes Show tokenization attributes object
        • truncate string

          Values are first, second, or none.

        • span number

          Span options to apply

      • The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • The maximum answer length to consider for extraction

Responses

POST /_ml/trained_models/{model_id}/_infer
curl \
 --request POST 'http://api.example.com/_ml/trained_models/{model_id}/_infer' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"docs":[{"additionalProperty1":{},"additionalProperty2":{}}],"inference_config":{"regression":{"results_field":"string","num_top_feature_importance_values":42.0},"classification":{"num_top_classes":42.0,"num_top_feature_importance_values":42.0,"prediction_field_type":"string","results_field":"string","top_classes_results_field":"string"},"text_classification":{"num_top_classes":42.0,"tokenization":{"truncate":"first","span":42.0},"results_field":"string","classification_labels":["string"]},"zero_shot_classification":{"tokenization":{"truncate":"first","span":42.0},"results_field":"string","multi_label":true,"labels":["string"]},"fill_mask":{"num_top_classes":42.0,"tokenization":{"truncate":"first","span":42.0},"results_field":"string"},"ner":{"tokenization":{"truncate":"first","span":42.0},"results_field":"string"},"pass_through":{"tokenization":{"truncate":"first","span":42.0},"results_field":"string"},"text_embedding":{"tokenization":{"truncate":"first","span":42.0},"results_field":"string"},"text_expansion":{"tokenization":{"truncate":"first","span":42.0},"results_field":"string"},"question_answering":{"question":"string","num_top_classes":42.0,"tokenization":{"truncate":"first","span":42.0},"results_field":"string","max_answer_length":42.0}}}'








Start a trained model deployment Added in 8.0.0

POST /_ml/trained_models/{model_id}/deployment/_start

It allocates the model to every machine learning node.

Path parameters

  • model_id string Required

    The unique identifier of the trained model. Currently, only PyTorch models are supported.

Query parameters

  • cache_size number | string

    The inference cache size (in memory outside the JVM heap) per node for the model. The default value is the same size as the model_size_bytes. To disable the cache, 0b can be provided.

  • A unique identifier for the deployment of the model.

  • The number of model allocations on each node where the model is deployed. All allocations on a node share the same copy of the model in memory but use a separate set of threads to evaluate the model. Increasing this value generally increases the throughput. If this setting is greater than the number of hardware threads it will automatically be changed to a value less than the number of hardware threads. If adaptive_allocations is enabled, do not set this value, because it’s automatically set.

  • priority string

    The deployment priority.

    Values are normal or low.

  • Specifies the number of inference requests that are allowed in the queue. After the number of requests exceeds this value, new requests are rejected with a 429 error.

  • Sets the number of threads used by each model allocation during inference. Increasing this value generally increases the inference speed. The inference process is a compute-bound process; any number greater than the number of available hardware threads on the machine does not increase the inference speed. If this setting is greater than the number of hardware threads, it will automatically be changed to a value less than the number of hardware threads.

  • timeout string

    Specifies the amount of time to wait for the model to deploy.

  • wait_for string

    Specifies the allocation status to wait for before returning.

    Supported values include:

    • started: The trained model is started on at least one node.
    • starting: Trained model deployment is starting but it is not yet deployed on any nodes.
    • fully_allocated: Trained model deployment has started on all valid nodes.

    Values are started, starting, or fully_allocated.

application/json

Body

  • Hide adaptive_allocations attributes Show adaptive_allocations attributes object
    • enabled boolean Required

      If true, adaptive_allocations is enabled

    • Specifies the minimum number of allocations to scale to. If set, it must be greater than or equal to 0. If not defined, the deployment scales to 0.

    • Specifies the maximum number of allocations to scale to. If set, it must be greater than or equal to min_number_of_allocations.

Responses

POST /_ml/trained_models/{model_id}/deployment/_start
curl \
 --request POST 'http://api.example.com/_ml/trained_models/{model_id}/deployment/_start' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"adaptive_allocations":{"enabled":true,"min_number_of_allocations":42.0,"max_number_of_allocations":42.0}}'

Get the migration reindexing status Technical preview

GET /_migration/reindex/{index}/_status

Get the status of a migration reindex attempt for a data stream or index.

Path parameters

  • index string | array[string] Required

    The index or data stream name.

Responses

GET /_migration/reindex/{index}/_status
curl \
 --request GET 'http://api.example.com/_migration/reindex/{index}/_status' \
 --header "Authorization: $API_KEY"








Get deprecation information Added in 6.1.0

GET /{index}/_migration/deprecations

Get information about different cluster, node, and index level settings that use deprecated features that will be removed or changed in the next major version.

TIP: This API is designed for indirect use by the Upgrade Assistant. We strongly recommend that you use the Upgrade Assistant.

Path parameters

  • index string Required

    Comma-separated list of data streams or indices to check. Wildcard (*) expressions are supported.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • cluster_settings array[object] Required

      Cluster-level deprecation warnings.

      Hide cluster_settings attributes Show cluster_settings attributes object
      • details string

        Optional details about the deprecation warning.

      • level string Required

        Values are none, info, warning, or critical.

      • message string Required

        Descriptive information about the deprecation warning.

      • url string Required

        A link to the breaking change documentation, where you can find more information about this change.

      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
    • index_settings object Required

      Index warnings are sectioned off per index and can be filtered using an index-pattern in the query. This section includes warnings for the backing indices of data streams specified in the request path.

      Hide index_settings attribute Show index_settings attribute object
      • * array[object] Additional properties
        Hide * attributes Show * attributes object
        • details string

          Optional details about the deprecation warning.

        • level string Required

          Values are none, info, warning, or critical.

        • message string Required

          Descriptive information about the deprecation warning.

        • url string Required

          A link to the breaking change documentation, where you can find more information about this change.

        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
    • data_streams object Required
      Hide data_streams attribute Show data_streams attribute object
      • * array[object] Additional properties
        Hide * attributes Show * attributes object
        • details string

          Optional details about the deprecation warning.

        • level string Required

          Values are none, info, warning, or critical.

        • message string Required

          Descriptive information about the deprecation warning.

        • url string Required

          A link to the breaking change documentation, where you can find more information about this change.

        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
    • node_settings array[object] Required

      Node-level deprecation warnings. Since only a subset of your nodes might incorporate these settings, it is important to read the details section for more information about which nodes are affected.

      Hide node_settings attributes Show node_settings attributes object
      • details string

        Optional details about the deprecation warning.

      • level string Required

        Values are none, info, warning, or critical.

      • message string Required

        Descriptive information about the deprecation warning.

      • url string Required

        A link to the breaking change documentation, where you can find more information about this change.

      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
    • ml_settings array[object] Required

      Machine learning-related deprecation warnings.

      Hide ml_settings attributes Show ml_settings attributes object
      • details string

        Optional details about the deprecation warning.

      • level string Required

        Values are none, info, warning, or critical.

      • message string Required

        Descriptive information about the deprecation warning.

      • url string Required

        A link to the breaking change documentation, where you can find more information about this change.

      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
    • templates object Required

      Template warnings are sectioned off per template and include deprecations for both component templates and index templates.

      Hide templates attribute Show templates attribute object
      • * array[object] Additional properties
        Hide * attributes Show * attributes object
        • details string

          Optional details about the deprecation warning.

        • level string Required

          Values are none, info, warning, or critical.

        • message string Required

          Descriptive information about the deprecation warning.

        • url string Required

          A link to the breaking change documentation, where you can find more information about this change.

        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
    • ilm_policies object Required

      ILM policy warnings are sectioned off per policy.

      Hide ilm_policies attribute Show ilm_policies attribute object
      • * array[object] Additional properties
        Hide * attributes Show * attributes object
        • details string

          Optional details about the deprecation warning.

        • level string Required

          Values are none, info, warning, or critical.

        • message string Required

          Descriptive information about the deprecation warning.

        • url string Required

          A link to the breaking change documentation, where you can find more information about this change.

        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
GET /{index}/_migration/deprecations
curl \
 --request GET 'http://api.example.com/{index}/_migration/deprecations' \
 --header "Authorization: $API_KEY"
Response examples (200)
An abbreviated response from `GET /_migration/deprecations`.
{
  "cluster_settings": [
    {
      "level": "critical",
      "message": "Cluster name cannot contain ':'",
      "url": "https://www.elastic.co/guide/en/elasticsearch/reference/7.0/breaking-changes-7.0.html#_literal_literal_is_no_longer_allowed_in_cluster_name",
      "details": "This cluster is named [mycompany:logging], which contains the illegal character ':'."
    }
  ],
  "node_settings": [],
  "index_settings": {
    "logs:apache": [
      {
        "level": "warning",
        "message": "Index name cannot contain ':'",
        "url": "https://www.elastic.co/guide/en/elasticsearch/reference/7.0/breaking-changes-7.0.html#_literal_literal_is_no_longer_allowed_in_index_name",
        "details": "This index is named [logs:apache], which contains the illegal character ':'."
      }
    ]
  },
  "ml_settings": []
}

Create or update a query rule Added in 8.15.0

PUT /_query_rules/{ruleset_id}/_rule/{rule_id}

Create or update a query rule within a query ruleset.

IMPORTANT: Due to limitations within pinned queries, you can only pin documents using ids or docs, but cannot use both in a single rule. To avoid errors, use one or the other in query rulesets. Additionally, pinned queries have a maximum limit of 100 pinned hits. If multiple matching rules pin more than 100 documents, only the first 100 documents are pinned in the order they are specified in the ruleset.

Path parameters

  • ruleset_id string Required

    The unique identifier of the query ruleset containing the rule to be created or updated.

  • rule_id string Required

    The unique identifier of the query rule within the specified ruleset to be created or updated.

application/json

Body Required

  • type string Required

    Values are pinned or exclude.

  • criteria object | array[object] Required

    The criteria that must be met for the rule to be applied. If multiple criteria are specified for a rule, all criteria must be met for the rule to be applied.

    One of:
    Hide attributes Show attributes
    • type string Required

      Values are global, exact, exact_fuzzy, fuzzy, prefix, suffix, contains, lt, lte, gt, gte, or always.

    • metadata string

      The metadata field to match against. This metadata will be used to match against match_criteria sent in the rule. It is required for all criteria types except always.

    • values array[object]

      The values to match against the metadata field. Only one value must match for the criteria to be met. It is required for all criteria types except always.

  • actions object Required
    Hide actions attributes Show actions attributes object
    • ids array[string]

      The unique document IDs of the documents to apply the rule to. Only one of ids or docs may be specified and at least one must be specified.

    • docs array[object]

      The documents to apply the rule to. Only one of ids or docs may be specified and at least one must be specified. There is a maximum value of 100 documents in a rule. You can specify the following attributes for each document:

      • _index: The index of the document to pin.
      • _id: The unique document ID.
      Hide docs attributes Show docs attributes object
  • priority number

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_query_rules/{ruleset_id}/_rule/{rule_id}
curl \
 --request PUT 'http://api.example.com/_query_rules/{ruleset_id}/_rule/{rule_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"type":"pinned","criteria":[{"type":"contains","metadata":"user_query","values":["pugs","puggles"]}],"actions":{"ids":["id1","id2"]}}'
Request example
Run `PUT _query_rules/my-ruleset/_rule/my-rule1` to create or update a query rule that pins documents when the `user_query` metadata value contains `pugs` or `puggles`. The ruleset, rule, metadata field, and document IDs shown here are illustrative.
{
  "type": "pinned",
  "criteria": [
    {
      "type": "contains",
      "metadata": "user_query",
      "values": ["pugs", "puggles"]
    }
  ],
  "actions": {
    "ids": [
      "id1",
      "id2"
    ]
  }
}
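Response examples (200)
An illustrative response when the rule did not previously exist. If an existing rule is overwritten, `updated` is returned instead, as described above.
{
  "result": "created"
}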

Delete a query rule Added in 8.15.0

DELETE /_query_rules/{ruleset_id}/_rule/{rule_id}

Delete a query rule within a query ruleset. This is a destructive action that is only recoverable by re-adding the same rule with the create or update query rule API.

Path parameters

  • ruleset_id string Required

    The unique identifier of the query ruleset containing the rule to delete

  • rule_id string Required

    The unique identifier of the query rule within the specified ruleset to delete

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_query_rules/{ruleset_id}/_rule/{rule_id}
curl \
 --request DELETE 'http://api.example.com/_query_rules/{ruleset_id}/_rule/{rule_id}' \
 --header "Authorization: $API_KEY"

Clear a scrolling search

DELETE /_search/scroll

Clear the search context and results for a scrolling search.

External documentation
application/json

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • succeeded boolean Required

      If true, the request succeeded. This does not indicate whether any scrolling search requests were cleared.

    • num_freed number Required

      The number of scrolling search requests cleared.

DELETE /_search/scroll
curl \
 --request DELETE 'http://api.example.com/_search/scroll' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"scroll_id\": \"DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==\"\n}"'
Request example
Run `DELETE /_search/scroll` to clear the search context and results for a scrolling search.
{
  "scroll_id": "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}
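Response examples (200)
An illustrative response after clearing a single scrolling search context. The `num_freed` value is hypothetical; the attributes follow the schema described above.
{
  "succeeded": true,
  "num_freed": 1
}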

Explain a document match result

GET /{index}/_explain/{id}

Get information about why a specific document matches, or doesn't match, a query. It computes a score explanation for a query and a specific document.

Path parameters

  • index string Required

    Index names that are used to limit the request. Only a single index name can be provided to this parameter.

  • id string Required

    The document identifier.

Query parameters

  • analyzer string

    The analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.

  • If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.

  • The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.

    Values are and, AND, or, or OR.

  • df string

    The field to use as default where no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.

  • The node or shard the operation should be performed on. It is random by default.

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    True or false to return the _source field, or a list of fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    A comma-separated list of stored fields to return in the response.

  • q string

    The query in the Lucene query string syntax.

application/json

Body

Responses

GET /{index}/_explain/{id}
curl \
 --request GET 'http://api.example.com/{index}/_explain/{id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"query\" : {\n    \"match\" : { \"message\" : \"elasticsearch\" }\n  }\n}"'
Request example
Run `GET /my-index-000001/_explain/0` with the request body. Alternatively, run `GET /my-index-000001/_explain/0?q=message:elasticsearch`.
{
  "query" : {
    "match" : { "message" : "elasticsearch" }
  }
}
Response examples (200)
A successful response from `GET /my-index-000001/_explain/0`.
{
  "_index":"my-index-000001",
  "_id":"0",
  "matched":true,
  "explanation":{
      "value":1.6943598,
      "description":"weight(message:elasticsearch in 0) [PerFieldSimilarity], result of:",
      "details":[
        {
            "value":1.6943598,
            "description":"score(freq=1.0), computed as boost * idf * tf from:",
            "details":[
              {
                  "value":2.2,
                  "description":"boost",
                  "details":[]
              },
              {
                  "value":1.3862944,
                  "description":"idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:",
                  "details":[
                    {
                        "value":1,
                        "description":"n, number of documents containing term",
                        "details":[]
                    },
                    {
                        "value":5,
                        "description":"N, total number of documents with field",
                        "details":[]
                    }
                  ]
              },
              {
                  "value":0.5555556,
                  "description":"tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:",
                  "details":[
                    {
                        "value":1.0,
                        "description":"freq, occurrences of term within document",
                        "details":[]
                    },
                    {
                        "value":1.2,
                        "description":"k1, term saturation parameter",
                        "details":[]
                    },
                    {
                        "value":0.75,
                        "description":"b, length normalization parameter",
                        "details":[]
                    },
                    {
                        "value":3.0,
                        "description":"dl, length of field",
                        "details":[]
                    },
                    {
                        "value":5.4,
                        "description":"avgdl, average length of field",
                        "details":[]
                    }
                  ]
              }
            ]
        }
      ]
  }
}

Run multiple searches Added in 1.3.0

POST /{index}/_msearch

The format of the request is similar to the bulk API format and makes use of the newline delimited JSON (NDJSON) format. The structure is as follows:

header\n
body\n
header\n
body\n

This structure is specifically optimized to reduce parsing if a specific search ends up redirected to another node.

IMPORTANT: The final line of data must end with a newline character \n. Each newline character may be preceded by a carriage return \r. When sending requests to this endpoint the Content-Type header should be set to application/x-ndjson.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and index aliases to search.

Query parameters

  • If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • If true, network roundtrips between the coordinating node and remote clusters are minimized for cross-cluster search requests.

  • expand_wildcards string | array[string]

    Type of index that wildcard expressions can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • ignore_throttled boolean

    If true, concrete, expanded, or aliased indices are ignored when frozen.

  • ignore_unavailable boolean

    If true, missing or closed indices are not included in the response.

  • include_named_queries_score boolean

    Indicates whether hit.matched_queries should be rendered as a map that includes the name of the matched query associated with its score (true) or as an array containing the names of the matched queries (false). This functionality reruns each named query on every hit in a search response. Typically, this adds a small overhead to a request. However, using computationally expensive named queries on a large number of hits may add significant overhead.

  • max_concurrent_searches number

    The maximum number of concurrent searches the multi search API can execute. Defaults to max(1, (# of data nodes * min(search thread pool size, 10))).

  • max_concurrent_shard_requests number

    The maximum number of concurrent shard requests that each sub-search request executes per node.

  • pre_filter_shard_size number

    Defines a threshold that enforces a pre-filter round-trip to prefilter search shards based on query rewriting if the number of shards the search request expands to exceeds the threshold. This filter round-trip can significantly limit the number of shards when, for instance, a shard cannot match any documents based on its rewrite method (for example, if date filters are mandatory to match but the shard bounds and the query are disjoint).

  • rest_total_hits_as_int boolean

    If true, hits.total is returned as an integer in the response. Defaults to false, which returns an object.

  • routing string

    Custom routing value used to route search operations to a specific shard.

  • search_type string

    Indicates whether global term and document frequencies should be used when scoring returned documents.

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • typed_keys boolean

    Specifies whether aggregation and suggester names should be prefixed by their respective types in the response.

application/json

Body object Required

One of:

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
POST /{index}/_msearch
curl \
 --request POST 'http://api.example.com/{index}/_msearch' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '[{"allow_no_indices":true,"expand_wildcards":"string","ignore_unavailable":true,"index":"string","preference":"string","request_cache":true,"routing":"string","search_type":"query_then_fetch","ccs_minimize_roundtrips":true,"allow_partial_search_results":true,"ignore_throttled":true}]'








Run multiple templated searches Added in 5.0.0

GET /{index}/_msearch/template

Run multiple templated searches with a single request. If you are providing a text file or text input to curl, use the --data-binary flag instead of -d to preserve newlines. For example:

$ cat requests
{ "index": "my-index" }
{ "id": "my-search-template", "params": { "query_string": "hello world", "from": 0, "size": 10 }}
{ "index": "my-other-index" }
{ "id": "my-other-search-template", "params": { "query_type": "match_all" }}

$ curl -H "Content-Type: application/x-ndjson" -XGET localhost:9200/_msearch/template --data-binary "@requests"; echo
External documentation

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams and indices, omit this parameter or use *.

Query parameters

  • ccs_minimize_roundtrips boolean

    If true, network round-trips are minimized for cross-cluster search requests.

  • max_concurrent_searches number

    The maximum number of concurrent searches the API can run.

  • search_type string

    The type of the search operation.

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • rest_total_hits_as_int boolean

    If true, the response returns hits.total as an integer. If false, it returns hits.total as an object.

  • typed_keys boolean

    If true, the response prefixes aggregation and suggester names with their respective types.

application/json

Body object Required

One of:

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
GET /{index}/_msearch/template
curl \
 --request GET 'http://api.example.com/{index}/_msearch/template' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{ }\n{ \"id\": \"my-search-template\", \"params\": { \"query_string\": \"hello world\", \"from\": 0, \"size\": 10 }}\n{ }\n{ \"id\": \"my-other-search-template\", \"params\": { \"query_type\": \"match_all\" }}"'
Request example
Run `GET my-index/_msearch/template` to run multiple templated searches.
{ }
{ "id": "my-search-template", "params": { "query_string": "hello world", "from": 0, "size": 10 }}
{ }
{ "id": "my-other-search-template", "params": { "query_type": "match_all" }}