Get index template information
Added in 5.2.0
Get information about the index templates in a cluster. You can use index templates to apply index settings and field mappings to new indices at creation. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get index template API.
Path parameters
-
name
string Required The name of the template to return. Accepts wildcard expressions. If omitted, all templates are returned.
Query parameters
-
h
string | array[string] List of columns to appear in the response. Supports simple wildcards.
-
s
string | array[string] List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
-
local
boolean If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases, the coordinating node sends requests for further information to each selected node.
-
master_timeout
string Period to wait for a connection to the master node.
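As an illustrative sketch of combining h and s (the template pattern my-template-* is a placeholder, not part of this API's defaults), the following shows selected columns sorted by descending order:
curl \
--request GET 'http://api.example.com/_cat/templates/my-template-*?h=name,order,version&s=order:desc' \
--header "Authorization: $API_KEY"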
curl \
--request GET 'http://api.example.com/_cat/templates/{name}' \
--header "Authorization: $API_KEY"
[
{
"name": "my-template-0",
"index_patterns": "[te*]",
"order": "500",
"version": null,
"composed_of": "[]"
},
{
"name": "my-template-1",
"index_patterns": "[tea*]",
"order": "501",
"version": null,
"composed_of": "[]"
},
{
"name": "my-template-2",
"index_patterns": "[teak*]",
"order": "502",
"version": "7",
"composed_of": "[]"
}
]
Get node information
Added in 1.3.0
By default, the API returns all attributes and core settings for cluster nodes.
Path parameters
-
node_id
string | array[string] Required Comma-separated list of node IDs or names used to limit returned information.
Query parameters
-
flat_settings
boolean If true, returns settings in flat format.
-
timeout
string Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request GET 'http://api.example.com/_nodes/{node_id}' \
--header "Authorization: $API_KEY"
{
"_nodes": {},
"cluster_name": "elasticsearch",
"nodes": {
"USpTGYaBSIKbgSUJR2Z9lg": {
"name": "node-0",
"transport_address": "192.168.17:9300",
"host": "node-0.elastic.co",
"ip": "192.168.17",
"version": "{version}",
"transport_version": 100000298,
"index_version": 100000074,
"component_versions": {
"ml_config_version": 100000162,
"transform_config_version": 100000096
},
"build_flavor": "default",
"build_type": "{build_type}",
"build_hash": "587409e",
"roles": [
"master",
"data",
"ingest"
],
"attributes": {},
"plugins": [
{
"name": "analysis-icu",
"version": "{version}",
"description": "The ICU Analysis plugin integrates Lucene ICU
module into elasticsearch, adding ICU relates analysis components.",
"classname":
"org.elasticsearch.plugin.analysis.icu.AnalysisICUPlugin",
"has_native_controller": false
}
],
"modules": [
{
"name": "lang-painless",
"version": "{version}",
"description": "An easy, safe and fast scripting language for
Elasticsearch",
"classname": "org.elasticsearch.painless.PainlessPlugin",
"has_native_controller": false
}
]
}
}
}
Get node statistics
Get statistics for nodes in a cluster. By default, all stats are returned. You can limit the returned information by using metrics.
Query parameters
-
completion_fields
string | array[string] Comma-separated list or wildcard expressions of fields to include in fielddata and suggest statistics.
-
fielddata_fields
string | array[string] Comma-separated list or wildcard expressions of fields to include in fielddata statistics.
-
fields
string | array[string] Comma-separated list or wildcard expressions of fields to include in the statistics.
-
groups
string | array[string] Comma-separated list of search groups to include in the search statistics.
-
include_segment_file_sizes
boolean If true, the call reports the aggregated disk usage of each one of the Lucene index files (only applies if segment stats are requested).
-
level
string Indicates whether statistics are aggregated at the cluster, index, or shard level.
Values are cluster, indices, or shards.
-
timeout
string Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
types
array[string] A comma-separated list of document types for the indexing index metric.
-
include_unloaded_segments
boolean If true, the response includes information from segments that are not loaded into memory.
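As a sketch of limiting the response with metrics, the following requests only the jvm and fs metric groups for all nodes:
curl \
--request GET 'http://api.example.com/_nodes/stats/jvm,fs' \
--header "Authorization: $API_KEY"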
curl \
--request GET 'http://api.example.com/_nodes/{node_id}/stats/{metric}' \
--header "Authorization: $API_KEY"
Get node statistics
Get statistics for nodes in a cluster. By default, all stats are returned. You can limit the returned information by using metrics.
Path parameters
-
node_id
string | array[string] Required Comma-separated list of node IDs or names used to limit returned information.
-
metric
string | array[string] Required Limit the information returned to the specified metrics
-
index_metric
string | array[string] Required Limit the information returned for indices metric to the specific index metrics. It can be used only if indices (or all) metric is specified.
Query parameters
-
completion_fields
string | array[string] Comma-separated list or wildcard expressions of fields to include in fielddata and suggest statistics.
-
fielddata_fields
string | array[string] Comma-separated list or wildcard expressions of fields to include in fielddata statistics.
-
fields
string | array[string] Comma-separated list or wildcard expressions of fields to include in the statistics.
-
groups
string | array[string] Comma-separated list of search groups to include in the search statistics.
-
include_segment_file_sizes
boolean If true, the call reports the aggregated disk usage of each one of the Lucene index files (only applies if segment stats are requested).
-
level
string Indicates whether statistics are aggregated at the cluster, index, or shard level.
Values are cluster, indices, or shards.
-
timeout
string Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
types
array[string] A comma-separated list of document types for the indexing index metric.
-
include_unloaded_segments
boolean If true, the response includes information from segments that are not loaded into memory.
curl \
--request GET 'http://api.example.com/_nodes/{node_id}/stats/{metric}/{index_metric}' \
--header "Authorization: $API_KEY"
Get cross-cluster replication stats
Added in 6.5.0
This API returns stats about auto-following and the same shard-level stats as the get follower stats API.
Query parameters
-
master_timeout
string The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.
-
timeout
string The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request GET 'http://api.example.com/_ccr/stats' \
--header "Authorization: $API_KEY"
{
"auto_follow_stats" : {
"number_of_failed_follow_indices" : 0,
"number_of_failed_remote_cluster_state_requests" : 0,
"number_of_successful_follow_indices" : 1,
"recent_auto_follow_errors" : [],
"auto_followed_clusters" : []
},
"follow_stats" : {
"indices" : [
{
"index" : "follower_index",
"total_global_checkpoint_lag" : 256,
"shards" : [
{
"remote_cluster" : "remote_cluster",
"leader_index" : "leader_index",
"follower_index" : "follower_index",
"shard_id" : 0,
"leader_global_checkpoint" : 1024,
"leader_max_seq_no" : 1536,
"follower_global_checkpoint" : 768,
"follower_max_seq_no" : 896,
"last_requested_seq_no" : 897,
"outstanding_read_requests" : 8,
"outstanding_write_requests" : 2,
"write_buffer_operation_count" : 64,
"follower_mapping_version" : 4,
"follower_settings_version" : 2,
"follower_aliases_version" : 8,
"total_read_time_millis" : 32768,
"total_read_remote_exec_time_millis" : 16384,
"successful_read_requests" : 32,
"failed_read_requests" : 0,
"operations_read" : 896,
"bytes_read" : 32768,
"total_write_time_millis" : 16384,
"write_buffer_size_in_bytes" : 1536,
"successful_write_requests" : 16,
"failed_write_requests" : 0,
"operations_written" : 832,
"read_exceptions" : [ ],
"time_since_last_read_millis" : 8
}
]
}
]
}
}
Delete documents
Added in 5.0.0
Deletes documents that match the specified query.
If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias:
- read
- delete or write
You can specify the query criteria in the request URI or the request body using the same syntax as the search API. When you submit a delete by query request, Elasticsearch gets a snapshot of the data stream or index when it begins processing the request and deletes matching documents using internal versioning. If a document changes between the time that the snapshot is taken and the delete operation is processed, it results in a version conflict and the delete operation fails.
NOTE: Documents with a version equal to 0 cannot be deleted using delete by query because internal versioning does not support 0 as a valid version number.
While processing a delete by query request, Elasticsearch performs multiple search requests sequentially to find all of the matching documents to delete. A bulk delete request is performed for each batch of matching documents. If a search or bulk request is rejected, the requests are retried up to 10 times, with exponential back off. If the maximum retry limit is reached, processing halts and all failed requests are returned in the response. Any delete requests that completed successfully still stick; they are not rolled back.
You can opt to count version conflicts instead of halting and returning by setting conflicts to proceed.
Note that if you opt to count version conflicts, the operation could attempt to delete more documents from the source than max_docs until it has successfully deleted max_docs documents, or it has gone through every document in the source query.
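A minimal sketch of counting conflicts instead of aborting (my-index and the match_all query are placeholders):
curl \
--request POST 'http://api.example.com/my-index/_delete_by_query?conflicts=proceed' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"query":{"match_all":{}}}'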
Throttling delete requests
To control the rate at which delete by query issues batches of delete operations, you can set requests_per_second to any positive decimal number.
This pads each batch with a wait time to throttle the rate.
Set requests_per_second to -1 to disable throttling.
Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account.
The padding time is the difference between the batch size divided by the requests_per_second and the time spent writing.
By default the batch size is 1000, so if requests_per_second is set to 500:
target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds
Since the batch is issued as a single _bulk request, large batch sizes cause Elasticsearch to create many requests and wait before starting the next set.
This is "bursty" instead of "smooth".
Slicing
Delete by query supports sliced scroll to parallelize the delete process. This can improve efficiency and provide a convenient way to break the request down into smaller parts.
Setting slices to auto lets Elasticsearch choose the number of slices to use.
This setting will use one slice per shard, up to a certain limit.
If there are multiple source data streams or indices, it will choose the number of slices based on the index or backing index with the smallest number of shards.
Adding slices to the delete by query operation creates sub-requests which means it has some quirks:
- You can see these requests in the tasks APIs. These sub-requests are "child" tasks of the task for the request with slices.
- Fetching the status of the task for the request with slices only contains the status of completed slices.
- These sub-requests are individually addressable for things like cancellation and rethrottling.
- Rethrottling the request with
slices
will rethrottle the unfinished sub-request proportionally. - Canceling the request with
slices
will cancel each sub-request. - Due to the nature of
slices
each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution. - Parameters like
requests_per_second
andmax_docs
on a request withslices
are distributed proportionally to each sub-request. Combine that with the earlier point about distribution being uneven and you should conclude that usingmax_docs
withslices
might not result in exactlymax_docs
documents being deleted. - Each sub-request gets a slightly different snapshot of the source data stream or index though these are all taken at approximately the same time.
If you're slicing manually or otherwise tuning automatic slicing, keep in mind that:
- Query performance is most efficient when the number of slices is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number, as too many slices hurts performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.
- Delete performance scales linearly across available resources with the number of slices.
Whether query or delete performance dominates the runtime depends on the documents being reindexed and cluster resources.
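As a sketch of automatic slicing (my-index is a placeholder; slices=auto lets Elasticsearch pick the slice count as described above):
curl \
--request POST 'http://api.example.com/my-index/_delete_by_query?slices=auto' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"query":{"match_all":{}}}'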
Cancel a delete by query operation
Any delete by query can be canceled using the task cancel API. For example:
POST _tasks/r1A2WoRbTwKZ516z6NEs5A:36619/_cancel
The task ID can be found by using the get tasks API.
Cancellation should happen quickly but might take a few seconds. The get task status API will continue to list the delete by query task until this task checks that it has been cancelled and terminates itself.
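A running delete by query can likewise be rethrottled through the task rethrottle endpoint; for example, reusing the task ID from the cancellation example to disable throttling:
POST _delete_by_query/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1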
Path parameters
-
index
string | array[string] Required A comma-separated list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all.
Query parameters
-
allow_no_indices
boolean If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
-
analyzer
string Analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.
-
analyze_wildcard
boolean If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.
-
conflicts
string What to do if delete by query hits version conflicts: abort or proceed.
Supported values include:
- abort: Stop reindexing if there are conflicts.
- proceed: Continue reindexing even if there are conflicts.
Values are abort or proceed.
-
default_operator
string The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.
Values are and, AND, or, or OR.
-
df
string The field to use as the default when no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.
-
expand_wildcards
string | array[string] The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.
Supported values include:
- all: Match any data stream or index, including hidden ones.
- open: Match open, non-hidden indices. Also matches any non-hidden data stream.
- closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
- hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
- none: Wildcard expressions are not accepted.
-
from
number Skips the specified number of documents.
-
lenient
boolean If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.
-
max_docs
number The maximum number of documents to process. Defaults to all documents. When set to a value less than or equal to scroll_size, a scroll will not be used to retrieve the results for the operation.
-
preference
string The node or shard the operation should be performed on. It is random by default.
-
refresh
boolean If true, Elasticsearch refreshes all shards involved in the delete by query after the request completes. This is different from the delete API's refresh parameter, which causes just the shard that received the delete request to be refreshed. Unlike the delete API, it does not support wait_for.
-
request_cache
boolean If true, the request cache is used for this request. Defaults to the index-level setting.
-
requests_per_second
number The throttle for this request in sub-requests per second.
-
routing
string A custom value used to route operations to a specific shard.
-
q
string A query in the Lucene query string syntax.
-
scroll
string The period to retain the search context for scrolling.
-
scroll_size
number The size of the scroll request that powers the operation.
-
search_timeout
string The explicit timeout for each search request. It defaults to no timeout.
-
search_type
string The type of the search operation. Available options include query_then_fetch and dfs_query_then_fetch.
Supported values include:
- query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
- dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.
Values are query_then_fetch or dfs_query_then_fetch.
-
slices
number | string The number of slices this task should be divided into.
-
sort
array[string] A comma-separated list of <field>:<direction> pairs.
-
stats
array[string] The specific tag of the request for logging and statistical purposes.
-
terminate_after
number The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.
Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.
-
timeout
string The period each deletion request waits for active shards.
-
version
boolean If true, returns the document version as part of a hit.
-
wait_for_active_shards
number | string The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The timeout value controls how long each write request waits for unavailable shards to become available.
-
wait_for_completion
boolean If true, the request blocks until the operation is complete. If false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at .tasks/task/${taskId}. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.
Body
Required
-
max_docs
number The maximum number of documents to delete.
-
query
object An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.
-
slice
object
curl \
--request POST 'http://api.example.com/{index}/_delete_by_query' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"query\": {\n \"match_all\": {}\n }\n}"'
{
"query": {
"match_all": {}
}
}
{
"query": {
"term": {
"user.id": "kimchy"
}
},
"max_docs": 1
}
{
"slice": {
"id": 0,
"max": 2
},
"query": {
"range": {
"http.response.bytes": {
"lt": 2000000
}
}
}
}
{
"query": {
"range": {
"http.response.bytes": {
"lt": 2000000
}
}
}
}
{
"took" : 147,
"timed_out": false,
"total": 119,
"deleted": 119,
"batches": 1,
"version_conflicts": 0,
"noops": 0,
"retries": {
"bulk": 0,
"search": 0
},
"throttled_millis": 0,
"requests_per_second": -1.0,
"throttled_until_millis": 0,
"failures" : [ ]
}
Check for a document source
Added in 5.4.0
Check whether a document source exists in an index. For example:
HEAD my-index-000001/_source/1
A document's source is not available if it is disabled in the mapping.
Query parameters
-
preference
string The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.
-
realtime
boolean If true, the request is real-time as opposed to near-real-time.
-
refresh
boolean If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).
-
routing
string A custom value used to route operations to a specific shard.
-
_source
boolean | string | array[string] Indicates whether to return the _source field (true or false) or lists the fields to return.
-
_source_excludes
string | array[string] A comma-separated list of source fields to exclude in the response.
-
_source_includes
string | array[string] A comma-separated list of source fields to include in the response.
-
version
number The version number for concurrency control. It must match the current version of the document for the request to succeed.
-
version_type
string The version type.
Supported values include:
- internal: Use internal versioning that starts at 1 and increments with each update or delete.
- external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
- external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
- force: This option is deprecated because it can cause primary and replica shards to diverge.
Values are internal, external, external_gte, or force.
curl \
--request HEAD 'http://api.example.com/{index}/_source/{id}' \
--header "Authorization: $API_KEY"
Run multiple Fleet searches
Technical preview
Run several Fleet searches with a single API request.
The API follows the same structure as the multi search API.
However, similar to the Fleet search API, it supports the wait_for_checkpoints parameter.
Path parameters
-
index
string Required A single target to search. If the target is an index alias, it must resolve to a single index.
Query parameters
-
allow_no_indices
boolean If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
-
ccs_minimize_roundtrips
boolean If true, network roundtrips between the coordinating node and remote clusters are minimized for cross-cluster search requests.
-
expand_wildcards
string | array[string] Type of index that wildcard expressions can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams.
Supported values include:
- all: Match any data stream or index, including hidden ones.
- open: Match open, non-hidden indices. Also matches any non-hidden data stream.
- closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
- hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
- none: Wildcard expressions are not accepted.
-
ignore_throttled
boolean If true, concrete, expanded or aliased indices are ignored when frozen.
-
max_concurrent_searches
number Maximum number of concurrent searches the multi search API can execute.
-
max_concurrent_shard_requests
number Maximum number of concurrent shard requests that each sub-search request executes per node.
-
pre_filter_shard_size
number Defines a threshold that enforces a pre-filter roundtrip to prefilter search shards based on query rewriting if the number of shards the search request expands to exceeds the threshold. This filter roundtrip can limit the number of shards significantly if, for instance, a shard cannot match any documents based on its rewrite method (for example, if date filters are mandatory to match but the shard bounds and the query are disjoint).
-
search_type
string Indicates whether global term and document frequencies should be used when scoring returned documents.
Supported values include:
- query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
- dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.
Values are query_then_fetch or dfs_query_then_fetch.
-
rest_total_hits_as_int
boolean If true, hits.total is returned as an integer in the response. Defaults to false, which returns an object.
-
typed_keys
boolean Specifies whether aggregation and suggester names should be prefixed by their respective types in the response.
-
wait_for_checkpoints
array[number] A comma-separated list of checkpoints. When configured, the search API will only be executed on a shard after the relevant checkpoint has become visible for search. Defaults to an empty list, which causes Elasticsearch to immediately execute the search.
-
allow_partial_search_results
boolean If true, returns partial results if there are shard request timeouts or shard failures. If false, returns an error with no partial results. Defaults to the configured cluster setting search.default_allow_partial_results, which is true by default.
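Because the request follows the multi search structure, the body is newline-delimited JSON of alternating header and body objects. A sketch, assuming wait_for_checkpoints is set per sub-request in its header (the index name and checkpoint value are placeholders):
curl \
--request GET 'http://api.example.com/my-index/_fleet/_fleet_msearch' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/x-ndjson" \
--data-binary $'{"wait_for_checkpoints":[2]}\n{"query":{"match_all":{}}}\n'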
Body
object
Required
-
allow_no_indices
boolean
-
expand_wildcards
string | array[string]
-
index
string | array[string]
-
preference
string
-
request_cache
boolean
-
routing
string
-
search_type
string Values are query_then_fetch or dfs_query_then_fetch.
-
ccs_minimize_roundtrips
boolean
-
allow_partial_search_results
boolean
-
ignore_throttled
boolean
curl \
--request GET 'http://api.example.com/{index}/_fleet/_fleet_msearch' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '[{"allow_no_indices":true,"expand_wildcards":"string","ignore_unavailable":true,"index":"string","preference":"string","request_cache":true,"routing":"string","search_type":"query_then_fetch","ccs_minimize_roundtrips":true,"allow_partial_search_results":true,"ignore_throttled":true}]'
Import a dangling index
Added in 7.9.0
If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling.
For example, this can happen if you delete more than cluster.indices.tombstones.size indices while an Elasticsearch node is offline.
Path parameters
-
index_uuid
string Required The UUID of the index to import. Use the get dangling indices API to locate the UUID.
Query parameters
-
accept_data_loss
boolean Required This parameter must be set to true to import a dangling index. Because Elasticsearch cannot know where the dangling index data came from or determine which shard copies are fresh and which are stale, it cannot guarantee that the imported data represents the latest state of the index when it was last in the cluster.
-
master_timeout
string Specify timeout for connection to master
-
timeout
string Explicit operation timeout
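To locate a dangling index UUID before importing, the get dangling indices API can be used; a minimal sketch:
curl \
--request GET 'http://api.example.com/_dangling' \
--header "Authorization: $API_KEY"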
curl \
--request POST 'http://api.example.com/_dangling/{index_uuid}?accept_data_loss=true' \
--header "Authorization: $API_KEY"
{
"acknowledged": true
}
Get mapping definitions
For data streams, the API retrieves mappings for the stream’s backing indices.
Path parameters
-
index
string | array[string] Required Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
Query parameters
-
allow_no_indices
boolean If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
-
expand_wildcards
string | array[string] Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
Supported values include:
- all: Match any data stream or index, including hidden ones.
- open: Match open, non-hidden indices. Also matches any non-hidden data stream.
- closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
- hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
- none: Wildcard expressions are not accepted.
-
local
boolean Deprecated If true, the request retrieves information from the local node only.
-
master_timeout
string Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request GET 'http://api.example.com/{index}/_mapping' \
--header "Authorization: $API_KEY"
Resolve the cluster
Added in 8.13.0
Resolve the specified index expressions to return information about each cluster, including the local "querying" cluster, if included. If no index expression is provided, the API will return information about all the remote clusters that are configured on the querying cluster.
This endpoint is useful before doing a cross-cluster search in order to determine which remote clusters should be included in a search.
You use the same index expression with this endpoint as you would for cross-cluster search. Index and cluster exclusions are also supported with this endpoint.
For each cluster in the index expression, information is returned about:
- Whether the querying ("local") cluster is currently connected to each remote cluster specified in the index expression. Note that this endpoint actively attempts to contact the remote clusters, unlike the remote/info endpoint.
- Whether each remote cluster is configured with skip_unavailable as true or false.
- Whether the search is likely to have errors returned when you do the cross-cluster search (including any authorization errors if you do not have permission to query the index).
- Cluster version information, including the Elasticsearch server version.
For example, GET /_resolve/cluster/my-index-*,cluster*:my-index-* returns information about the local cluster and all remotely configured clusters that start with the alias cluster*.
Each cluster returns information about whether it has any indices, aliases, or data streams that match my-index-*.
Note on backwards compatibility
The ability to query without an index expression was added in version 8.18, so when querying remote clusters older than that, the local cluster will send the index expression dummy* to those remote clusters. Thus, if errors occur, you may see a reference to that index expression even though you didn't request it. If it causes a problem, you can instead include an index expression like *:* to bypass the issue.
Advantages of using this endpoint before a cross-cluster search
You may want to exclude a cluster or index from a search when:
- A remote cluster is not currently connected and is configured with skip_unavailable=false. Running a cross-cluster search under those conditions will cause the entire search to fail.
- A cluster has no matching indices, aliases, or data streams for the index expression (or your user does not have permissions to search them). For example, suppose your index expression is logs*,remote1:logs* and the remote1 cluster has no indices, aliases, or data streams that match logs*. In that case, that cluster returns no results if you include it in a cross-cluster search.
- The index expression (combined with any query parameters you specify) will likely cause an exception to be thrown when you do the search. In these cases, the "error" field in the _resolve/cluster response will be present. (This is also where security/permission errors will be shown.)
- A remote cluster is an older version that does not support the feature you want to use in your search.
Test availability of remote clusters
The remote/info endpoint is commonly used to test whether the "local" cluster (the cluster being queried) is connected to its remote clusters, but it does not necessarily reflect whether the remote cluster is available or not.
The remote cluster may be available, while the local cluster is not currently connected to it.
You can use the _resolve/cluster API to attempt to reconnect to remote clusters, for example with GET _resolve/cluster or GET _resolve/cluster/*:*.
The connected field in the response will indicate whether it was successful.
If a connection was (re-)established, this will also cause the remote/info endpoint to now indicate a connected status.
Query parameters
-
allow_no_indices
boolean If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.
-
expand_wildcards
string | array[string] Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.
Supported values include:
- all: Match any data stream or index, including hidden ones.
- open: Match open, non-hidden indices. Also matches any non-hidden data stream.
- closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
- hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
- none: Wildcard expressions are not accepted.
-
ignore_throttled
boolean Deprecated If true, concrete, expanded, or aliased indices are ignored when frozen. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.
-
timeout
string The maximum time to wait for remote clusters to respond. If a remote cluster does not respond within this timeout period, the API response will show the cluster as not connected and include an error message that the request timed out.
The default timeout is unset and the query can take as long as the networking layer is configured to wait for remote clusters that are not responding (typically 30 seconds).
curl \
--request GET 'http://api.example.com/_resolve/cluster' \
--header "Authorization: $API_KEY"
{
"(local)": {
"connected": true,
"skip_unavailable": false,
"matching_indices": true,
"version": {
"number": "8.13.0",
"build_flavor": "default",
"minimum_wire_compatibility_version": "7.17.0",
"minimum_index_compatibility_version": "7.0.0"
}
},
"cluster_one": {
"connected": true,
"skip_unavailable": true,
"matching_indices": true,
"version": {
"number": "8.13.0",
"build_flavor": "default",
"minimum_wire_compatibility_version": "7.17.0",
"minimum_index_compatibility_version": "7.0.0"
}
},
"cluster_two": {
"connected": true,
"skip_unavailable": false,
"matching_indices": true,
"version": {
"number": "8.13.0",
"build_flavor": "default",
"minimum_wire_compatibility_version": "7.17.0",
"minimum_index_compatibility_version": "7.0.0"
}
}
}
{
"(local)": {
"connected": true,
"skip_unavailable": false,
"error": "no such index [not_present]"
},
"cluster_one": {
"connected": true,
"skip_unavailable": true,
"matching_indices": false,
"version": {
"number": "8.13.0",
"build_flavor": "default",
"minimum_wire_compatibility_version": "7.17.0",
"minimum_index_compatibility_version": "7.0.0"
}
},
"cluster_two": {
"connected": false,
"skip_unavailable": false
},
"cluster_three": {
"connected": false,
"skip_unavailable": false,
"error": "Request timed out before receiving a response from the remote cluster"
},
"oldcluster": {
"connected": true,
"skip_unavailable": false,
"matching_indices": true
}
}
Explain the lifecycle state
Added in 6.6.0
Get the current lifecycle status for one or more indices. For data streams, the API retrieves the current lifecycle status for the stream's backing indices.
The response indicates when the index entered each lifecycle state, provides the definition of the running phase, and information about any failures.
Path parameters
-
index
string Required Comma-separated list of data streams, indices, and aliases to target. Supports wildcards (*). To target all data streams and indices, use * or _all.
Query parameters
-
only_errors
boolean Filters the returned indices to only indices that are managed by ILM and are in an error state, either due to encountering an error while executing the policy or due to attempting to use a policy that does not exist.
-
only_managed
boolean Filters the returned indices to only indices that are managed by ILM.
-
master_timeout
string Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request GET 'http://api.example.com/{index}/_ilm/explain' \
--header "Authorization: $API_KEY"
{
"indices": {
"my-index-000001": {
"index": "my-index-000001",
"index_creation_date_millis": 1538475653281,
"index_creation_date": "2018-10-15T13:45:21.981Z",
"time_since_index_creation": "15s",
"managed": true,
"policy": "my_policy",
"lifecycle_date_millis": 1538475653281,
"lifecycle_date": "2018-10-15T13:45:21.981Z",
"age": "15s",
"phase": "new",
"phase_time_millis": 1538475653317,
"phase_time": "2018-10-15T13:45:22.577Z",
"action": "complete"
"action_time_millis": 1538475653317,
"action_time": "2018-10-15T13:45:22.577Z",
"step": "complete",
"step_time_millis": 1538475653317,
"step_time": "2018-10-15T13:45:22.577Z"
}
}
}
Get lifecycle policies
Query parameters
-
master_timeout
string Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
timeout
string Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request GET 'http://api.example.com/_ilm/policy' \
--header "Authorization: $API_KEY"
{
"my_policy": {
"version": 1,
"modified_date": 82392349,
"policy": {
"phases": {
"warm": {
"min_age": "10d",
"actions": {
"forcemerge": {
"max_num_segments": 1
}
}
},
"delete": {
"min_age": "30d",
"actions": {
"delete": {
"delete_searchable_snapshot": true
}
}
}
}
},
"in_use_by" : {
"indices" : [],
"data_streams" : [],
"composable_templates" : []
}
}
}
Inference
Inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
Create a Google AI Studio inference endpoint
Added in 8.15.0
Create an inference endpoint to perform an inference task with the googleaistudio
service.
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count".
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
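As a sketch of the verification step described above, poll the trained model statistics and check the deployment allocation state:
curl \
--request GET 'http://api.example.com/_ml/trained_models/_stats' \
--header "Authorization: $API_KEY"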
Path parameters
-
task_type
string Required The type of the inference task that the model will perform.
Values are completion or text_embedding.
-
googleaistudio_inference_id
string Required The unique identifier of the inference endpoint.
Body
-
chunking_settings
object
-
service
string Required Value is googleaistudio.
-
service_settings
object Required
curl \
--request PUT 'http://api.example.com/_inference/{task_type}/{googleaistudio_inference_id}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"service\": \"googleaistudio\",\n \"service_settings\": {\n \"api_key\": \"api-key\",\n \"model_id\": \"model-id\"\n }\n}"'
{
"service": "googleaistudio",
"service_settings": {
"api_key": "api-key",
"model_id": "model-id"
}
}
Simulate data ingestion
Technical preview
Run ingest pipelines against a set of provided documents, optionally with substitute pipeline definitions, to simulate ingesting data into an index.
This API is meant to be used for troubleshooting or pipeline development, as it does not actually index any data into Elasticsearch.
The API runs the default and final pipeline for that index against a set of documents provided in the body of the request. If a pipeline contains a reroute processor, it follows that reroute processor to the new index, running that index's pipelines as well, just as a non-simulated ingest would. No data is indexed into Elasticsearch. Instead, the transformed document is returned, along with the list of pipelines that have been run and the name of the index where the document would have been indexed if this were not a simulation. The transformed document is validated against the mappings that would apply to this index, and any validation error is reported in the result.
This API differs from the simulate pipeline API in that you specify a single pipeline for that API, and it runs only that one pipeline. The simulate pipeline API is more useful for developing a single pipeline, while the simulate ingest API is more useful for troubleshooting the interaction of the various pipelines that get applied when ingesting into an index.
By default, the pipeline definitions that are currently in the system are used. However, you can supply substitute pipeline definitions in the body of the request. These will be used in place of the pipeline definitions that are already in the system. This can be used to replace existing pipeline definitions or to create new ones. The pipeline substitutions are used only within this request.
Query parameters
-
pipeline
string The pipeline to use as the default pipeline. This value can be used to override the default pipeline of the index.
Body
Required
-
docs
array[object] Required Sample documents to test in the pipeline.
-
component_template_substitutions
object A map of component template names to substitute component template definition objects.
-
index_template_substitutions
object A map of index template names to substitute index template definition objects.
-
mapping_addition
object -
pipeline_substitutions
object Pipelines to test. If you don't specify the pipeline request path parameter, this parameter is required. If you specify both this and the request path parameter, the API only uses the request path parameter.
curl \
--request POST 'http://api.example.com/_ingest/_simulate' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"docs\": [\n {\n \"_id\": 123,\n \"_index\": \"my-index\",\n \"_source\": {\n \"foo\": \"bar\"\n }\n },\n {\n \"_id\": 456,\n \"_index\": \"my-index\",\n \"_source\": {\n \"foo\": \"rab\"\n }\n }\n ]\n}"'
{
"docs": [
{
"_id": 123,
"_index": "my-index",
"_source": {
"foo": "bar"
}
},
{
"_id": 456,
"_index": "my-index",
"_source": {
"foo": "rab"
}
}
]
}
{
"docs": [
{
"_index": "my-index",
"_id": 123,
"_source": {
"foo": "bar"
}
},
{
"_index": "my-index",
"_id": 456,
"_source": {
"foo": "rab"
}
}
],
"pipeline_substitutions": {
"my-pipeline": {
"processors": [
{
"uppercase": {
"field": "foo"
}
}
]
}
}
}
{
"docs": [
{
"_index": "my-index",
"_id": "123",
"_source": {
"foo": "foo"
}
},
{
"_index": "my-index",
"_id": "456",
"_source": {
"bar": "rab"
}
}
],
"component_template_substitutions": {
"my-mappings_template": {
"template": {
"mappings": {
"dynamic": "strict",
"properties": {
"foo": {
"type": "keyword"
},
"bar": {
"type": "keyword"
}
}
}
}
}
}
}
{
"docs": [
{
"_id": "id",
"_index": "my-index",
"_source": {
"foo": "bar"
}
},
{
"_id": "id",
"_index": "my-index",
"_source": {
"foo": "rab"
}
}
],
"pipeline_substitutions": {
"my-pipeline": {
"processors": [
{
"set": {
"field": "field3",
"value": "value3"
}
}
]
}
},
"component_template_substitutions": {
"my-component-template": {
"template": {
"mappings": {
"dynamic": true,
"properties": {
"field3": {
"type": "keyword"
}
}
},
"settings": {
"index": {
"default_pipeline": "my-pipeline"
}
}
}
}
},
"index_template_substitutions": {
"my-index-template": {
"index_patterns": [
"my-index-*"
],
"composed_of": [
"component_template_1",
"component_template_2"
]
}
},
"mapping_addition": {
"dynamic": "strict",
"properties": {
"foo": {
"type": "keyword"
}
}
}
}
{
"docs": [
{
"doc": null,
"_id": 123,
"_index": "my-index",
"_version": -3,
"_source": {
"field1": "value1",
"field2": "value2",
"foo": "bar"
},
"executed_pipelines": [
"my-pipeline",
"my-final-pipeline"
]
},
{
"doc": null,
"_id": 456,
"_index": "my-index",
"_version": "-3,",
"_source": {
"field1": "value1",
"field2": "value2",
"foo": "rab"
},
"executed_pipelines": [
"my-pipeline",
"my-final-pipeline"
]
}
]
}
{
"docs": [
{
"doc": null,
"_id": 123,
"_index": "my-index",
"_version": -3,
"_source": {
"field2": "value2",
"foo": "BAR"
},
"executed_pipelines": [
"my-pipeline",
"my-final-pipeline"
]
},
{
"doc": null,
"_id": 456,
"_index": "my-index",
"_version": -3,
"_source": {
"field2": "value2",
"foo": "RAB"
},
"executed_pipelines": [
"my-pipeline",
"my-final-pipeline"
]
}
]
}
{
"docs": [
{
"doc": {
"_id": "123",
"_index": "my-index",
"_version": -3,
"_source": {
"foo": "foo"
},
"executed_pipelines": []
}
},
{
"doc": {
"_id": "456",
"_index": "my-index",
"_version": -3,
"_source": {
"bar": "rab"
},
"executed_pipelines": []
}
}
]
}
Start a basic license
Added in 6.3.0
Start an indefinite basic license, which gives access to all the basic features.
NOTE: In order to start a basic license, you must not currently have a basic license.
If the basic license does not support all of the features that are available with your current license, however, you are notified in the response.
You must then re-submit the API request with the acknowledge parameter set to true.
To check the status of your basic license, use the get basic license API.
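If the response reports features that require acknowledgement, a sketch of the re-submission looks like this:
curl \
--request POST 'http://api.example.com/_license/start_basic?acknowledge=true' \
--header "Authorization: $API_KEY"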
Query parameters
-
acknowledge
boolean Whether the user has acknowledged acknowledge messages (default: false).
-
master_timeout
string Period to wait for a connection to the master node.
-
timeout
string Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request POST 'http://api.example.com/_license/start_basic' \
--header "Authorization: $API_KEY"
{
"basic_was_started": true,
"acknowledged": true
}
Get model snapshots info
Added in 5.4.0
Path parameters
-
job_id
string Required Identifier for the anomaly detection job.
Query parameters
-
desc
boolean If true, the results are sorted in descending order.
-
end
string | number Returns snapshots with timestamps earlier than this time.
-
from
number Skips the specified number of snapshots.
-
size
number Specifies the maximum number of snapshots to obtain.
-
sort
string Specifies the sort field for the requested snapshots. By default, the snapshots are sorted by their timestamp.
-
start
string | number Returns snapshots with timestamps after this time.
Body
-
desc
boolean Refer to the description for the desc query parameter.
-
end
string | number A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.
-
page
object
-
sort
string Path to a field, or an array of paths. Some APIs support wildcards in the path to select multiple fields.
-
start
string | number A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.
curl \
--request GET 'http://api.example.com/_ml/anomaly_detectors/{job_id}/model_snapshots' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"desc":true,"":"string","page":{"from":42.0,"size":42.0},"sort":"string"}'
Update a filter
Added in 6.4.0
Updates the description of a filter, adds items, or removes items from the list.
Path parameters
-
filter_id
string Required A string that uniquely identifies a filter.
Body
Required
-
add_items
array[string] The items to add to the filter.
-
description
string A description for the filter.
-
remove_items
array[string] The items to remove from the filter.
curl \
--request POST 'http://api.example.com/_ml/filters/{filter_id}/_update' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"add_items":["string"],"description":"string","remove_items":["string"]}'
Explain data frame analytics config
Added in 7.3.0
This API provides explanations for a data frame analytics config that already exists or for one that has not been created yet. The following explanations are provided:
- which fields are included or not in the analysis and why;
- how much memory is estimated to be required. The estimate can be used when deciding the appropriate value for the model_memory_limit setting later on. If you have object fields or fields that are excluded via source filtering, they are not included in the explanation.
Body
-
source
object
-
dest
object
-
analysis
object
-
description
string A description of the job.
-
model_memory_limit
string The approximate maximum amount of memory resources that are permitted for analytical processing. If your elasticsearch.yml file contains an xpack.ml.max_model_memory_limit setting, an error occurs when you try to create data frame analytics jobs that have model_memory_limit values greater than that setting.
-
max_num_threads
number The maximum number of threads to be used by the analysis. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself.
-
analyzed_fields
object
-
allow_lazy_start
boolean Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node.
curl \
--request POST 'http://api.example.com/_ml/data_frame/analytics/_explain' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"source\": {\n \"index\": \"houses_sold_last_10_yrs\"\n },\n \"analysis\": {\n \"regression\": {\n \"dependent_variable\": \"price\"\n }\n }\n}"'
{
"source": {
"index": "houses_sold_last_10_yrs"
},
"analysis": {
"regression": {
"dependent_variable": "price"
}
}
}
{
"field_selection": [
{
"field": "number_of_bedrooms",
"mappings_types": [
"integer"
],
"is_included": true,
"is_required": false,
"feature_type": "numerical"
},
{
"field": "postcode",
"mappings_types": [
"text"
],
"is_included": false,
"is_required": false,
"reason": "[postcode.keyword] is preferred because it is aggregatable"
},
{
"field": "postcode.keyword",
"mappings_types": [
"keyword"
],
"is_included": true,
"is_required": false,
"feature_type": "categorical"
},
{
"field": "price",
"mappings_types": [
"float"
],
"is_included": true,
"is_required": true,
"feature_type": "numerical"
}
],
"memory_estimation": {
"expected_memory_without_disk": "128MB",
"expected_memory_with_disk": "32MB"
}
}
Get data frame analytics job stats
Added in 7.3.0
Path parameters
-
id
string Required Identifier for the data frame analytics job. If you do not specify this option, the API returns information for the first hundred data frame analytics jobs.
Query parameters
-
allow_no_match
boolean Specifies what to do when the request:
- Contains wildcard expressions and there are no data frame analytics jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value returns an empty data_frame_analytics array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.
-
from
number Skips the specified number of data frame analytics jobs.
-
size
number Specifies the maximum number of data frame analytics jobs to obtain.
-
verbose
boolean Defines whether the stats response should be verbose.
curl \
--request GET 'http://api.example.com/_ml/data_frame/analytics/{id}/_stats' \
--header "Authorization: $API_KEY"
Start the feature migration
Added in 7.16.0
Version upgrades sometimes require changes to how features store configuration information and data in system indices. This API starts the automatic migration process.
Some functionality might be temporarily unavailable during the migration process.
TIP: The API is designed for indirect use by the Upgrade Assistant. We strongly recommend you use the Upgrade Assistant.
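Migration progress can then be checked with the corresponding GET endpoint; a minimal sketch:
curl \
--request GET 'http://api.example.com/_migration/system_features' \
--header "Authorization: $API_KEY"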
curl \
--request POST 'http://api.example.com/_migration/system_features' \
--header "Authorization: $API_KEY"
{
"accepted" : true,
"features" : [
{
"feature_name" : "security"
}
]
}
Prepare a node to be shut down
Added in 7.13.0
NOTE: This feature is designed for indirect use by Elastic Cloud, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
If you specify a node that is offline, it will be prepared for shut down when it rejoins the cluster.
If the operator privileges feature is enabled, you must be an operator to use this API.
The API migrates ongoing tasks and index shards to other nodes as needed to prepare a node to be restarted or shut down and removed from the cluster. This ensures that Elasticsearch can be stopped safely with minimal disruption to the cluster.
You must specify the type of shutdown: restart, remove, or replace.
If a node is already being prepared for shutdown, you can use this API to change the shutdown type.
IMPORTANT: This API does NOT terminate the Elasticsearch process. Monitor the node shutdown status to determine when it is safe to stop Elasticsearch.
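As a sketch of that monitoring step, the shutdown status for a node can be polled like this ({node_id} is a placeholder):
curl \
--request GET 'http://api.example.com/_nodes/{node_id}/shutdown' \
--header "Authorization: $API_KEY"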
Path parameters
-
node_id
string Required The node identifier. This parameter is not validated against the cluster's active nodes. This enables you to register a node for shut down while it is offline. No error is thrown if you specify an invalid node ID.
Query parameters
-
master_timeout
string The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
Values are nanos, micros, ms, s, m, h, or d.
-
timeout
string The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
Values are nanos, micros, ms, s, m, h, or d.
Body
Required
-
type
string Required Values are restart, remove, or replace.
-
reason
string Required A human-readable reason that the node is being shut down. This field provides information for other cluster operators; it does not affect the shutdown process.
-
allocation_delay
string Only valid if type is restart. Controls how long Elasticsearch will wait for the node to restart and join the cluster before reassigning its shards to other nodes. This works the same as delaying allocation with the index.unassigned.node_left.delayed_timeout setting. If you specify both a restart allocation delay and an index-level allocation delay, the longer of the two is used.
-
target_node_name
string Only valid if type is replace. Specifies the name of the node that is replacing the node being shut down. Shards from the shut-down node are only allowed to be allocated to the target node, and no other data will be allocated to the target node. During relocation of data, certain allocation rules are ignored, such as disk watermarks or user attribute filtering rules.
curl \
--request PUT 'http://api.example.com/_nodes/{node_id}/shutdown' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"type\": \"restart\",\n \"reason\": \"Demonstrating how the node shutdown API works\",\n \"allocation_delay\": \"20m\"\n}"'
{
"type": "restart",
"reason": "Demonstrating how the node shutdown API works",
"allocation_delay": "20m"
}
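A replace-type shutdown would instead carry a target_node_name, as described above; a hypothetical request body (the node name is illustrative):
{
"type": "replace",
"reason": "Replacing failed hardware",
"target_node_name": "node-2"
}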
Get rollup job information
Deprecated
Technical preview
Get the configuration, stats, and status of rollup jobs.
NOTE: This API returns only active (both STARTED
and STOPPED
) jobs.
If a job was created, ran for a while, then was deleted, the API does not return any details about it.
For details about a historical rollup job, the rollup capabilities API may be more useful.
curl \
--request GET 'http://api.example.com/_rollup/job' \
--header "Authorization: $API_KEY"
{
"jobs": [
{
"config": {
"id": "sensor",
"index_pattern": "sensor-*",
"rollup_index": "sensor_rollup",
"cron": "*/30 * * * * ?",
"groups": {
"date_histogram": {
"fixed_interval": "1h",
"delay": "7d",
"field": "timestamp",
"time_zone": "UTC"
},
"terms": {
"fields": [
"node"
]
}
},
"metrics": [
{
"field": "temperature",
"metrics": [
"min",
"max",
"sum"
]
},
{
"field": "voltage",
"metrics": [
"avg"
]
}
],
"timeout": "20s",
"page_size": 1000
},
"status": {
"job_state": "stopped"
},
"stats": {
"pages_processed": 0,
"documents_processed": 0,
"rollups_indexed": 0,
"trigger_count": 0,
"index_failures": 0,
"index_time_in_ms": 0,
"index_total": 0,
"search_failures": 0,
"search_time_in_ms": 0,
"search_total": 0,
"processing_time_in_ms": 0,
"processing_total": 0
}
}
]
}
Get the rollup job capabilities
Deprecated
Technical preview
Get the capabilities of any rollup jobs that have been configured for a specific index or index pattern.
This API is useful because a rollup job is often configured to roll up only a subset of fields from the source index. Furthermore, only certain aggregations can be configured for various fields, leading to a limited subset of functionality depending on that configuration. This API enables you to inspect an index and determine:
- Does this index have associated rollup data somewhere in the cluster?
- If yes to the first question, what fields were rolled up, what aggregations can be performed, and where does the data live?
Path parameters
-
id
string Required The index, indices, or index pattern to return rollup capabilities for.
_all
may be used to fetch rollup capabilities from all jobs.
curl \
--request GET 'http://api.example.com/_rollup/data/{id}' \
--header "Authorization: $API_KEY"
{
"sensor-*" : {
"rollup_jobs" : [
{
"job_id" : "sensor",
"rollup_index" : "sensor_rollup",
"index_pattern" : "sensor-*",
"fields" : {
"node" : [
{
"agg" : "terms"
}
],
"temperature" : [
{
"agg" : "min"
},
{
"agg" : "max"
},
{
"agg" : "sum"
}
],
"timestamp" : [
{
"agg" : "date_histogram",
"time_zone" : "UTC",
"fixed_interval" : "1h",
"delay": "7d"
}
],
"voltage" : [
{
"agg" : "avg"
}
]
}
}
]
}
}
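To inspect rollup capabilities across all jobs rather than one pattern, the _all identifier mentioned above can be used:
curl \
--request GET 'http://api.example.com/_rollup/data/_all' \
--header "Authorization: $API_KEY"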
Create or update a script or search template
Creates or updates a stored script or search template.
Path parameters
-
id
string Required The identifier for the stored script or search template. It must be unique within the cluster.
Query parameters
-
context
string The context in which the script or search template should run. To prevent errors, the API immediately compiles the script or template in this context. If you specify both this and the
<context>
path parameter, the API uses the request path parameter. -
master_timeout
string The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. It can also be set to
-1
to indicate that the request should never time out. -
timeout
string The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. It can also be set to
-1
to indicate that the request should never time out.
curl \
--request PUT 'http://api.example.com/_scripts/{id}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"script\": {\n \"lang\": \"mustache\",\n \"source\": {\n \"query\": {\n \"match\": {\n \"message\": \"{{query_string}}\"\n }\n },\n \"from\": \"{{from}}\",\n \"size\": \"{{size}}\"\n }\n }\n}"'
{
"script": {
"lang": "mustache",
"source": {
"query": {
"match": {
"message": "{{query_string}}"
}
},
"from": "{{from}}",
"size": "{{size}}"
}
}
}
{
"script": {
"lang": "painless",
"source": "Math.log(_score * 2) + params['my_modifier']"
}
}
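Once stored, a mustache search template like the one above can be invoked with the search template API; a minimal sketch, assuming the template was stored with the ID my-search-template and that my-index exists (both names are illustrative):
curl \
--request GET 'http://api.example.com/my-index/_search/template' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"id": "my-search-template", "params": {"query_string": "hello world", "from": 0, "size": 10}}'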
Create or update a script or search template
Creates or updates a stored script or search template.
Query parameters
-
context
string The context in which the script or search template should run. To prevent errors, the API immediately compiles the script or template in this context. If you specify both this and the
<context>
path parameter, the API uses the request path parameter. -
master_timeout
string The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. It can also be set to
-1
to indicate that the request should never time out. -
timeout
string The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. It can also be set to
-1
to indicate that the request should never time out.
curl \
--request POST 'http://api.example.com/_scripts/{id}/{context}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"script\": {\n \"lang\": \"mustache\",\n \"source\": {\n \"query\": {\n \"match\": {\n \"message\": \"{{query_string}}\"\n }\n },\n \"from\": \"{{from}}\",\n \"size\": \"{{size}}\"\n }\n }\n}"'
{
"script": {
"lang": "mustache",
"source": {
"query": {
"match": {
"message": "{{query_string}}"
}
},
"from": "{{from}}",
"size": "{{size}}"
}
}
}
{
"script": {
"lang": "painless",
"source": "Math.log(_score * 2) + params['my_modifier']"
}
}
Run a script
Technical preview
Runs a script and returns a result. Use this API to build and test scripts, such as when defining a script for a runtime field. This API requires very few dependencies and is especially useful if you don't have permissions to write documents on a cluster.
The API uses several contexts, which control how scripts are run, what variables are available at runtime, and what the return type is.
Each context requires a script, but additional parameters depend on the context you're using for that script.
Body
-
context
string Values are painless_test, filter, score, boolean_field, date_field, double_field, geo_point_field, ip_field, keyword_field, long_field, or composite_field.
-
context_setup
object -
script
object
curl \
--request GET 'http://api.example.com/_scripts/painless/_execute' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"script\": {\n \"source\": \"params.count / params.total\",\n \"params\": {\n \"count\": 100.0,\n \"total\": 1000.0\n }\n }\n}"'
{
"script": {
"source": "params.count / params.total",
"params": {
"count": 100.0,
"total": 1000.0
}
}
}
{
"script": {
"source": "doc['field'].value.length() <= params.max_length",
"params": {
"max_length": 4
}
},
"context": "filter",
"context_setup": {
"index": "my-index-000001",
"document": {
"field": "four"
}
}
}
{
"script": {
"source": "doc['rank'].value / params.max_rank",
"params": {
"max_rank": 5.0
}
},
"context": "score",
"context_setup": {
"index": "my-index-000001",
"document": {
"rank": 4
}
}
}
{
"result": "0.1"
}
{
"result": true
}
{
"result": 0.8
}
Run a search
Get search hits that match the query defined in the request.
You can provide search queries using the q
query string parameter or the request body.
If both are specified, only the q query string parameter is used.
If the Elasticsearch security features are enabled, you must have the read index privilege for the target data stream, index, or alias. For cross-cluster search, refer to the documentation about configuring CCS privileges.
To search a point in time (PIT) for an alias, you must have the read
index privilege for the alias's data streams or indices.
Search slicing
When paging through a large number of documents, it can be helpful to split the search into multiple slices to consume them independently with the slice
and pit
properties.
By default the splitting is done first on the shards, then locally on each shard.
The local splitting partitions the shard into contiguous ranges based on Lucene document IDs.
For instance if the number of shards is equal to 2 and you request 4 slices, the slices 0 and 2 are assigned to the first shard and the slices 1 and 3 are assigned to the second shard.
IMPORTANT: The same point-in-time ID should be used for all slices. If different PIT IDs are used, slices can overlap and miss documents. This situation can occur because the splitting criterion is based on Lucene document IDs, which are not stable across changes to the index.
Path parameters
-
index
string | array[string] Required A comma-separated list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams and indices, omit this parameter or use * or _all.
Query parameters
-
allow_no_indices
boolean If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
-
allow_partial_search_results
boolean If true and there are shard request timeouts or shard failures, the request returns partial results. If false, it returns an error with no partial results.
To override the default behavior, you can set the search.default_allow_partial_results cluster setting to false.
-
string The analyzer to use for the query string. This parameter can be used only when the
q
query string parameter is specified. -
analyze_wildcard
boolean If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.
-
batched_reduce_size
number The number of shard results that should be reduced at once on the coordinating node. If the potential number of shards in the request can be large, this value should be used as a protection mechanism to reduce the memory overhead per search request.
-
ccs_minimize_roundtrips
boolean If
true
, network round-trips between the coordinating node and the remote clusters are minimized when running cross-cluster search (CCS) requests. -
default_operator
string The default operator for the query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.
Values are and, AND, or, or OR.
-
df
string The field to use as a default when no field prefix is given in the query string. This parameter can be used only when the
q
query string parameter is specified. -
docvalue_fields
string | array[string] A comma-separated list of fields to return as the docvalue representation of a field for each hit.
-
expand_wildcards
string | array[string] The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values such as open,hidden.
Supported values include:
- all: Match any data stream or index, including hidden ones.
- open: Match open, non-hidden indices. Also matches any non-hidden data stream.
- closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
- hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
- none: Wildcard expressions are not accepted.
-
explain
boolean If
true
, the request returns detailed information about score computation as part of a hit. -
ignore_throttled
boolean Deprecated If
true
, concrete, expanded or aliased indices will be ignored when frozen. -
include_named_queries_score
boolean If true, the response includes the score contribution from any named queries.
This functionality reruns each named query on every hit in a search response. Typically, this adds a small overhead to a request. However, using computationally expensive named queries on a large number of hits may add significant overhead.
-
lenient
boolean If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.
-
max_concurrent_shard_requests
number The number of concurrent shard requests per node that the search runs. This value should be used to limit the impact of the search on the cluster in order to limit the number of concurrent shard requests.
-
preference
string The nodes and shards used for the search. By default, Elasticsearch selects from eligible nodes and shards using adaptive replica selection, accounting for allocation awareness. Valid values are:
- _only_local to run the search only on shards on the local node.
- _local to run the search, if possible, on shards on the local node, or if not, to select shards using the default method.
- _only_nodes:<node-id>,<node-id> to run the search only on the specified node IDs. If suitable shards exist on more than one selected node, use shards on those nodes using the default method. If none of the specified nodes are available, select shards from any available node using the default method.
- _prefer_nodes:<node-id>,<node-id> to run the search, if possible, on the specified node IDs. If not, select shards using the default method.
- _shards:<shard>,<shard> to run the search only on the specified shards. You can combine this value with other preference values. However, the _shards value must come first. For example: _shards:2,3|_local.
- <custom-string> (any string that does not start with _) to route searches with the same <custom-string> to the same shards in the same order.
-
pre_filter_shard_size
number A threshold that enforces a pre-filter roundtrip to prefilter search shards based on query rewriting if the number of shards the search request expands to exceeds the threshold. This filter roundtrip can limit the number of shards significantly if, for instance, a shard cannot match any documents based on its rewrite method (for example, if date filters are mandatory to match but the shard bounds and the query are disjoint). When unspecified, the pre-filter phase is executed if any of these conditions is met:
- The request targets more than 128 shards.
- The request targets one or more read-only indices.
- The primary sort of the query targets an indexed field.
-
request_cache
boolean If true, the caching of search results is enabled for requests where size is 0. It defaults to index level settings.
-
routing
string A custom value that is used to route operations to a specific shard.
-
scroll
string The period to retain the search context for scrolling. By default, this value cannot exceed 1d (24 hours). You can change this limit by using the search.max_keep_alive cluster-level setting.
-
search_type
string Indicates how distributed term frequencies are calculated for relevance scoring.
Supported values include:
- query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
- dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.
Values are query_then_fetch or dfs_query_then_fetch.
-
stats
array[string] Specific
tag
of the request for logging and statistical purposes. -
stored_fields
string | array[string] A comma-separated list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. You can pass _source: true to return both source fields and stored fields in the search response.
-
suggest_field
string The field to use for suggestions.
-
suggest_mode
string The suggest mode. This parameter can be used only when the suggest_field and suggest_text query string parameters are specified.
Supported values include:
- missing: Only generate suggestions for terms that are not in the shard.
- popular: Only suggest terms that occur in more docs on the shard than the original term.
- always: Suggest any matching suggestions based on terms in the suggest text.
Values are missing, popular, or always.
-
suggest_size
number The number of suggestions to return. This parameter can be used only when the suggest_field and suggest_text query string parameters are specified.
-
suggest_text
string The source text for which the suggestions should be returned. This parameter can be used only when the suggest_field and suggest_text query string parameters are specified.
-
terminate_after
number The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.
IMPORTANT: Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers. If set to
0
(default), the query does not terminate early. -
timeout
string The period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. It defaults to no timeout.
-
track_total_hits
boolean | number The number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query.
-
track_scores
boolean If
true
, the request calculates and returns document scores, even if the scores are not used for sorting. -
typed_keys
boolean If true, aggregation and suggester names are prefixed by their respective types in the response.
-
rest_total_hits_as_int
boolean Indicates whether
hits.total
should be rendered as an integer or an object in the rest search response. -
version
boolean If
true
, the request returns the document version as part of a hit. -
_source
boolean | string | array[string] The source fields that are returned for matching documents. These fields are returned in the hits._source property of the search response. Valid values are:
- true to return the entire document source.
- false to not return the document source.
- <string> to return the source fields that are specified as a comma-separated list that supports wildcard (*) patterns.
-
_source_excludes
string | array[string] A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in the _source_includes query parameter. If the _source parameter is false, this parameter is ignored.
-
_source_includes
string | array[string] A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.
-
seq_no_primary_term
boolean If
true
, the request returns the sequence number and primary term of the last modification of each hit. -
q
string A query in the Lucene query string syntax. Query parameter searches do not support the full Elasticsearch Query DSL but are handy for testing.
IMPORTANT: This parameter overrides the query parameter in the request body. If both parameters are specified, documents matching the query request body parameter are not returned. A brief usage sketch appears at the end of this section.
-
size
number The number of hits to return. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.
-
from
number The starting document offset, which must be non-negative. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.
-
sort
string | array[string] A comma-separated list of
<field>:<direction>
pairs.
Body
-
aggregations
object Defines the aggregations that are run as part of the search request.
External documentation -
collapse
object External documentation -
explain
boolean If
true
, the request returns detailed information about score computation as part of a hit. -
ext
object Configuration of search extensions defined by Elasticsearch plugins.
-
from
number The starting document offset, which must be non-negative. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.
-
highlight
object -
track_total_hits
boolean | number Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. Defaults to 10,000 hits.
-
indices_boost
array[object] Boost the _score of documents from specified indices. The boost value is the factor by which scores are multiplied. A boost value greater than 1.0 increases the score. A boost value between 0 and 1.0 decreases the score. External documentation
-
docvalue_fields
array[object] An array of wildcard (*) field patterns. The request returns doc values for field names matching these patterns in the hits.fields property of the response. External documentation
-
knn
object | array[object] The approximate kNN search to run.
-
rank
object -
min_score
number The minimum
_score
for matching documents. Documents with a lower_score
are not included in search results and results collected by aggregations. -
post_filter
object An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.
External documentation -
profile
boolean Set to
true
to return detailed timing information about the execution of individual components in a search request. NOTE: This is a debugging tool and adds significant overhead to search execution. -
query
object An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.
External documentation
-
rescore
object | array[object] Can be used to improve precision by reordering just the top (for example, 100 to 500) documents returned by the query and post_filter phases.
-
retriever
object -
script_fields
object Retrieve a script evaluation (based on different fields) for each hit.
-
search_after
array[number | string | boolean | null] A field value.
-
size
number The number of hits to return, which must not be negative. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after property.
-
slice
object
-
_source
boolean | object Defines how to fetch a source. Fetching can be disabled entirely, or the source can be filtered.
-
fields
array[object] An array of wildcard (*) field patterns. The request returns values for field names matching these patterns in the hits.fields property of the response.
suggest
object -
terminate_after
number The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.
IMPORTANT: Use with caution. Elasticsearch applies this property to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this property for requests that target data streams with backing indices across multiple data tiers.
If set to
0
(default), the query does not terminate early. -
timeout
string The period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.
-
track_scores
boolean If
true
, calculate and return document scores, even if the scores are not used for sorting. -
version
boolean If
true
, the request returns the document version as part of a hit. -
seq_no_primary_term
boolean If true, the request returns the sequence number and primary term of the last modification of each hit. External documentation
-
stored_fields
string | array[string] -
pit
object -
runtime_mappings
object -
stats
array[string] The stats groups to associate with the search. Each group maintains a statistics aggregation for its associated searches. You can retrieve these stats using the indices stats API.
curl \
--request GET 'http://api.example.com/{index}/_search' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"query\": {\n \"term\": {\n \"user.id\": \"kimchy\"\n }\n }\n}"'
{
"query": {
"term": {
"user.id": "kimchy"
}
}
}
{
"size": 100,
"query": {
"match" : {
"title" : "elasticsearch"
}
},
"pit": {
"id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
"keep_alive": "1m"
}
}
{
"slice": {
"id": 0,
"max": 2
},
"query": {
"match": {
"message": "foo"
}
},
"pit": {
"id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=="
}
}
{
"took": 5,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 20,
"relation": "eq"
},
"max_score": 1.3862942,
"hits": [
{
"_index": "my-index-000001",
"_id": "0",
"_score": 1.3862942,
"_source": {
"@timestamp": "2099-11-15T14:12:12",
"http": {
"request": {
"method": "get"
},
"response": {
"status_code": 200,
"bytes": 1070000
},
"version": "1.1"
},
"source": {
"ip": "127.0.0.1"
},
"message": "GET /search HTTP/1.1 200 1070000",
"user": {
"id": "kimchy"
}
}
}
]
}
}
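As a sketch of the q query string parameter described earlier, the first request body example above could be written roughly equivalently as (the index name is illustrative):
curl \
--request GET 'http://api.example.com/my-index-000001/_search?q=user.id:kimchy' \
--header "Authorization: $API_KEY"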
Clear the cache
Technical preview
Clear indices and data streams from the shared cache for partially mounted indices.
Path parameters
-
index
string | array[string] Required A comma-separated list of data streams, indices, and aliases to clear from the cache. It supports wildcards (
*
).
Query parameters
-
expand_wildcards
string | array[string] Whether to expand wildcard expressions to concrete indices that are open, closed, or both.
Supported values include:
- all: Match any data stream or index, including hidden ones.
- open: Match open, non-hidden indices. Also matches any non-hidden data stream.
- closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
- hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
- none: Wildcard expressions are not accepted.
-
allow_no_indices
boolean Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes the _all string or when no indices have been specified.)
curl \
--request POST 'http://api.example.com/{index}/_searchable_snapshots/cache/clear' \
--header "Authorization: $API_KEY"
Create a cross-cluster API key
Create an API key of the cross_cluster
type for API key-based remote cluster access.
A cross_cluster
API key cannot be used to authenticate through the REST interface.
IMPORTANT: To authenticate this request you must use a credential that is not an API key. Even if you use an API key that has the required privilege, the API returns an error.
Cross-cluster API keys are created by the Elasticsearch API key service, which is automatically enabled.
NOTE: Unlike REST API keys, a cross-cluster API key does not capture permissions of the authenticated user. The API key’s effective permission is exactly as specified with the access
property.
A successful request returns a JSON structure that contains the API key, its unique ID, and its name. If applicable, it also returns expiration information for the API key in milliseconds.
By default, API keys never expire. You can specify expiration information when you create the API keys.
Cross-cluster API keys can only be updated with the update cross-cluster API key API. Attempting to update them with the update REST API key API or the bulk update REST API keys API will result in an error.
Body
Required
-
access
object Required -
expiration
string A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
-
metadata
object -
name
string Required
curl \
--request POST 'http://api.example.com/_security/cross_cluster/api_key' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"name\": \"my-cross-cluster-api-key\",\n \"expiration\": \"1d\", \n \"access\": {\n \"search\": [ \n {\n \"names\": [\"logs*\"]\n }\n ],\n \"replication\": [ \n {\n \"names\": [\"archive*\"]\n }\n ]\n },\n \"metadata\": {\n \"description\": \"phase one\",\n \"environment\": {\n \"level\": 1,\n \"trusted\": true,\n \"tags\": [\"dev\", \"staging\"]\n }\n }\n}"'
{
"name": "my-cross-cluster-api-key",
"expiration": "1d",
"access": {
"search": [
{
"names": ["logs*"]
}
],
"replication": [
{
"names": ["archive*"]
}
]
},
"metadata": {
"description": "phase one",
"environment": {
"level": 1,
"trusted": true,
"tags": ["dev", "staging"]
}
}
}
{
"created": true,
"token": {
"name": "Jk5J1HgBuyBK5TpDrdo4",
"value": "AAEAAWVsYXN0aWM...vZmxlZXQtc2VydmVyL3Rva2VuMTo3TFdaSDZ"
}
}
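Because cross-cluster API keys can be changed only with the update cross-cluster API key API, a later change to the access definition is a separate request; a minimal sketch (the key ID is illustrative):
curl \
--request PUT 'http://api.example.com/_security/cross_cluster/api_key/VuaCfGcBCdbkQm-e5aOx' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"access": {"search": [{"names": ["logs*"]}]}}'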
Create or update roles
The role management APIs are generally the preferred way to manage roles in the native realm, rather than using file-based role management. The create or update roles API cannot update roles that are defined in roles files. File-based role management is not available in Elastic Serverless.
Path parameters
-
name
string Required The name of the role that is being created or updated. On Elasticsearch Serverless, the role name must begin with a letter or digit and can only contain letters, digits and the characters '_', '-', and '.'. Each role must have a unique name, as this will serve as the identifier for that role.
Query parameters
-
refresh
string If true (the default) then refresh the affected shards to make this operation visible to search, if wait_for then wait for a refresh to make this operation visible to search, if false then do nothing with refreshes.
Values are true, false, or wait_for.
Body
Required
-
applications
array[object] A list of application privilege entries.
-
cluster
array[string] A list of cluster privileges. These privileges define the cluster-level actions for users with this role.
-
global
object An object defining global privileges. A global privilege is a form of cluster privilege that is request-aware. Support for global privileges is currently limited to the management of application privileges.
-
indices
array[object] A list of indices permissions entries.
-
remote_indices
array[object] A list of remote indices permissions entries.
NOTE: Remote indices are effective for remote clusters configured with the API key based model. They have no effect for remote clusters configured with the certificate based model.
-
remote_cluster
array[object] A list of remote cluster permissions entries.
-
metadata
object -
run_as
array[string] A list of users that the owners of this role can impersonate. Note: in Serverless, the run-as feature is disabled. For API compatibility, you can still specify an empty
run_as
field, but a non-empty list will be rejected. -
description
string Optional description of the role descriptor.
-
transient_metadata
object Indicates roles that might be incompatible with the current cluster license, specifically roles with document and field level security. When the cluster license doesn’t allow certain features for a given role, this parameter is updated dynamically to list the incompatible features. If enabled is false, the role is ignored, but is still listed in the response from the authenticate API.
curl \
--request POST 'http://api.example.com/_security/role/{name}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"description\": \"Grants full access to all management features within the cluster.\",\n \"cluster\": [\"all\"],\n \"indices\": [\n {\n \"names\": [ \"index1\", \"index2\" ],\n \"privileges\": [\"all\"],\n \"field_security\" : { // optional\n \"grant\" : [ \"title\", \"body\" ]\n },\n \"query\": \"{\\\"match\\\": {\\\"title\\\": \\\"foo\\\"}}\" // optional\n }\n ],\n \"applications\": [\n {\n \"application\": \"myapp\",\n \"privileges\": [ \"admin\", \"read\" ],\n \"resources\": [ \"*\" ]\n }\n ],\n \"run_as\": [ \"other_user\" ], // optional\n \"metadata\" : { // optional\n \"version\" : 1\n }\n}"'
{
"description": "Grants full access to all management features within the cluster.",
"cluster": ["all"],
"indices": [
{
"names": [ "index1", "index2" ],
"privileges": ["all"],
"field_security" : { // optional
"grant" : [ "title", "body" ]
},
"query": "{\"match\": {\"title\": \"foo\"}}" // optional
}
],
"applications": [
{
"application": "myapp",
"privileges": [ "admin", "read" ],
"resources": [ "*" ]
}
],
"run_as": [ "other_user" ], // optional
"metadata" : { // optional
"version" : 1
}
}
{
"cluster": ["cluster:monitor/main"],
"indices": [
{
"names": ["test"],
"privileges": ["read", "indices:admin/get"]
}
]
}
{
"remote_indices": [
{
"clusters": ["my_remote"],
"names": ["logs*"],
"privileges": ["read", "read_cross_cluster", "view_index_metadata"]
}
],
"remote_cluster": [
{
"clusters": ["my_remote"],
"privileges": ["monitor_stats"]
}
]
}
{
"role": {
"created": true
}
}
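A stored role can then be retrieved for verification with the get roles API; a minimal sketch (the role name is illustrative):
curl \
--request GET 'http://api.example.com/_security/role/my_admin_role' \
--header "Authorization: $API_KEY"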
Delete role mappings
Added in 5.5.0
Role mappings define which roles are assigned to each user. The role mapping APIs are generally the preferred way to manage role mappings rather than using role mapping files. The delete role mappings API cannot remove role mappings that are defined in role mapping files.
Path parameters
-
name
string Required The distinct name that identifies the role mapping. The name is used solely as an identifier to facilitate interaction via the API; it does not affect the behavior of the mapping in any way.
Query parameters
-
refresh
string If true (the default) then refresh the affected shards to make this operation visible to search, if wait_for then wait for a refresh to make this operation visible to search, if false then do nothing with refreshes.
Values are true, false, or wait_for.
curl \
--request DELETE 'http://api.example.com/_security/role_mapping/{name}' \
--header "Authorization: $API_KEY"
{
"found" : true
}
Find API keys with a query
Added in 7.15.0
Get a paginated list of API keys and their information. You can optionally filter the results with a query.
To use this API, you must have at least the manage_own_api_key
or the read_security
cluster privileges.
If you have only the manage_own_api_key
privilege, this API returns only the API keys that you own.
If you have the read_security
, manage_api_key
, or greater privileges (including manage_security
), this API returns all API keys regardless of ownership.
Query parameters
-
with_limited_by
boolean Return the snapshot of the owner user's role descriptors associated with the API key. An API key's actual permission is the intersection of its assigned role descriptors and the owner user's role descriptors (effectively limited by it). An API key cannot retrieve any API key’s limited-by role descriptors (including itself) unless it has
manage_api_key
or higher privileges. -
with_profile_uid
boolean Determines whether to also retrieve the profile UID for the API key owner principal. If it exists, the profile UID is returned under the
profile_uid
response field for each API key. -
typed_keys
boolean Determines whether aggregation names are prefixed by their respective types in the response.
Body
-
aggregations
object Any aggregations to run over the corpus of returned API keys. Aggregations and queries work together. Aggregations are computed only on the API keys that match the query. This supports only a subset of aggregation types, namely:
terms
,range
,date_range
,missing
,cardinality
,value_count
,composite
,filter
, and filters
. Additionally, aggregations only run over the same subset of fields that query works with. -
query
object -
from
number The starting document offset. It must not be negative. By default, you cannot page through more than 10,000 hits using the
from
and size
parameters. To page through more hits, use the search_after
parameter. -
size
number The number of hits to return. It must not be negative. The
size
parameter can be set to0
, in which case no API key matches are returned, only the aggregation results. By default, you cannot page through more than 10,000 hits using thefrom
andsize
parameters. To page through more hits, use thesearch_after
parameter. -
search_after
array[number | string | boolean | null] A field value.
curl \
--request GET 'http://api.example.com/_security/_query/api_key' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"query\": {\n \"ids\": {\n \"values\": [\n \"VuaCfGcBCdbkQm-e5aOx\"\n ]\n }\n }\n}"'
{
"query": {
"ids": {
"values": [
"VuaCfGcBCdbkQm-e5aOx"
]
}
}
}
{
"query": {
"bool": {
"must": [
{
"prefix": {
"name": "app1-key-"
}
},
{
"term": {
"invalidated": "false"
}
}
],
"must_not": [
{
"term": {
"name": "app1-key-01"
}
}
],
"filter": [
{
"wildcard": {
"username": "org-*-user"
}
},
{
"term": {
"metadata.environment": "production"
}
}
]
}
},
"from": 20,
"size": 10,
"sort": [
{ "creation": { "order": "desc", "format": "date_time" } },
"name"
]
}
{
"query": {
"term": {
"name": {
"value": "application-key-1"
}
}
}
}
{
"api_keys": [
{
"id": "VuaCfGcBCdbkQm-e5aOx",
"name": "application-key-1",
"creation": 1548550550158,
"expiration": 1548551550158,
"invalidated": false,
"username": "myuser",
"realm": "native1",
"realm_type": "native",
"metadata": {
"application": "my-application"
},
"role_descriptors": { },
"limited_by": [
{
"role-power-user": {
"cluster": [
"monitor"
],
"indices": [
{
"names": [
"*"
],
"privileges": [
"read"
],
"allow_restricted_indices": false
}
],
"applications": [ ],
"run_as": [ ],
"metadata": { },
"transient_metadata": {
"enabled": true
}
}
}
]
}
]
}
{
"total": 100,
"count": 10,
"api_keys": [
{
"id": "CLXgVnsBOGkf8IyjcXU7",
"name": "app1-key-79",
"creation": 1629250154811,
"invalidated": false,
"username": "org-admin-user",
"realm": "native1",
"metadata": {
"environment": "production"
},
"role_descriptors": { },
"_sort": [
"2021-08-18T01:29:14.811Z",
"app1-key-79"
]
},
{
"id": "BrXgVnsBOGkf8IyjbXVB",
"name": "app1-key-78",
"creation": 1629250153794,
"invalidated": false,
"username": "org-admin-user",
"realm": "native1",
"metadata": {
"environment": "production"
},
"role_descriptors": { },
"_sort": [
"2021-08-18T01:29:13.794Z",
"app1-key-78"
]
}
]
}
{
"total": 3,
"count": 3,
"api_keys": [
{
"id": "nkvrGXsB8w290t56q3Rg",
"name": "my-api-key-1",
"creation": 1628227480421,
"expiration": 1629091480421,
"invalidated": false,
"username": "elastic",
"realm": "reserved",
"realm_type": "reserved",
"metadata": {
"letter": "a"
},
"role_descriptors": {
"role-a": {
"cluster": [
"monitor"
],
"indices": [
{
"names": [
"index-a"
],
"privileges": [
"read"
],
"allow_restricted_indices": false
}
],
"applications": [ ],
"run_as": [ ],
"metadata": { },
"transient_metadata": {
"enabled": true
}
}
}
},
{
"id": "oEvrGXsB8w290t5683TI",
"name": "my-api-key-2",
"creation": 1628227498953,
"expiration": 1628313898953,
"invalidated": false,
"username": "elastic",
"realm": "reserved",
"metadata": {
"letter": "b"
},
"role_descriptors": { }
}
]
}
Find API keys with a query
Added in 7.15.0
Get a paginated list of API keys and their information. You can optionally filter the results with a query.
To use this API, you must have at least the manage_own_api_key
or the read_security
cluster privileges.
If you have only the manage_own_api_key
privilege, this API returns only the API keys that you own.
If you have the read_security
, manage_api_key
, or greater privileges (including manage_security
), this API returns all API keys regardless of ownership.
Query parameters
-
with_limited_by
boolean Return the snapshot of the owner user's role descriptors associated with the API key. An API key's actual permission is the intersection of its assigned role descriptors and the owner user's role descriptors (effectively limited by it). An API key cannot retrieve any API key’s limited-by role descriptors (including itself) unless it has
manage_api_key
or higher privileges. -
with_profile_uid
boolean Determines whether to also retrieve the profile UID for the API key owner principal. If it exists, the profile UID is returned under the
profile_uid
response field for each API key. -
typed_keys
boolean Determines whether aggregation names are prefixed by their respective types in the response.
Body
-
aggregations
object Any aggregations to run over the corpus of returned API keys. Aggregations and queries work together. Aggregations are computed only on the API keys that match the query. This supports only a subset of aggregation types, namely:
terms
,range
,date_range
,missing
,cardinality
,value_count
,composite
,filter
, and filters
. Additionally, aggregations only run over the same subset of fields that query works with. -
query
object -
from
number The starting document offset. It must not be negative. By default, you cannot page through more than 10,000 hits using the
from
and size
parameters. To page through more hits, use the search_after
parameter. -
size
number The number of hits to return. It must not be negative. The
size
parameter can be set to0
, in which case no API key matches are returned, only the aggregation results. By default, you cannot page through more than 10,000 hits using thefrom
andsize
parameters. To page through more hits, use thesearch_after
parameter. -
search_after
array[number | string | boolean | null] A field value.
curl \
--request POST 'http://api.example.com/_security/_query/api_key' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"query\": {\n \"ids\": {\n \"values\": [\n \"VuaCfGcBCdbkQm-e5aOx\"\n ]\n }\n }\n}"'
{
"query": {
"ids": {
"values": [
"VuaCfGcBCdbkQm-e5aOx"
]
}
}
}
{
"query": {
"bool": {
"must": [
{
"prefix": {
"name": "app1-key-"
}
},
{
"term": {
"invalidated": "false"
}
}
],
"must_not": [
{
"term": {
"name": "app1-key-01"
}
}
],
"filter": [
{
"wildcard": {
"username": "org-*-user"
}
},
{
"term": {
"metadata.environment": "production"
}
}
]
}
},
"from": 20,
"size": 10,
"sort": [
{ "creation": { "order": "desc", "format": "date_time" } },
"name"
]
}
{
"query": {
"term": {
"name": {
"value": "application-key-1"
}
}
}
}
{
"api_keys": [
{
"id": "VuaCfGcBCdbkQm-e5aOx",
"name": "application-key-1",
"creation": 1548550550158,
"expiration": 1548551550158,
"invalidated": false,
"username": "myuser",
"realm": "native1",
"realm_type": "native",
"metadata": {
"application": "my-application"
},
"role_descriptors": { },
"limited_by": [
{
"role-power-user": {
"cluster": [
"monitor"
],
"indices": [
{
"names": [
"*"
],
"privileges": [
"read"
],
"allow_restricted_indices": false
}
],
"applications": [ ],
"run_as": [ ],
"metadata": { },
"transient_metadata": {
"enabled": true
}
}
}
]
}
]
}
{
"total": 100,
"count": 10,
"api_keys": [
{
"id": "CLXgVnsBOGkf8IyjcXU7",
"name": "app1-key-79",
"creation": 1629250154811,
"invalidated": false,
"username": "org-admin-user",
"realm": "native1",
"metadata": {
"environment": "production"
},
"role_descriptors": { },
"_sort": [
"2021-08-18T01:29:14.811Z",
"app1-key-79"
]
},
{
"id": "BrXgVnsBOGkf8IyjbXVB",
"name": "app1-key-78",
"creation": 1629250153794,
"invalidated": false,
"username": "org-admin-user",
"realm": "native1",
"metadata": {
"environment": "production"
},
"role_descriptors": { },
"_sort": [
"2021-08-18T01:29:13.794Z",
"app1-key-78"
]
}
]
}
{
"total": 3,
"count": 3,
"api_keys": [
{
"id": "nkvrGXsB8w290t56q3Rg",
"name": "my-api-key-1",
"creation": 1628227480421,
"expiration": 1629091480421,
"invalidated": false,
"username": "elastic",
"realm": "reserved",
"realm_type": "reserved",
"metadata": {
"letter": "a"
},
"role_descriptors": {
"role-a": {
"cluster": [
"monitor"
],
"indices": [
{
"names": [
"index-a"
],
"privileges": [
"read"
],
"allow_restricted_indices": false
}
],
"applications": [ ],
"run_as": [ ],
"metadata": { },
"transient_metadata": {
"enabled": true
}
}
}
},
{
"id": "oEvrGXsB8w290t5683TI",
"name": "my-api-key-2",
"creation": 1628227498953,
"expiration": 1628313898953,
"invalidated": false,
"username": "elastic",
"realm": "reserved",
"metadata": {
"letter": "b"
},
"role_descriptors": { }
}
]
}
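As a sketch of the aggregations body property described above, the following request returns no hits and instead buckets keys by invalidation status (the aggregation name is illustrative):
curl \
--request POST 'http://api.example.com/_security/_query/api_key' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"size": 0, "aggregations": {"keys_by_invalidated": {"terms": {"field": "invalidated"}}}}'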
Clone a snapshot
Added in 7.10.0
Clone part or all of a snapshot into another snapshot in the same repository.
Path parameters
-
repository
string Required The name of the snapshot repository that both source and target snapshot belong to.
-
snapshot
string Required The source snapshot name.
-
target_snapshot
string Required The target snapshot name.
Query parameters
-
master_timeout
string The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to
-1
.
curl \
--request PUT 'http://api.example.com/_snapshot/{repository}/{snapshot}/_clone/{target_snapshot}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"indices\": \"index_a,index_b\"\n}"'
{
"indices": "index_a,index_b"
}
Create a snapshot
Added in 0.0.0
Take a snapshot of a cluster or of data streams and indices.
Path parameters
-
repository
string Required The name of the repository for the snapshot.
-
snapshot
string Required The name of the snapshot. It supports date math. It must be unique in the repository.
Query parameters
-
master_timeout
string The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
wait_for_completion
boolean If true, the request returns a response when the snapshot is complete. If false, the request returns a response when the snapshot initializes.
Body
-
expand_wildcards
string | array[string] -
feature_states
array[string] The feature states to include in the snapshot. Each feature state includes one or more system indices containing related data. You can view a list of eligible features using the get features API.
If include_global_state is true, all current feature states are included by default. If include_global_state is false, no feature states are included by default.
Note that specifying an empty array will result in the default behavior. To exclude all feature states, regardless of the include_global_state value, specify an array with only the value none (["none"]).
-
include_global_state
boolean If true, the current cluster state is included in the snapshot. The cluster state includes persistent cluster settings, composable index templates, legacy index templates, ingest pipelines, and ILM policies. It also includes data stored in system indices, such as Watches and task records (configurable via feature_states).
-
indices
string | array[string] -
metadata
object -
partial
boolean If true, it enables you to restore a partial snapshot of indices with unavailable shards. Only shards that were successfully included in the snapshot will be restored. All missing shards will be recreated as empty.
If false, the entire restore operation will fail if one or more indices included in the snapshot do not have all primary shards available.
curl \
--request POST 'http://api.example.com/_snapshot/{repository}/{snapshot}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"indices\": \"index_1,index_2\",\n \"ignore_unavailable\": true,\n \"include_global_state\": false,\n \"metadata\": {\n \"taken_by\": \"user123\",\n \"taken_because\": \"backup before upgrading\"\n }\n}"'
{
"indices": "index_1,index_2",
"ignore_unavailable": true,
"include_global_state": false,
"metadata": {
"taken_by": "user123",
"taken_because": "backup before upgrading"
}
}
{
"snapshot": {
"snapshot": "snapshot_2",
"uuid": "vdRctLCxSketdKb54xw67g",
"repository": "my_repository",
"version_id": <version_id>,
"version": <version>,
"indices": [],
"data_streams": [],
"feature_states": [],
"include_global_state": false,
"metadata": {
"taken_by": "user123",
"taken_because": "backup before upgrading"
},
"state": "SUCCESS",
"start_time": "2020-06-25T14:00:28.850Z",
"start_time_in_millis": 1593093628850,
"end_time": "2020-06-25T14:00:28.850Z",
"end_time_in_millis": 1593094752018,
"duration_in_millis": 0,
"failures": [],
"shards": {
"total": 0,
"failed": 0,
"successful": 0
}
}
}
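To block until the snapshot finishes instead of returning at initialization, add the wait_for_completion query parameter described above; a minimal sketch (repository and snapshot names are illustrative):
curl \
--request POST 'http://api.example.com/_snapshot/my_repository/snapshot_2?wait_for_completion=true' \
--header "Authorization: $API_KEY"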
Get a synonym rule
Added in 8.10.0
Get a synonym rule from a synonym set.
curl \
--request GET 'http://api.example.com/_synonyms/{set_id}/{rule_id}' \
--header "Authorization: $API_KEY"
{
"id": "test-1",
"synonyms": "hello, hi"
}
Delete a transform
Added in 7.5.0
Path parameters
-
transform_id
string Required Identifier for the transform.
Query parameters
-
force
boolean If this value is false, the transform must be stopped before it can be deleted. If true, the transform is deleted regardless of its current state.
-
delete_dest_index
boolean If this value is true, the destination index is deleted together with the transform. If false, the destination index will not be deleted.
-
timeout
string Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request DELETE 'http://api.example.com/_transform/{transform_id}' \
--header "Authorization: $API_KEY"
{
"acknowledged": true
}
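To delete a transform that has not been stopped, the force query parameter described above can be added; a sketch (the transform ID is illustrative):
curl \
--request DELETE 'http://api.example.com/_transform/ecommerce_transform?force=true' \
--header "Authorization: $API_KEY"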
Update a transform
Added in 7.2.0
Updates certain properties of a transform.
All updated properties except description do not take effect until after the transform starts the next checkpoint, which ensures data consistency within each checkpoint. To use this API, you must have read and view_index_metadata privileges for the source indices. You must also have index and read privileges for the destination index. When Elasticsearch security features are enabled, the transform remembers which roles the user who updated it had at the time of update and runs with those privileges.
Path parameters
-
transform_id
string Required Identifier for the transform.
Query parameters
-
defer_validation
boolean When true, deferrable validations are not run. This behavior may be desired if the source index does not exist until after the transform is created.
-
timeout
string Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
Body
Required
-
dest
object -
description
string Free text description of the transform.
-
frequency
string A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
-
_meta
object -
source
object -
settings
object -
sync
object
-
retention_policy
object | string | null Defines a retention policy for the transform. Data that meets the defined criteria is deleted from the destination index.
curl \
--request POST 'http://api.example.com/_transform/{transform_id}/_update' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"source\": {\n \"index\": \"kibana_sample_data_ecommerce\",\n \"query\": {\n \"term\": {\n \"geoip.continent_name\": {\n \"value\": \"Asia\"\n }\n }\n }\n },\n \"pivot\": {\n \"group_by\": {\n \"customer_id\": {\n \"terms\": {\n \"field\": \"customer_id\",\n \"missing_bucket\": true\n }\n }\n },\n \"aggregations\": {\n \"max_price\": {\n \"max\": {\n \"field\": \"taxful_total_price\"\n }\n }\n }\n },\n \"description\": \"Maximum priced ecommerce data by customer_id in Asia\",\n \"dest\": {\n \"index\": \"kibana_sample_data_ecommerce_transform1\",\n \"pipeline\": \"add_timestamp_pipeline\"\n },\n \"frequency\": \"5m\",\n \"sync\": {\n \"time\": {\n \"field\": \"order_date\",\n \"delay\": \"60s\"\n }\n },\n \"retention_policy\": {\n \"time\": {\n \"field\": \"order_date\",\n \"max_age\": \"30d\"\n }\n }\n}"'
{
"source": {
"index": "kibana_sample_data_ecommerce",
"query": {
"term": {
"geoip.continent_name": {
"value": "Asia"
}
}
}
},
"pivot": {
"group_by": {
"customer_id": {
"terms": {
"field": "customer_id",
"missing_bucket": true
}
}
},
"aggregations": {
"max_price": {
"max": {
"field": "taxful_total_price"
}
}
}
},
"description": "Maximum priced ecommerce data by customer_id in Asia",
"dest": {
"index": "kibana_sample_data_ecommerce_transform1",
"pipeline": "add_timestamp_pipeline"
},
"frequency": "5m",
"sync": {
"time": {
"field": "order_date",
"delay": "60s"
}
},
"retention_policy": {
"time": {
"field": "order_date",
"max_age": "30d"
}
}
}
{
"id": "simple-kibana-ecomm-pivot",
"authorization": {
"roles": [
"superuser"
]
},
"version": "10.0.0",
"create_time": 1712951576767,
"source": {
"index": [
"kibana_sample_data_ecommerce"
],
"query": {
"term": {
"geoip.continent_name": {
"value": "Asia"
}
}
}
},
"dest": {
"index": "kibana_sample_data_ecommerce_transform_v2",
"pipeline": "add_timestamp_pipeline"
},
"frequency": "15m",
"sync": {
"time": {
"field": "order_date",
"delay": "120s"
}
},
"pivot": {
"group_by": {
"customer_id": {
"terms": {
"field": "customer_id",
"missing_bucket": true
}
}
},
"aggregations": {
"max_price": {
"max": {
"field": "taxful_total_price"
}
}
}
},
"description": "Maximum priced ecommerce data by customer_id in Asia",
"settings": {},
"retention_policy": {
"time": {
"field": "order_date",
"max_age": "30d"
}
}
}