Elasticsearch API
http://api.example.com
Elasticsearch provides REST APIs that are used by the UI components and can be called directly to configure and access Elasticsearch features.
Documentation source and versions
This documentation is derived from the 9.0 branch of the elasticsearch-specification repository. It is provided under the Attribution-NonCommercial-NoDerivatives 4.0 International license.
This documentation contains work-in-progress information for future Elastic Stack releases.
Last updated on Jun 3, 2025.
This API is provided under the Apache 2.0 license.
Get component templates
Added in 5.1.0
Get information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.
Query parameters
- h (string | array[string]): List of columns to appear in the response. Supports simple wildcards.
- s (string | array[string]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
- local (boolean): If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
- master_timeout (string): The period to wait for a connection to the master node. Values are -1 or 0.
curl \
--request GET 'http://api.example.com/_cat/component_templates' \
--header "Authorization: $API_KEY"
[
{
"name": "my-template-1",
"version": "null",
"alias_count": "0",
"mapping_count": "0",
"settings_count": "1",
"metadata_count": "0",
"included_in": "[my-index-template]"
},
{
"name": "my-template-2",
"version": null,
"alias_count": "0",
"mapping_count": "3",
"settings_count": "0",
"metadata_count": "0",
"included_in": "[my-index-template]"
}
]
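For example, to show only the name and version columns, sorted by name, you could combine the h and s query parameters described above (a hypothetical request against the placeholder endpoint used throughout this page):
curl \
--request GET 'http://api.example.com/_cat/component_templates?h=name,version&s=name' \
--header "Authorization: $API_KEY"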
Get anomaly detection jobs
Added in 7.7.0
Get configuration and usage information for anomaly detection jobs.
This API returns a maximum of 10,000 jobs.
If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.
IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.
Path parameters
- job_id (string, required): Identifier for the anomaly detection job.
Query parameters
- allow_no_match (boolean): Specifies what to do when the request:
  - Contains wildcard expressions and there are no jobs that match.
  - Contains the _all string or no identifiers and there are no matches.
  - Contains wildcard expressions and there are only partial matches.
  If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.
- bytes (string): The unit used to display byte values. Values are b, kb, mb, gb, tb, or pb.
- h (string | array[string]): Comma-separated list of column names to display. Supported values include:
  - assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
  - buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
  - buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
  - buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
  - buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
  - buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
  - buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
  - data.buckets (or db, dataBuckets): The number of buckets processed.
  - data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
  - data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
  - data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
  - data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
  - data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
  - data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
  - data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
  - data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
  - data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
  - data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
  - data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
  - data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
  - data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
  - data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
  - data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
  - forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
  - forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
  - forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
  - forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
  - forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
  - forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
  - forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
  - forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
  - forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
  - forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
  - forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
  - forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
  - forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
  - id: Identifier for the anomaly detection job.
  - model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
  - model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
  - model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
  - model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
  - model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
  - model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
  - model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category's definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
  - model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn't because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
  - model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
  - model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
  - model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
  - model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
  - model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
  - model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
  - model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
  - model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
  - model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
  - node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
  - node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
  - node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
  - node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
  - opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
  - state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job has irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  Valid values are any of the column names or aliases listed above.
- s (string | array[string]): Comma-separated list of column names or column aliases used to sort the response. Supported values are the same column names and aliases accepted by the h parameter, as described above.
- time (string): The unit used to display time values. Values are nanos, micros, ms, s, m, h, or d.
curl \
--request GET 'http://api.example.com/_cat/ml/anomaly_detectors/{job_id}' \
--header "Authorization: $API_KEY"
[
{
"id": "high_sum_total_sales",
"s": "closed",
"dpr": "14022",
"mb": "1.5mb"
},
{
"id": "low_request_rate",
"s": "closed",
"dpr": "1216",
"mb": "40.5kb"
},
{
"id": "response_code_rates",
"s": "closed",
"dpr": "28146",
"mb": "132.7kb"
},
{
"id": "url_scanning",
"s": "closed",
"dpr": "28146",
"mb": "501.6kb"
}
]
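As a sketch of the column parameters above, the following request asks only for the job identifier, state, processed record count, and model size, sorted by identifier (the column names come from the supported values listed above; the endpoint is the same placeholder):
curl \
--request GET 'http://api.example.com/_cat/ml/anomaly_detectors?h=id,state,data.processed_records,model.bytes&s=id' \
--header "Authorization: $API_KEY"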
Get shard recovery information
Get information about ongoing and completed shard recoveries. Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or syncing a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing. For data streams, the API returns information about the stream's backing indices.
IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index recovery API.
Path parameters
- index (string | array[string], required): A comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
Query parameters
- active_only (boolean): If true, the response only includes ongoing shard recoveries.
- bytes (string): The unit used to display byte values. Values are b, kb, mb, gb, tb, or pb.
- detailed (boolean): If true, the response includes detailed information about shard recoveries.
- index (string | array[string]): Comma-separated list or wildcard expression of index names to limit the returned information.
- h (string | array[string]): List of columns to appear in the response. Supports simple wildcards.
- s (string | array[string]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
- time (string): Unit used to display time values. Values are nanos, micros, ms, s, m, h, or d.
curl \
--request GET 'http://api.example.com/_cat/recovery/{index}' \
--header "Authorization: $API_KEY"
[
{
"index": "my-index-000001 ",
"shard": "0",
"time": "13ms",
"type": "store",
"stage": "done",
"source_host": "n/a",
"source_node": "n/a",
"target_host": "127.0.0.1",
"target_node": "node-0",
"repository": "n/a",
"snapshot": "n/a",
"files": "0",
"files_recovered": "0",
"files_percent": "100.0%",
"files_total": "13",
"bytes": "0b",
"bytes_recovered": "0b",
"bytes_percent": "100.0%",
"bytes_total": "9928b",
"translog_ops": "0",
"translog_ops_recovered": "0",
"translog_ops_percent": "100.0%"
}
]
[
{
"i": "my-index-000001",
"s": "0",
"t": "1252ms",
"ty": "peer",
"st": "done",
"shost": "192.168.1.1",
"thost": "192.168.1.1",
"f": "0",
"fp": "100.0%",
"b": "0b",
"bp": "100.0%",
}
]
[
{
"i": "my-index-000001",
"s": "0",
"t": "1978ms",
"ty": "snapshot",
"st": "done",
"rep": "my-repo",
"snap": "snap-1",
"f": "79",
"fp": "8.0%",
"b": "12086",
"bp": "9.0%"
}
]
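For instance, to watch only recoveries that are still in progress for a single index, you could combine the active_only and detailed parameters documented above (my-index-000001 is the example index from the responses shown):
curl \
--request GET 'http://api.example.com/_cat/recovery/my-index-000001?active_only=true&detailed=true' \
--header "Authorization: $API_KEY"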
Clear the archived repositories metering
Technical preview
Clear the archived repositories metering information in the cluster.
Path parameters
- node_id (string | array[string], required): Comma-separated list of node IDs or names used to limit returned information.
- max_archive_version (number, required): Specifies the maximum archive_version to be cleared from the archive.
curl \
--request DELETE 'http://api.example.com/_nodes/{node_id}/_repositories_metering/{max_archive_version}' \
--header "Authorization: $API_KEY"
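A concrete sketch with hypothetical values: clear archived metering information up to archive version 1 on a node named node-0:
curl \
--request DELETE 'http://api.example.com/_nodes/node-0/_repositories_metering/1' \
--header "Authorization: $API_KEY"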
Get node statistics
Get statistics for nodes in a cluster. By default, all stats are returned. You can limit the returned information by using metrics.
Path parameters
- metric (string | array[string], required): Limit the information returned to the specified metrics.
- index_metric (string | array[string], required): Limit the information returned for the indices metric to the specific index metrics. It can be used only if the indices (or all) metric is specified.
Query parameters
- completion_fields (string | array[string]): Comma-separated list or wildcard expressions of fields to include in fielddata and suggest statistics.
- fielddata_fields (string | array[string]): Comma-separated list or wildcard expressions of fields to include in fielddata statistics.
- fields (string | array[string]): Comma-separated list or wildcard expressions of fields to include in the statistics.
- groups (boolean): Comma-separated list of search groups to include in the search statistics.
- include_segment_file_sizes (boolean): If true, the call reports the aggregated disk usage of each one of the Lucene index files (only applies if segment stats are requested).
- level (string): Indicates whether statistics are aggregated at the cluster, index, or shard level. Values are cluster, indices, or shards.
- timeout (string): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- types (array[string]): A comma-separated list of document types for the indexing index metric.
- include_unloaded_segments (boolean): If true, the response includes information from segments that are not loaded into memory.
curl \
--request GET 'http://api.example.com/_nodes/stats/{metric}/{index_metric}' \
--header "Authorization: $API_KEY"
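For example, to restrict the response to the indices metric and its fielddata index metric, limited to a specific field (my_field is a hypothetical field name), you could request:
curl \
--request GET 'http://api.example.com/_nodes/stats/indices/fielddata?fields=my_field' \
--header "Authorization: $API_KEY"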
Set the connector sync job stats
Technical preview
Stats include: deleted_document_count, indexed_document_count, indexed_document_volume, and total_document_count.
You can also update last_seen.
This API is mainly used by the connector service for updating sync job information.
To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
Path parameters
- connector_sync_job_id (string, required): The unique identifier of the connector sync job.
Body
Required
- deleted_document_count (number, required): The number of documents the sync job deleted.
- indexed_document_count (number, required): The number of documents the sync job indexed.
- indexed_document_volume (number, required): The total size of the data (in MiB) the sync job indexed.
- last_seen (string): A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- metadata (object)
- total_document_count (number): The total number of documents in the target index after the sync job finished.
curl \
--request PUT 'http://api.example.com/_connector/_sync_job/{connector_sync_job_id}/_stats' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"deleted_document_count":42.0,"indexed_document_count":42.0,"indexed_document_volume":42.0,"last_seen":"string","metadata":{"additionalProperty1":{},"additionalProperty2":{}},"total_document_count":42.0}'
Update the connector pipeline
Beta
When you create a new connector, the configuration of an ingest pipeline is populated with default settings.
Path parameters
- connector_id (string, required): The unique identifier of the connector to be updated.
curl \
--request PUT 'http://api.example.com/_connector/{connector_id}/_pipeline' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"pipeline\": {\n \"extract_binary_content\": true,\n \"name\": \"my-connector-pipeline\",\n \"reduce_whitespace\": true,\n \"run_ml_inference\": true\n }\n}"'
{
"pipeline": {
"extract_binary_content": true,
"name": "my-connector-pipeline",
"reduce_whitespace": true,
"run_ml_inference": true
}
}
{
"result": "updated"
}
Bulk index or delete documents
Perform multiple index, create, delete, and update actions in a single request.
This reduces overhead and can greatly increase indexing speed.
If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:
- To use the create action, you must have the create_doc, create, index, or write index privilege. Data streams support only the create action.
- To use the index action, you must have the create, index, or write index privilege.
- To use the delete action, you must have the delete or write index privilege.
- To use the update action, you must have the index or write index privilege.
- To automatically create a data stream or index with a bulk API request, you must have the auto_configure, create_index, or manage index privilege.
- To make the result of a bulk operation visible to search using the refresh parameter, you must have the maintenance or manage index privilege.
Automatic data stream creation requires a matching index template with data stream enabled.
The actions are specified in the request body using a newline delimited JSON (NDJSON) structure:
action_and_meta_data\n
optional_source\n
action_and_meta_data\n
optional_source\n
....
action_and_meta_data\n
optional_source\n
The index
and create
actions expect a source on the next line and have the same semantics as the op_type
parameter in the standard index API.
A create
action fails if a document with the same ID already exists in the target.
An index
action adds or replaces a document as necessary.
NOTE: Data streams support only the create
action.
To update or delete a document in a data stream, you must target the backing index containing the document.
An update
action expects that the partial doc, upsert, and script and its options are specified on the next line.
A delete
action does not expect a source on the next line and has the same semantics as the standard delete API.
NOTE: The final line of data must end with a newline character (\n
).
Each newline character may be preceded by a carriage return (\r
).
When sending NDJSON data to the _bulk
endpoint, use a Content-Type
header of application/json
or application/x-ndjson
.
Because this format uses literal newline characters (\n
) as delimiters, make sure that the JSON actions and sources are not pretty printed.
If you provide a target in the request path, it is used for any actions that don't explicitly specify an _index
argument.
A note on the format: the idea here is to make processing as fast as possible.
As some of the actions are redirected to other shards on other nodes, only action_meta_data
is parsed on the receiving node side.
Client libraries using this protocol should strive to do something similar on the client side, and reduce buffering as much as possible.
There is no "correct" number of actions to perform in a single bulk request. Experiment with different settings to find the optimal size for your particular workload. Note that Elasticsearch limits the maximum size of an HTTP request to 100mb by default so clients must ensure that no request exceeds this size. It is not possible to index a single document that exceeds the size limit, so you must pre-process any such documents into smaller pieces before sending them to Elasticsearch. For instance, split documents into pages or chapters before indexing them, or store raw binary data in a system outside Elasticsearch and replace the raw data with a link to the external system in the documents that you send to Elasticsearch.
Client support for bulk requests
Some of the officially supported clients provide helpers to assist with bulk requests and reindexing:
- Go: Check out esutil.BulkIndexer
- Perl: Check out Search::Elasticsearch::Client::5_0::Bulk and Search::Elasticsearch::Client::5_0::Scroll
- Python: Check out elasticsearch.helpers.*
- JavaScript: Check out client.helpers.*
- .NET: Check out BulkAllObservable
- PHP: Check out bulk indexing.
Submitting bulk requests with cURL
If you're providing text file input to curl, you must use the --data-binary flag instead of plain -d. The latter doesn't preserve newlines. For example:
$ cat requests
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
$ curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --data-binary "@requests"; echo
{"took":7, "errors": false, "items":[{"index":{"_index":"test","_id":"1","_version":1,"result":"created","forced_refresh":false}}]}
Optimistic concurrency control
Each index
and delete
action within a bulk API call may include the if_seq_no
and if_primary_term
parameters in their respective action and meta data lines.
The if_seq_no
and if_primary_term
parameters control how operations are run, based on the last modification to existing documents. See Optimistic concurrency control for more details.
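A minimal sketch of such a metadata line, assuming an earlier operation on document 1 in a hypothetical index test returned _seq_no 3 and _primary_term 1; the index action is applied only if those values still match:
{ "index" : { "_index" : "test", "_id" : "1", "if_seq_no" : 3, "if_primary_term" : 1 } }
{ "field1" : "value1" }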
Versioning
Each bulk item can include the version value using the version
field.
It automatically follows the behavior of the index or delete operation based on the _version
mapping.
It also supports the version_type
.
Routing
Each bulk item can include the routing value using the routing
field.
It automatically follows the behavior of the index or delete operation based on the _routing
mapping.
NOTE: Data streams do not support custom routing unless they were created with the allow_custom_routing
setting enabled in the template.
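For example, an action's metadata line can carry the routing value directly (test and user-1 are hypothetical):
{ "index" : { "_index" : "test", "_id" : "1", "routing" : "user-1" } }
{ "field1" : "value1" }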
Wait for active shards
When making bulk calls, you can set the wait_for_active_shards
parameter to require a minimum number of shard copies to be active before starting to process the bulk request.
Refresh
Control when the changes made by this request are visible to search.
NOTE: Only the shards that receive the bulk request will be affected by refresh.
Imagine a _bulk?refresh=wait_for
request with three documents in it that happen to be routed to different shards in an index with five shards.
The request will only wait for those three shards to refresh.
The other two shards that make up the index do not participate in the _bulk
request at all.
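As a sketch, reusing the requests file from the cURL example above, a bulk call that should not return until its changes are searchable could set refresh=wait_for on the request path:
$ curl -s -H "Content-Type: application/x-ndjson" -XPOST 'localhost:9200/_bulk?refresh=wait_for' --data-binary "@requests"; echo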
Path parameters
- index (string, required): The name of the data stream, index, or index alias to perform bulk actions on.
Query parameters
- include_source_on_error (boolean): If true, the document source is included in the error message in case of parsing errors.
- list_executed_pipelines (boolean): If true, the response will include the ingest pipelines that were run for each index or create.
- pipeline (string): The pipeline identifier to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.
- refresh (string): If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, wait for a refresh to make this operation visible to search. If false, do nothing with refreshes. Values are true, false, or wait_for.
- routing (string): A custom value that is used to route operations to a specific shard.
- _source (boolean | string | array[string]): Indicates whether to return the _source field (true or false) or contains a list of fields to return.
- _source_excludes (string | array[string]): A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in the _source_includes query parameter. If the _source parameter is false, this parameter is ignored.
- _source_includes (string | array[string]): A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.
- timeout (string): The period each action waits for the following operations: automatic index creation, dynamic mapping updates, and waiting for active shards. The default is 1m (one minute), which guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur. Values are -1 or 0.
- wait_for_active_shards (number | string): The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default is 1, which waits for each primary shard to be active. Values are all or index-setting.
- require_alias (boolean): If true, the request's actions must target an index alias.
- require_data_stream (boolean): If true, the request's actions must target a data stream (existing or to be created).
curl \
--request PUT 'http://api.example.com/{index}/_bulk' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{ \"index\" : { \"_index\" : \"test\", \"_id\" : \"1\" } }\n{ \"field1\" : \"value1\" }\n{ \"delete\" : { \"_index\" : \"test\", \"_id\" : \"2\" } }\n{ \"create\" : { \"_index\" : \"test\", \"_id\" : \"3\" } }\n{ \"field1\" : \"value3\" }\n{ \"update\" : {\"_id\" : \"1\", \"_index\" : \"test\"} }\n{ \"doc\" : {\"field2\" : \"value2\"} }"'
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_id" : "2" } }
{ "create" : { "_index" : "test", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_index" : "test"} }
{ "doc" : {"field2" : "value2"} }
{ "update" : {"_id" : "1", "_index" : "index1", "retry_on_conflict" : 3} }
{ "doc" : {"field" : "value"} }
{ "update" : { "_id" : "0", "_index" : "index1", "retry_on_conflict" : 3} }
{ "script" : { "source": "ctx._source.counter += params.param1", "lang" : "painless", "params" : {"param1" : 1}}, "upsert" : {"counter" : 1}}
{ "update" : {"_id" : "2", "_index" : "index1", "retry_on_conflict" : 3} }
{ "doc" : {"field" : "value"}, "doc_as_upsert" : true }
{ "update" : {"_id" : "3", "_index" : "index1", "_source" : true} }
{ "doc" : {"field" : "value"} }
{ "update" : {"_id" : "4", "_index" : "index1"} }
{ "doc" : {"field" : "value"}, "_source": true}
{ "update": {"_id": "5", "_index": "index1"} }
{ "doc": {"my_field": "foo"} }
{ "update": {"_id": "6", "_index": "index1"} }
{ "doc": {"my_field": "foo"} }
{ "create": {"_id": "7", "_index": "index1"} }
{ "my_field": "foo" }
{ "index" : { "_index" : "my_index", "_id" : "1", "dynamic_templates": {"work_location": "geo_point"}} }
{ "field" : "value1", "work_location": "41.12,-71.34", "raw_location": "41.12,-71.34"}
{ "create" : { "_index" : "my_index", "_id" : "2", "dynamic_templates": {"home_location": "geo_point"}} }
{ "field" : "value2", "home_location": "41.12,-71.34"}
{
"took": 30,
"errors": false,
"items": [
{
"index": {
"_index": "test",
"_id": "1",
"_version": 1,
"result": "created",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"status": 201,
"_seq_no" : 0,
"_primary_term": 1
}
},
{
"delete": {
"_index": "test",
"_id": "2",
"_version": 1,
"result": "not_found",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"status": 404,
"_seq_no" : 1,
"_primary_term" : 2
}
},
{
"create": {
"_index": "test",
"_id": "3",
"_version": 1,
"result": "created",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"status": 201,
"_seq_no" : 2,
"_primary_term" : 3
}
},
{
"update": {
"_index": "test",
"_id": "1",
"_version": 2,
"result": "updated",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"status": 200,
"_seq_no" : 3,
"_primary_term" : 4
}
}
]
}
{
"took": 486,
"errors": true,
"items": [
{
"update": {
"_index": "index1",
"_id": "5",
"status": 404,
"error": {
"type": "document_missing_exception",
"reason": "[5]: document missing",
"index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
"shard": "0",
"index": "index1"
}
}
},
{
"update": {
"_index": "index1",
"_id": "6",
"status": 404,
"error": {
"type": "document_missing_exception",
"reason": "[6]: document missing",
"index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
"shard": "0",
"index": "index1"
}
}
},
{
"create": {
"_index": "index1",
"_id": "7",
"_version": 1,
"result": "created",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"_seq_no": 0,
"_primary_term": 1,
"status": 201
}
}
]
}
{
"items": [
{
"update": {
"error": {
"type": "document_missing_exception",
"reason": "[5]: document missing",
"index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
"shard": "0",
"index": "index1"
}
}
},
{
"update": {
"error": {
"type": "document_missing_exception",
"reason": "[6]: document missing",
"index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
"shard": "0",
"index": "index1"
}
}
}
]
}
Get the async EQL status
Added in 7.9.0
Get the current status for an async EQL search or a stored synchronous EQL search without returning results.
Path parameters
- id (string, required): Identifier for the search.
curl \
--request GET 'http://api.example.com/_eql/search/status/{id}' \
--header "Authorization: $API_KEY"
{
"id": "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
"is_running" : true,
"is_partial" : true,
"start_time_in_millis" : 1611690235000,
"expiration_time_in_millis" : 1611690295000
}
Stop async ES|QL query
Added in 8.18.0
This API interrupts the query execution and returns the results so far. If the Elasticsearch security features are enabled, only the user who first submitted the ES|QL query can stop it.
Path parameters
- id (string, required): The unique identifier of the query. A query ID is provided in the ES|QL async query API response for a query that does not complete in the designated time. A query ID is also provided when the request was submitted with the keep_on_completion parameter set to true.
Query parameters
- drop_null_columns (boolean): Indicates whether columns that are entirely null will be removed from the columns and values portion of the results. If true, the response will include an extra section under the name all_columns which has the name of all the columns.
curl \
--request POST 'http://api.example.com/_query/async/{id}/stop' \
--header "Authorization: $API_KEY"
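For example, to stop a running query and drop columns that contain only null values, you could add the drop_null_columns parameter documented above (the {id} placeholder stands for a real query identifier):
curl \
--request POST 'http://api.example.com/_query/async/{id}/stop?drop_null_columns=true' \
--header "Authorization: $API_KEY"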
Run a Fleet search
Technical preview
The purpose of the Fleet search API is to provide an API where the search is run only after the provided checkpoint has been processed and is visible for searches inside Elasticsearch.
Path parameters
- index (string, required): A single target to search. If the target is an index alias, it must resolve to a single index.
Query parameters
- allow_no_indices (boolean)
- analyzer (string)
- analyze_wildcard (boolean)
- batched_reduce_size (number)
- ccs_minimize_roundtrips (boolean)
- default_operator (string): Values are and, AND, or, or OR.
- df (string)
- docvalue_fields (string | array[string])
- expand_wildcards (string | array[string]): Supported values include: all (match any data stream or index, including hidden ones), open (match open, non-hidden indices; also matches any non-hidden data stream), closed (match closed, non-hidden indices; also matches any non-hidden data stream; data streams cannot be closed), hidden (match hidden data streams and hidden indices; must be combined with open, closed, or both), and none (wildcard expressions are not accepted). Values are all, open, closed, hidden, or none.
- explain (boolean)
- ignore_throttled (boolean)
- lenient (boolean)
- preference (string)
- pre_filter_shard_size (number)
- request_cache (boolean)
- routing (string)
- scroll (string): A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value. Values are -1 or 0.
- search_type (string): Supported values include: query_then_fetch (documents are scored using local term and document frequencies for the shard; this is usually faster but less accurate) and dfs_query_then_fetch (documents are scored using global term and document frequencies across all shards; this is usually slower but more accurate). Values are query_then_fetch or dfs_query_then_fetch.
- stats (array[string])
- stored_fields (string | array[string])
- suggest_field (string): Specifies which field to use for suggestions.
- suggest_mode (string): Supported values include: missing (only generate suggestions for terms that are not in the shard), popular (only suggest terms that occur in more docs on the shard than the original term), and always (suggest any matching suggestions based on terms in the suggest text). Values are missing, popular, or always.
- suggest_size (number)
- suggest_text (string): The source text for which the suggestions should be returned.
- terminate_after (number)
- timeout (string): A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value. Values are -1 or 0.
- track_total_hits (boolean | number): Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. Defaults to 10,000 hits.
- track_scores (boolean)
- typed_keys (boolean)
- rest_total_hits_as_int (boolean)
- version (boolean)
- _source (boolean | string | array[string]): Defines how to fetch a source. Fetching can be disabled entirely, or the source can be filtered. Used as a query parameter along with the _source_includes and _source_excludes parameters.
- _source_excludes (string | array[string])
- _source_includes (string | array[string])
- seq_no_primary_term (boolean)
- q (string)
- size (number)
- from (number)
- sort (string | array[string])
- wait_for_checkpoints (array[number]): A comma-separated list of checkpoints. When configured, the search API will only be executed on a shard after the relevant checkpoint has become visible for search. Defaults to an empty list, which causes Elasticsearch to immediately execute the search.
- allow_partial_search_results (boolean): If true, returns partial results if there are shard request timeouts or shard failures. If false, returns an error with no partial results. Defaults to the configured cluster setting search.default_allow_partial_results, which is true by default.
Body
-
aggregations
object -
collapse
object External documentation -
explain
boolean If true, returns detailed information about score computation as part of a hit.
-
ext
object Configuration of search extensions defined by Elasticsearch plugins.
-
from
number Starting document offset. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.
-
highlight
object -
track_total_hits
boolean | number Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. Defaults to 10,000 hits.
-
indices_boost
array[object] Boosts the _score of documents from specified indices.
-
docvalue_fields
array[object] Array of wildcard (*) patterns. The request returns doc values for field names matching these patterns in the hits.fields property of the response.
-
min_score
number Minimum _score for matching documents. Documents with a lower _score are not included in search results and results collected by aggregations.
-
post_filter
object An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.
External documentation -
profile
boolean -
query
object An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.
External documentation -
rescore
object | array[object] -
script_fields
object Retrieve a script evaluation (based on different fields) for each hit.
-
search_after
array[number | string | boolean | null] A field value.
-
size
number The number of hits to return. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.
-
slice
object -
_source
boolean | object Defines how to fetch a source. Fetching can be disabled entirely, or the source can be filtered.
-
fields
array[object] Array of wildcard (*) patterns. The request returns values for field names matching these patterns in the hits.fields property of the response.
-
suggest
object -
terminate_after
number Maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. Defaults to 0, which does not terminate query execution early.
-
timeout
string Specifies the period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.
-
track_scores
boolean If true, calculate and return document scores, even if the scores are not used for sorting.
-
version
boolean If true, returns document version as part of a hit.
-
seq_no_primary_term
boolean If true, returns sequence number and primary term of the last modification of each hit. See Optimistic concurrency control.
-
stored_fields
string | array[string] -
pit
object -
runtime_mappings
object -
stats
array[string] Stats groups to associate with the search. Each group maintains a statistics aggregation for its associated searches. You can retrieve these stats using the indices stats API.
curl \
--request POST 'http://api.example.com/{index}/_fleet/_fleet_search' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"aggregations":{},"collapse":{},"explain":true,"ext":{"additionalProperty1":{},"additionalProperty2":{}},"from":42.0,"highlight":{"":"plain","boundary_chars":"string","boundary_max_scan":42.0,"boundary_scanner":"chars","boundary_scanner_locale":"string","force_source":true,"fragmenter":"simple","fragment_size":42.0,"highlight_filter":true,"highlight_query":{},"max_fragment_length":42.0,"max_analyzed_offset":42.0,"no_match_size":42.0,"number_of_fragments":42.0,"options":{"additionalProperty1":{},"additionalProperty2":{}},"order":"score","phrase_limit":42.0,"post_tags":["string"],"pre_tags":["string"],"require_field_match":true,"tags_schema":"styled","encoder":"default","fields":{}},"track_total_hits":true,"indices_boost":[{"additionalProperty1":42.0,"additionalProperty2":42.0}],"docvalue_fields":[{"field":"string","format":"string","include_unmapped":true}],"min_score":42.0,"post_filter":{},"profile":true,"query":{},"rescore":{"window_size":42.0,"query":{"rescore_query":{},"query_weight":42.0,"rescore_query_weight":42.0,"score_mode":"avg"},"learning_to_rank":{"model_id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}}}},"script_fields":{"additionalProperty1":{"script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"ignore_failure":true},"additionalProperty2":{"script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"ignore_failure":true}},"search_after":[42.0],"size":42.0,"slice":{"field":"string","id":"string","max":42.0},"":true,"fields":[{"field":"string","format":"string","include_unmapped":true}],"suggest":{"text":"string"},"terminate_after":42.0,"timeout":"string","track_scores":true,"version":true,"seq_no_primary_term":true,"stored_fields":"string","pit":{"id":"string","keep_alive":"string"},"runtime_mappings":{"additionalProperty1":{"fields":{"additionalProperty1":{"type":"boolean"},"additionalProperty2":{"type":"boolean"}},"fetch_fields":[{"field":"string","format":"string"}],"format":"string","input_field":"string","target_field":"string","target_index":"string","script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"type":"boolean"},"additionalProperty2":{"fields":{"additionalProperty1":{"type":"boolean"},"additionalProperty2":{"type":"boolean"}},"fetch_fields":[{"field":"string","format":"string"}],"format":"string","input_field":"string","target_field":"string","target_index":"string","script":{"":"painless","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"type":"boolean"}},"stats":["string"]}'
Flush data streams or indices
Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.
After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.
It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.
Path parameters
-
index
string | array[string] Required Comma-separated list of data streams, indices, and aliases to flush. Supports wildcards (
*
). To flush all data streams and indices, omit this parameter or use*
or_all
.
Query parameters
-
allow_no_indices
boolean If
false
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
string | array[string] Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as
open,hidden
. Valid values are:all
,open
,closed
,hidden
,none
.Supported values include:
all
: Match any data stream or index, including hidden ones.open
: Match open, non-hidden indices. Also matches any non-hidden data stream.closed
: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.hidden
: Match hidden data streams and hidden indices. Must be combined withopen
,closed
, orboth
.none
: Wildcard expressions are not accepted.
Values are
all
,open
,closed
,hidden
, ornone
. -
force
boolean If
true
, the request forces a flush even if there are no changes to commit to the index. -
wait_if_ongoing
boolean If
true
, the flush operation blocks until it can run when another flush operation is already in progress. If false
, Elasticsearch returns an error if you request a flush when another flush operation is running.
curl \
--request POST 'http://api.example.com/{index}/_flush' \
--header "Authorization: $API_KEY"
Refresh an index
A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.
By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds.
You can change this default interval with the index.refresh_interval
setting.
Refresh requests are synchronous and do not return a response until the refresh operation completes.
Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.
If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's refresh=wait_for
query parameter option.
This option ensures the indexing operation waits for a periodic refresh before running the search.
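As a sketch of that workflow (the index name, document ID, and field are placeholders), the refresh=wait_for parameter is passed on the indexing call itself rather than in a separate refresh request:
curl \
--request PUT 'http://api.example.com/my-index-000001/_doc/1?refresh=wait_for' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"message":"hello world"}'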
Query parameters
-
allow_no_indices
boolean If
false
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
string | array[string] Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as
open,hidden
. Valid values are:all
,open
,closed
,hidden
,none
.Supported values include:
all
: Match any data stream or index, including hidden ones.open
: Match open, non-hidden indices. Also matches any non-hidden data stream.closed
: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.hidden
: Match hidden data streams and hidden indices. Must be combined withopen
,closed
, orboth
.none
: Wildcard expressions are not accepted.
Values are
all
,open
,closed
,hidden
, ornone
.
curl \
--request POST 'http://api.example.com/_refresh' \
--header "Authorization: $API_KEY"
Get lifecycle policies
Query parameters
-
master_timeout
string Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
Values are
-1
or0
. -
timeout
string Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
Values are
-1
or0
.
curl \
--request GET 'http://api.example.com/_ilm/policy' \
--header "Authorization: $API_KEY"
{
"my_policy": {
"version": 1,
"modified_date": 82392349,
"policy": {
"phases": {
"warm": {
"min_age": "10d",
"actions": {
"forcemerge": {
"max_num_segments": 1
}
}
},
"delete": {
"min_age": "30d",
"actions": {
"delete": {
"delete_searchable_snapshot": true
}
}
}
}
},
"in_use_by" : {
"indices" : [],
"data_streams" : [],
"composable_templates" : []
}
}
}
Remove policies from an index
Added in 6.6.0
Remove the assigned lifecycle policies from an index or a data stream's backing indices. It also stops managing the indices.
Path parameters
-
index
string Required The name of the index from which to remove the policy.
curl \
--request POST 'http://api.example.com/{index}/_ilm/remove' \
--header "Authorization: $API_KEY"
{
"has_failures" : false,
"failed_indexes" : []
}
Create an ELSER inference endpoint
Deprecated
Added in 8.11.0
Create an inference endpoint to perform an inference task with the elser
service.
You can also deploy ELSER by using the Elasticsearch inference integration.
Your Elasticsearch deployment contains a preconfigured ELSER inference endpoint; you only need to create an endpoint using the API if you want to customize the settings.
The API request will automatically download and deploy the ELSER model if it isn't already downloaded.
You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If you are using the Python client, you can set the timeout parameter to a higher value.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
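For example, a minimal deployment check might look like the following sketch; the model ID .elser_model_2 is an assumption, so substitute the model ID reported for your deployment:
curl \
--request GET 'http://api.example.com/_ml/trained_models/.elser_model_2/_stats' \
--header "Authorization: $API_KEY"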
Path parameters
-
task_type
string Required The type of the inference task that the model will perform.
Value is
sparse_embedding
. -
elser_inference_id
string Required The unique identifier of the inference endpoint.
Body
-
chunking_settings
object -
service
string Required Value is
elser
. -
service_settings
object Required
curl \
--request PUT 'http://api.example.com/_inference/{task_type}/{elser_inference_id}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"service\": \"elser\",\n \"service_settings\": {\n \"num_allocations\": 1,\n \"num_threads\": 1\n }\n}"'
{
"service": "elser",
"service_settings": {
"num_allocations": 1,
"num_threads": 1
}
}
{
"service": "elser",
"service_settings": {
"adaptive_allocations": {
"enabled": true,
"min_number_of_allocations": 3,
"max_number_of_allocations": 10
},
"num_threads": 1
}
}
{
"inference_id": "my-elser-model",
"task_type": "sparse_embedding",
"service": "elser",
"service_settings": {
"num_allocations": 1,
"num_threads": 1
},
"task_settings": {}
}
Create a VoyageAI inference endpoint
Added in 8.19.0
Create an inference endpoint to perform an inference task with the voyageai
service.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
Path parameters
-
task_type
string Required The type of the inference task that the model will perform.
Values are
text_embedding
orrerank
. -
voyageai_inference_id
string Required The unique identifier of the inference endpoint.
Body
-
chunking_settings
object -
service
string Required Value is
voyageai
. -
service_settings
object Required -
task_settings
object
curl \
--request PUT 'http://api.example.com/_inference/{task_type}/{voyageai_inference_id}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"service\": \"voyageai\",\n \"service_settings\": {\n \"model_id\": \"voyage-3-large\",\n \"dimensions\": 512\n }\n}"'
{
"service": "voyageai",
"service_settings": {
"model_id": "voyage-3-large",
"dimensions": 512
}
}
{
"service": "voyageai",
"service_settings": {
"model_id": "rerank-2"
}
}
Update the license
You can update your license at runtime without shutting down your nodes. License updates take effect immediately. If the license you are installing does not support all of the features that were available with your previous license, however, you are notified in the response. You must then re-submit the API request with the acknowledge parameter set to true.
NOTE: If Elasticsearch security features are enabled and you are installing a gold or higher license, you must enable TLS on the transport networking layer before you install the license. If the operator privileges feature is enabled, only operator users can use this API.
Query parameters
-
acknowledge
boolean Specifies whether you acknowledge the license changes.
-
master_timeout
string The period to wait for a connection to the master node.
Values are
-1
or0
. -
timeout
string The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
Values are
-1
or0
.
curl \
--request PUT 'http://api.example.com/_license' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"licenses\": [\n {\n \"uid\":\"893361dc-9749-4997-93cb-802e3d7fa4xx\",\n \"type\":\"basic\",\n \"issue_date_in_millis\":1411948800000,\n \"expiry_date_in_millis\":1914278399999,\n \"max_nodes\":1,\n \"issued_to\":\"issuedTo\",\n \"issuer\":\"issuer\",\n \"signature\":\"xx\"\n }\n ]\n}"'
{
"licenses": [
{
"uid":"893361dc-9749-4997-93cb-802e3d7fa4xx",
"type":"basic",
"issue_date_in_millis":1411948800000,
"expiry_date_in_millis":1914278399999,
"max_nodes":1,
"issued_to":"issuedTo",
"issuer":"issuer",
"signature":"xx"
}
]
}
{
"acknowledged": false,
"license_status": "valid",
"acknowledge": {
"message": "\"\"\"This license update requires acknowledgement. To acknowledge the license, please read the following messages and update the license again, this time with the \"acknowledge=true\" parameter:\"\"\"",
"watcher": [
"Watcher will be disabled"
],
"logstash": [
"Logstash will no longer poll for centrally-managed pipelines"
],
"security": [
"The following X-Pack security functionality will be disabled ..."
]
}
}
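If the response contains acknowledgement messages like the example above, re-submit the same request with the acknowledge parameter set to true; a sketch, assuming the license body has been saved to a local license.json file:
curl \
--request PUT 'http://api.example.com/_license?acknowledge=true' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data @license.json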
Update the license
You can update your license at runtime without shutting down your nodes. License updates take effect immediately. If the license you are installing does not support all of the features that were available with your previous license, however, you are notified in the response. You must then re-submit the API request with the acknowledge parameter set to true.
NOTE: If Elasticsearch security features are enabled and you are installing a gold or higher license, you must enable TLS on the transport networking layer before you install the license. If the operator privileges feature is enabled, only operator users can use this API.
Query parameters
-
acknowledge
boolean Specifies whether you acknowledge the license changes.
-
master_timeout
string The period to wait for a connection to the master node.
Values are
-1
or0
. -
timeout
string The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
Values are
-1
or0
.
curl \
--request POST 'http://api.example.com/_license' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"licenses\": [\n {\n \"uid\":\"893361dc-9749-4997-93cb-802e3d7fa4xx\",\n \"type\":\"basic\",\n \"issue_date_in_millis\":1411948800000,\n \"expiry_date_in_millis\":1914278399999,\n \"max_nodes\":1,\n \"issued_to\":\"issuedTo\",\n \"issuer\":\"issuer\",\n \"signature\":\"xx\"\n }\n ]\n}"'
{
"licenses": [
{
"uid":"893361dc-9749-4997-93cb-802e3d7fa4xx",
"type":"basic",
"issue_date_in_millis":1411948800000,
"expiry_date_in_millis":1914278399999,
"max_nodes":1,
"issued_to":"issuedTo",
"issuer":"issuer",
"signature":"xx"
}
]
}
{
"acknowledged": false,
"license_status": "valid",
"acknowledge": {
"message": "\"\"\"This license update requires acknowledgement. To acknowledge the license, please read the following messages and update the license again, this time with the \"acknowledge=true\" parameter:\"\"\"",
"watcher": [
"Watcher will be disabled"
],
"logstash": [
"Logstash will no longer poll for centrally-managed pipelines"
],
"security": [
"The following X-Pack security functionality will be disabled ..."
]
}
}
Delete forecasts from a job
Added in 6.5.0
By default, forecasts are retained for 14 days. You can specify a
different retention period with the expires_in
parameter in the forecast
jobs API. The delete forecast API enables you to delete one or more
forecasts before they expire.
Path parameters
-
job_id
string Required Identifier for the anomaly detection job.
Query parameters
-
allow_no_forecasts
boolean Specifies whether an error occurs when there are no forecasts. In particular, if this parameter is set to
false
and there are no forecasts associated with the job, attempts to delete all forecasts return an error. -
timeout
string Specifies the period of time to wait for the completion of the delete operation. When this period of time elapses, the API fails and returns an error.
Values are
-1
or0
.
curl \
--request DELETE 'http://api.example.com/_ml/anomaly_detectors/{job_id}/_forecast' \
--header "Authorization: $API_KEY"
{
"acknowledged": true
}
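As a hypothetical illustration of the query parameters (my-job is a placeholder job identifier), the following request fails if the job has no forecasts and gives up after 30 seconds:
curl \
--request DELETE 'http://api.example.com/_ml/anomaly_detectors/my-job/_forecast?allow_no_forecasts=false&timeout=30s' \
--header "Authorization: $API_KEY"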
Prepare a node to be shut down
Added in 7.13.0
NOTE: This feature is designed for indirect use by Elastic Cloud, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
If you specify a node that is offline, it will be prepared for shut down when it rejoins the cluster.
If the operator privileges feature is enabled, you must be an operator to use this API.
The API migrates ongoing tasks and index shards to other nodes as needed to prepare a node to be restarted or shut down and removed from the cluster. This ensures that Elasticsearch can be stopped safely with minimal disruption to the cluster.
You must specify the type of shutdown: restart
, remove
, or replace
.
If a node is already being prepared for shutdown, you can use this API to change the shutdown type.
IMPORTANT: This API does NOT terminate the Elasticsearch process. Monitor the node shutdown status to determine when it is safe to stop Elasticsearch.
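To monitor that status, you can poll the get shutdown status API; a minimal sketch for a single node:
curl \
--request GET 'http://api.example.com/_nodes/{node_id}/shutdown' \
--header "Authorization: $API_KEY"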
Path parameters
-
node_id
string Required The node identifier. This parameter is not validated against the cluster's active nodes. This enables you to register a node for shut down while it is offline. No error is thrown if you specify an invalid node ID.
Query parameters
-
master_timeout
string The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
Values are
nanos
,micros
,ms
,s
,m
,h
, ord
. -
timeout
string The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
Values are
nanos
,micros
,ms
,s
,m
,h
, ord
.
Body
Required
-
type
string Required Values are
restart
,remove
, orreplace
. -
reason
string Required A human-readable reason that the node is being shut down. This field provides information for other cluster operators; it does not affect the shut down process.
-
allocation_delay
string Only valid if type is restart. Controls how long Elasticsearch will wait for the node to restart and join the cluster before reassigning its shards to other nodes. This works the same as delaying allocation with the index.unassigned.node_left.delayed_timeout setting. If you specify both a restart allocation delay and an index-level allocation delay, the longer of the two is used.
-
target_node_name
string Only valid if type is replace. Specifies the name of the node that is replacing the node being shut down. Shards from the shut-down node are only allowed to be allocated to the target node, and no other data will be allocated to the target node. During relocation of data, certain allocation rules are ignored, such as disk watermarks or user attribute filtering rules.
curl \
--request PUT 'http://api.example.com/_nodes/{node_id}/shutdown' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"type\": \"restart\",\n \"reason\": \"Demonstrating how the node shutdown API works\",\n \"allocation_delay\": \"20m\"\n}"'
{
"type": "restart",
"reason": "Demonstrating how the node shutdown API works",
"allocation_delay": "20m"
}
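A hypothetical request body for a replace-type shutdown, where node-2 is a placeholder for the name of the replacement node:
{
"type": "replace",
"reason": "Decommissioning this host in favor of node-2",
"target_node_name": "node-2"
}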
Run a script
Technical preview
Runs a script and returns a result. Use this API to build and test scripts, such as when defining a script for a runtime field. This API requires very few dependencies and is especially useful if you don't have permissions to write documents on a cluster.
The API uses several contexts, which control how scripts are run, what variables are available at runtime, and what the return type is.
Each context requires a script, but additional parameters depend on the context you're using for that script.
Body
-
context
string Values are
painless_test
,filter
,score
,boolean_field
,date_field
,double_field
,geo_point_field
,ip_field
,keyword_field
,long_field
, orcomposite_field
. -
context_setup
object -
script
object
curl \
--request POST 'http://api.example.com/_scripts/painless/_execute' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"script\": {\n \"source\": \"params.count / params.total\",\n \"params\": {\n \"count\": 100.0,\n \"total\": 1000.0\n }\n }\n}"'
{
"script": {
"source": "params.count / params.total",
"params": {
"count": 100.0,
"total": 1000.0
}
}
}
{
"script": {
"source": "doc['field'].value.length() <= params.max_length",
"params": {
"max_length": 4
}
},
"context": "filter",
"context_setup": {
"index": "my-index-000001",
"document": {
"field": "four"
}
}
}
{
"script": {
"source": "doc['rank'].value / params.max_rank",
"params": {
"max_rank": 5.0
}
},
"context": "score",
"context_setup": {
"index": "my-index-000001",
"document": {
"rank": 4
}
}
}
{
"result": "0.1"
}
{
"result": true
}
{
"result": 0.8
}
Render a search template
Render a search template as a search request body.
curl \
--request POST 'http://api.example.com/_render/template' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"id\": \"my-search-template\",\n \"params\": {\n \"query_string\": \"hello world\",\n \"from\": 20,\n \"size\": 10\n }\n}"'
{
"id": "my-search-template",
"params": {
"query_string": "hello world",
"from": 20,
"size": 10
}
}
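You can also render an inline template by supplying a source instead of a stored template ID; a sketch, assuming documents with a message field (the template and parameter names are illustrative):
{
"source": "{ \"query\": { \"match\": { \"message\": \"{{query_string}}\" } } }",
"params": {
"query_string": "hello world"
}
}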
Clear the cache
Technical preview
Clear indices and data streams from the shared cache for partially mounted indices.
Query parameters
-
expand_wildcards
string | array[string] Whether to expand wildcard expressions to concrete indices that are open, closed, or both.
Supported values include:
all
: Match any data stream or index, including hidden ones.open
: Match open, non-hidden indices. Also matches any non-hidden data stream.closed
: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.hidden
: Match hidden data streams and hidden indices. Must be combined withopen
,closed
, orboth
.none
: Wildcard expressions are not accepted.
Values are
all
,open
,closed
,hidden
, ornone
. -
allow_no_indices
boolean Whether to ignore a wildcard indices expression that resolves into no concrete indices. (This includes the
_all
string or when no indices have been specified.)
curl \
--request POST 'http://api.example.com/_searchable_snapshots/cache/clear' \
--header "Authorization: $API_KEY"
Get searchable snapshot statistics
Added in 7.10.0
Path parameters
-
index
string | array[string] Required A comma-separated list of data streams and indices to retrieve statistics for.
Query parameters
-
level
string Return stats aggregated at the cluster, index, or shard level.
Values are
cluster
,indices
, orshards
.
curl \
--request GET 'http://api.example.com/{index}/_searchable_snapshots/stats' \
--header "Authorization: $API_KEY"
Check user privileges
Added in 6.4.0
Determine whether the specified user has a specified list of privileges. All users can use this API, but only to determine their own privileges. To check the privileges of other users, you must use the run as feature.
Path parameters
-
user
string Required Username
Body
Required
-
application
array[object] -
cluster
array[string] A list of the cluster privileges that you want to check.
-
index
array[object]
curl \
--request GET 'http://api.example.com/_security/user/{user}/_has_privileges' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"cluster\": [ \"monitor\", \"manage\" ],\n \"index\" : [\n {\n \"names\": [ \"suppliers\", \"products\" ],\n \"privileges\": [ \"read\" ]\n },\n {\n \"names\": [ \"inventory\" ],\n \"privileges\" : [ \"read\", \"write\" ]\n }\n ],\n \"application\": [\n {\n \"application\": \"inventory_manager\",\n \"privileges\" : [ \"read\", \"data:write/inventory\" ],\n \"resources\" : [ \"product/1852563\" ]\n }\n ]\n}"'
{
"cluster": [ "monitor", "manage" ],
"index" : [
{
"names": [ "suppliers", "products" ],
"privileges": [ "read" ]
},
{
"names": [ "inventory" ],
"privileges" : [ "read", "write" ]
}
],
"application": [
{
"application": "inventory_manager",
"privileges" : [ "read", "data:write/inventory" ],
"resources" : [ "product/1852563" ]
}
]
}
{
"username": "rdeniro",
"has_all_requested" : false,
"cluster" : {
"monitor" : true,
"manage" : false
},
"index" : {
"suppliers" : {
"read" : true
},
"products" : {
"read" : true
},
"inventory" : {
"read" : true,
"write" : false
}
},
"application" : {
"inventory_manager" : {
"product/1852563" : {
"read": false,
"data:write/inventory": false
}
}
}
}
Stop snapshot lifecycle management
Added in 7.6.0
Stop all snapshot lifecycle management (SLM) operations and the SLM plugin. This API is useful when you are performing maintenance on a cluster and need to prevent SLM from performing any actions on your data streams or indices. Stopping SLM does not stop any snapshots that are in progress. You can manually trigger snapshots with the run snapshot lifecycle policy API even if SLM is stopped.
The API returns a response as soon as the request is acknowledged, but the plugin might continue to run until in-progress operations complete and it can be safely stopped. Use the get snapshot lifecycle management status API to see if SLM is running.
Query parameters
-
master_timeout
string The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never time out, set it to
-1
.Values are
-1
or0
. -
timeout
string The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never time out, set it to
-1
.Values are
-1
or0
.
curl \
--request POST 'http://api.example.com/_slm/stop' \
--header "Authorization: $API_KEY"
Translate SQL into Elasticsearch queries
Added in 6.3.0
Translate an SQL search into a search API request containing Query DSL.
It accepts the same request body parameters as the SQL search API, excluding cursor
.
Body
Required
-
fetch_size
number The maximum number of rows (or entries) to return in one response.
-
filter
object An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.
External documentation -
query
string Required The SQL query to run.
-
time_zone
string
curl \
--request GET 'http://api.example.com/_sql/translate' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"query\": \"SELECT * FROM library ORDER BY page_count DESC\",\n \"fetch_size\": 10\n}"'
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 10
}
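A hypothetical request that also exercises the filter, fetch_size, and time_zone fields, assuming the library index from the example above has a page_count field:
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"filter": {
"range": {
"page_count": {
"gte": 100
}
}
},
"fetch_size": 5,
"time_zone": "Europe/Paris"
}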