Get shard allocation information Generally available

GET /_cat/allocation/{node_id}

All methods and paths for this operation:

GET /_cat/allocation

GET /_cat/allocation/{node_id}

Get a snapshot of the number of shards allocated to each data node and their disk space.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.

Required authorization

  • Cluster privileges: monitor

Path parameters

  • node_id string | array[string]

    A comma-separated list of node identifiers or names used to limit the returned information.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards. See the example after this parameter list.

    Supported values include:

    • shards (or s): The number of shards on the node.
    • shards.undesired: The number of shards scheduled to be moved elsewhere in the cluster.
    • write_load.forecast (or wlf, writeLoadForecast): The sum of index write load forecasts.
    • disk.indices.forecast (or dif, diskIndicesForecast): The sum of shard size forecasts.
    • disk.indices (or di, diskIndices): The disk space used by Elasticsearch indices.
    • disk.used (or du, diskUsed): The total disk space used on the node.
    • disk.avail (or da, diskAvail): The available disk space on the node.
    • disk.total (or dt, diskTotal): The total disk capacity of all volumes on the node.
    • disk.percent (or dp, diskPercent): The percentage of disk space used on the node.
    • host (or h): The host of the node.
    • ip: The IP address of the node.
    • node (or n): The name of the node.
    • node.role (or r, role, nodeRole): The roles assigned to the node.

    Values are shards, s, shards.undesired, write_load.forecast, wlf, writeLoadForecast, disk.indices.forecast, dif, diskIndicesForecast, disk.indices, di, diskIndices, disk.used, du, diskUsed, disk.avail, da, diskAvail, disk.total, dt, diskTotal, disk.percent, dp, diskPercent, host, h, ip, node, n, node.role, r, role, or nodeRole.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.
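
For example, the following request limits the output to a few columns and sorts by disk usage. This is a minimal sketch using the Python client; the URL and API key are placeholders, not values from this documentation.

# A minimal sketch, assuming the Elasticsearch Python client is installed
# and the cluster is reachable; connection details below are placeholders.
from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="...")

resp = client.cat.allocation(
    h="node,shards,disk.percent",  # only these columns, named as in the list above
    s="disk.percent:desc",         # sort by disk usage, highest first
    bytes="gb",                    # display byte values in gigabytes
    format="json",                 # return a JSON array instead of a text table
)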

Responses

GET /_cat/allocation/{node_id}
GET /_cat/allocation?v=true&format=json
resp = client.cat.allocation(
    v=True,
    format="json",
)
const response = await client.cat.allocation({
  v: "true",
  format: "json",
});
response = client.cat.allocation(
  v: "true",
  format: "json"
)
$resp = $client->cat()->allocation([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/allocation?v=true&format=json"
client.cat().allocation();
Response examples (200)
A successful response from `GET /_cat/allocation?v=true&format=json`. It shows a single shard is allocated to the one node available.
[
  {
    "shards": "1",
    "shards.undesired": "0",
    "write_load.forecast": "0.0",
    "disk.indices.forecast": "260b",
    "disk.indices": "260b",
    "disk.used": "47.3gb",
    "disk.avail": "43.4gb",
    "disk.total": "100.7gb",
    "disk.percent": "46",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "CSUXak2",
    "node.role": "himrst"
  }
]
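
Because format=json returns an array of per-node objects, the output is easy to post-process in a quick script. A minimal sketch, assuming the resp object from the Python example above; the 85 percent warning threshold is an arbitrary illustrative value, not part of the API.

# Flag nodes running low on disk. The keys match the JSON response above;
# disk.percent can be null for nodes that hold no data, so guard for that.
for row in resp.body:
    if row["disk.percent"] is not None and int(row["disk.percent"]) >= 85:
        print(f"node {row['node']} is at {row['disk.percent']}% disk usage "
              f"({row['disk.used']} used of {row['disk.total']})")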

Get anomaly detection jobs Generally available; Added in 7.7.0

GET /_cat/ml/anomaly_detectors/{job_id}

All methods and paths for this operation:

GET /_cat/ml/anomaly_detectors

GET /_cat/ml/anomaly_detectors/{job_id}

Get configuration and usage information for anomaly detection jobs. This API returns a maximum of 10,000 jobs. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no jobs that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    Comma-separated list of column names to display. See the example after this parameter list.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening.
      • closed: The job finished successfully with its model state persisted. The job must be opened before it can accept further data.
      • closing: The job close action is in progress and has not yet completed. A closing job cannot accept further data.
      • failed: The job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job has irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened.
      • opened: The job is available to receive and process data.
      • opening: The job open action is in progress and has not yet completed.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening.
      • closed: The job finished successfully with its model state persisted. The job must be opened before it can accept further data.
      • closing: The job close action is in progress and has not yet completed. A closing job cannot accept further data.
      • failed: The job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job has irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened.
      • opened: The job is available to receive and process data.
      • opening: The job open action is in progress and has not yet completed.
  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.
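
For example, the following request combines several of these parameters. This is a minimal sketch using the Python client, assuming a configured client object as in the examples below; the wildcard job pattern is illustrative.

# A minimal sketch: select a few columns by their short aliases from the
# list above, sort by model memory, and tolerate a non-matching wildcard.
resp = client.cat.ml_jobs(
    job_id="high_*",       # wildcard job identifier (illustrative)
    allow_no_match=True,   # return an empty array instead of a 404 on no match
    h="id,s,mb,dpr",       # job id, state, model bytes, processed records
    s="mb:desc",           # sort by model memory, largest first
    format="json",
)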

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      The anomaly detection job identifier.

    • state string

      The status of the anomaly detection job.

      Supported values include:

      • closing: The job close action is in progress and has not yet completed. A closing job cannot accept further data.
      • closed: The job finished successfully with its model state persisted. The job must be opened before it can accept further data.
      • opened: The job is available to receive and process data.
      • failed: The job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job had irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened.
      • opening: The job open action is in progress and has not yet completed.

      Values are closing, closed, opened, failed, or opening.

    • opened_time string

      For open jobs only, the amount of time the job has been opened.

    • assignment_explanation string

      For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.

    • data.processed_records string

      The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed_record_count is the number of aggregation results processed, not the number of Elasticsearch documents.

    • data.processed_fields string

      The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.

    • data.input_bytes number | string

      The number of bytes of input data posted to the anomaly detection job.

    • data.input_records string

      The number of input documents posted to the anomaly detection job.

    • data.input_fields string

      The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.

    • data.invalid_dates string

      The number of input documents with either a missing date field or a date that could not be parsed.

    • data.missing_fields string

      The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing. If you are using datafeeds or posting data to the job in JSON format, a high missing_field_count is often not an indication of data issues. It is not necessarily a cause for concern.

    • data.out_of_order_timestamps string

      The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.

    • data.empty_buckets string

      The number of buckets which did not contain any data. If your data contains many empty buckets, consider increasing your bucket_span or using functions that are tolerant to gaps in data such as mean, non_null_sum or non_zero_count.

    • data.sparse_buckets string

      The number of buckets that contained few data points compared to the expected number of data points. If your data contains many sparse buckets, consider using a longer bucket_span.

    • data.buckets string

      The total number of buckets processed.

    • data.earliest_record string

      The timestamp of the earliest chronologically input document.

    • data.latest_record string

      The timestamp of the latest chronologically input document.

    • data.last string

      The timestamp at which data was last analyzed, according to server time.

    • data.last_empty_bucket string

      The timestamp of the last bucket that did not contain any data.

    • data.last_sparse_bucket string

      The timestamp of the last bucket that was considered sparse.

    • model.bytes number | string

      The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.

    • model.memory_status string

      The status of the mathematical models.

      Values are ok, soft_limit, or hard_limit.

    • model.bytes_exceeded number | string

      The number of bytes over the high limit for memory usage at the last allocation failure.

    • model.memory_limit string

      The upper limit for model memory usage, checked on increasing values.

    • model.by_fields string

      The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.over_fields string

      The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.partition_fields string

      The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.bucket_allocation_failures string

      The number of buckets for which new entities in incoming data were not processed due to insufficient model memory. This situation is also signified by a hard_limit: memory_status property value.

    • model.categorization_status string

      The status of categorization for the job.

      Values are ok or warn.

    • model.categorized_doc_count string

      The number of documents that have had a field categorized.

    • model.total_category_count string

      The number of categories created by categorization.

    • model.frequent_category_count string

      The number of categories that match more than 1% of categorized documents.

    • model.rare_category_count string

      The number of categories that match just one categorized document.

    • model.dead_category_count string

      The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.

    • model.failed_category_count string

      The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model_memory_limit. This count does not track which specific categories failed to be created. Therefore you cannot use this value to determine the number of unique categories that were missed.

    • model.log_time string

      The timestamp when the model stats were gathered, according to server time.

    • model.timestamp string

      The timestamp of the last record when the model stats were gathered.

    • forecasts.total string

      The number of individual forecasts currently available for the job. A value of one or more indicates that forecasts exist.

    • forecasts.memory.min string

      The minimum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.max string

      The maximum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.avg string

      The average memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.total string

      The total memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.records.min string

      The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.max string

      The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.avg string

      The average number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.total string

      The total number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.time.min string

      The minimum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.max string

      The maximum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.avg string

      The average runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.total string

      The total runtime in milliseconds for forecasts related to the anomaly detection job.

    • node.id string

      The unique identifier of the assigned node.

    • node.name string

      The name of the assigned node.

    • node.ephemeral_id string

      The ephemeral identifier of the assigned node.

    • node.address string

      The network address of the assigned node.

    • buckets.count string

      The number of bucket results produced by the job.

    • buckets.time.total string

      The sum of all bucket processing times, in milliseconds.

    • buckets.time.min string

      The minimum of all bucket processing times, in milliseconds.

    • buckets.time.max string

      The maximum of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg string

      The exponential moving average of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg_hour string

      The exponential moving average of bucket processing times calculated in a one hour time window, in milliseconds.

GET /_cat/ml/anomaly_detectors/{job_id}
GET _cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json
resp = client.cat.ml_jobs(
    h="id,s,dpr,mb",
    v=True,
    format="json",
)
const response = await client.cat.mlJobs({
  h: "id,s,dpr,mb",
  v: "true",
  format: "json",
});
response = client.cat.ml_jobs(
  h: "id,s,dpr,mb",
  v: "true",
  format: "json"
)
$resp = $client->cat()->mlJobs([
    "h" => "id,s,dpr,mb",
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json"
client.cat().mlJobs();
Response examples (200)
A successful response from `GET _cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json`.
[
  {
    "id": "high_sum_total_sales",
    "s": "closed",
    "dpr": "14022",
    "mb": "1.5mb"
  },
  {
    "id": "low_request_rate",
    "s": "closed",
    "dpr": "1216",
    "mb": "40.5kb"
  },
  {
    "id": "response_code_rates",
    "s": "closed",
    "dpr": "28146",
    "mb": "132.7kb"
  },
  {
    "id": "url_scanning",
    "s": "closed",
    "dpr": "28146",
    "mb": "501.6kb"
  }
]
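
The s column carries the job state, so the JSON output can be scanned for jobs that need attention. A minimal sketch, assuming the resp object from the Python example above:

# Report any job that is not simply opened or closed. The "s" key is the
# short alias for the state column requested via h=id,s,dpr,mb.
for job in resp.body:
    if job["s"] not in ("opened", "closed"):
        print(f"job {job['id']} is in state {job['s']}")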

Get plugin information Generally available

GET /_cat/plugins

Get a list of plugins running on each node of a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards. See the example after this parameter list.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • include_bootstrap boolean

    Include bootstrap plugins in the response.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.
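
For example, the following request selects and sorts a few columns. This is a minimal sketch using the Python client, assuming a configured client object as in the examples below.

# A minimal sketch: list one row per plugin per node, sorted by plugin name.
resp = client.cat.plugins(
    h="name,component,version",  # node name, plugin name, plugin version
    s="component",               # sort alphabetically by plugin name
    format="json",
)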

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      The unique node identifier.

    • name string

      The node name.

    • component string

      The component name.

    • version string

      The component version.

    • description string

      The plugin details.

    • type string

      The plugin type.

GET /_cat/plugins
GET /_cat/plugins?v=true&s=component&h=name,component,version,description&format=json
resp = client.cat.plugins(
    v=True,
    s="component",
    h="name,component,version,description",
    format="json",
)
const response = await client.cat.plugins({
  v: "true",
  s: "component",
  h: "name,component,version,description",
  format: "json",
});
response = client.cat.plugins(
  v: "true",
  s: "component",
  h: "name,component,version,description",
  format: "json"
)
$resp = $client->cat()->plugins([
    "v" => "true",
    "s" => "component",
    "h" => "name,component,version,description",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/plugins?v=true&s=component&h=name,component,version,description&format=json"
client.cat().plugins();
Response examples (200)
A successful response from `GET /_cat/plugins?v=true&s=component&h=name,component,version,description&format=json`.
[
  { "name": "U7321H6", "component": "analysis-icu", "version": "8.17.0", "description": "The ICU Analysis plugin integrates the Lucene ICU module into Elasticsearch, adding ICU-related analysis components." },
  { "name": "U7321H6", "component": "analysis-kuromoji", "version": "8.17.0", "description": "The Japanese (kuromoji) Analysis plugin integrates Lucene kuromoji analysis module into elasticsearch." },
  { "name": "U7321H6", "component": "analysis-nori", "version": "8.17.0", "description": "The Korean (nori) Analysis plugin integrates Lucene nori analysis module into elasticsearch." },
  { "name": "U7321H6", "component": "analysis-phonetic", "version": "8.17.0", "description": "The Phonetic Analysis plugin integrates phonetic token filter analysis with elasticsearch." },
  { "name": "U7321H6", "component": "analysis-smartcn", "version": "8.17.0", "description": "Smart Chinese Analysis plugin integrates Lucene Smart Chinese analysis module into elasticsearch." },
  { "name": "U7321H6", "component": "analysis-stempel", "version": "8.17.0", "description": "The Stempel (Polish) Analysis plugin integrates Lucene stempel (polish) analysis module into elasticsearch." },
  { "name": "U7321H6", "component": "analysis-ukrainian", "version": "8.17.0", "description": "The Ukrainian Analysis plugin integrates the Lucene UkrainianMorfologikAnalyzer into elasticsearch." },
  { "name": "U7321H6", "component": "discovery-azure-classic", "version": "8.17.0", "description": "The Azure Classic Discovery plugin allows to use Azure Classic API for the unicast discovery mechanism" },
  { "name": "U7321H6", "component": "discovery-ec2", "version": "8.17.0", "description": "The EC2 discovery plugin allows to use AWS API for the unicast discovery mechanism." },
  { "name": "U7321H6", "component": "discovery-gce", "version": "8.17.0", "description": "The Google Compute Engine (GCE) Discovery plugin allows to use GCE API for the unicast discovery mechanism." },
  { "name": "U7321H6", "component": "mapper-annotated-text", "version": "8.17.0", "description": "The Mapper Annotated_text plugin adds support for text fields with markup used to inject annotation tokens into the index." },
  { "name": "U7321H6", "component": "mapper-murmur3", "version": "8.17.0", "description": "The Mapper Murmur3 plugin allows to compute hashes of a field's values at index-time and to store them in the index." },
  { "name": "U7321H6", "component": "mapper-size", "version": "8.17.0", "description": "The Mapper Size plugin allows document to record their uncompressed size at index time." },
  { "name": "U7321H6", "component": "store-smb", "version": "8.17.0", "description": "The Store SMB plugin adds support for SMB stores." }
]

Get shard information Generally available

GET /_cat/shards/{index}

All methods and paths for this operation:

GET /_cat/shards

GET /_cat/shards/{index}

Get information about the shards in a cluster. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.

Required authorization

  • Index privileges: monitor
  • Cluster privileges: monitor

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards. See the example after this parameter list.

    Supported values include:

    • completion.size (or cs, completionSize): Size of completion. For example: 0b.
    • dataset.size: Disk space used by the shard’s dataset, which may or may not be the size on disk, but includes space used by the shard on object storage. Reported as a size value, for example: 5kb.
    • dense_vector.value_count (or dvc, denseVectorCount): Number of indexed dense vectors.
    • docs (or d, dc): Number of documents in shard, for example: 25.
    • fielddata.evictions (or fe, fielddataEvictions): Fielddata cache evictions, for example: 0.
    • fielddata.memory_size (or fm, fielddataMemory): Used fielddata cache memory, for example: 0b.
    • flush.total (or ft, flushTotal): Number of flushes, for example: 1.
    • flush.total_time (or ftt, flushTotalTime): Time spent in flush, for example: 1.
    • get.current (or gc, getCurrent): Number of current get operations, for example: 0.
    • get.exists_time (or geti, getExistsTime): Time spent in successful gets, for example: 14ms.
    • get.exists_total (or geto, getExistsTotal): Number of successful get operations, for example: 2.
    • get.missing_time (or gmti, getMissingTime): Time spent in failed gets, for example: 0s.
    • get.missing_total (or gmto, getMissingTotal): Number of failed get operations, for example: 1.
    • get.time (or gti, getTime): Time spent in get, for example: 14ms.
    • get.total (or gto, getTotal): Number of get operations, for example: 2.
    • id: ID of the node, for example: k0zy.
    • index (or i, idx): Name of the index.
    • indexing.delete_current (or idc, indexingDeleteCurrent): Number of current deletion operations, for example: 0.
    • indexing.delete_time (or idti, indexingDeleteTime): Time spent in deletions, for example: 2ms.
    • indexing.delete_total (or idto, indexingDeleteTotal): Number of deletion operations, for example: 2.
    • indexing.index_current (or iic, indexingIndexCurrent): Number of current indexing operations, for example: 0.
    • indexing.index_failed_due_to_version_conflict (or iifvc, indexingIndexFailedDueToVersionConflict): Number of failed indexing operations due to version conflict, for example: 0.
    • indexing.index_failed (or iif, indexingIndexFailed): Number of failed indexing operations, for example: 0.
    • indexing.index_time (or iiti, indexingIndexTime): Time spent in indexing, for example: 134ms.
    • indexing.index_total (or iito, indexingIndexTotal): Number of indexing operations, for example: 1.
    • ip: IP address of the node, for example: 127.0.1.1.
    • merges.current (or mc, mergesCurrent): Number of current merge operations, for example: 0.
    • merges.current_docs (or mcd, mergesCurrentDocs): Number of current merging documents, for example: 0.
    • merges.current_size (or mcs, mergesCurrentSize): Size of current merges, for example: 0b.
    • merges.total (or mt, mergesTotal): Number of completed merge operations, for example: 0.
    • merges.total_docs (or mtd, mergesTotalDocs): Number of merged documents, for example: 0.
    • merges.total_size (or mts, mergesTotalSize): Size of current merges, for example: 0b.
    • merges.total_time (or mtt, mergesTotalTime): Time spent merging documents, for example: 0s.
    • node (or n): Node name, for example: I8hydUG.
    • prirep (or p, pr, primaryOrReplica): Shard type. Returned values are primary or replica.
    • query_cache.evictions (or qce, queryCacheEvictions): Query cache evictions, for example: 0.
    • query_cache.memory_size (or qcm, queryCacheMemory): Used query cache memory, for example: 0b.
    • recoverysource.type (or rs): Type of recovery source.
    • refresh.time (or rti, refreshTime): Time spent in refreshes, for example: 91ms.
    • refresh.total (or rto, refreshTotal): Number of refreshes, for example: 16.
    • search.fetch_current (or sfc, searchFetchCurrent): Current fetch phase operations, for example: 0.
    • search.fetch_time (or sfti, searchFetchTime): Time spent in fetch phase, for example: 37ms.
    • search.fetch_total (or sfto, searchFetchTotal): Number of fetch operations, for example: 7.
    • search.open_contexts (or so, searchOpenContexts): Open search contexts, for example: 0.
    • search.query_current (or sqc, searchQueryCurrent): Current query phase operations, for example: 0.
    • search.query_time (or sqti, searchQueryTime): Time spent in query phase, for example: 43ms.
    • search.query_total (or sqto, searchQueryTotal): Number of query operations, for example: 9.
    • search.scroll_current (or scc, searchScrollCurrent): Open scroll contexts, for example: 2.
    • search.scroll_time (or scti, searchScrollTime): Time scroll contexts held open, for example: 2m.
    • search.scroll_total (or scto, searchScrollTotal): Completed scroll contexts, for example: 1.
    • segments.count (or sc, segmentsCount): Number of segments, for example: 4.
    • segments.fixed_bitset_memory (or sfbm, fixedBitsetMemory): Memory used by fixed bit sets for nested object field types and type filters for types referred in join fields, for example: 1.0kb.
    • segments.index_writer_memory (or siwm, segmentsIndexWriterMemory): Memory used by index writer, for example: 18mb.
    • segments.memory (or sm, segmentsMemory): Memory used by segments, for example: 1.4kb.
    • segments.version_map_memory (or svmm, segmentsVersionMapMemory): Memory used by version map, for example: 1.0kb.
    • seq_no.global_checkpoint (or sqg, globalCheckpoint): Global checkpoint.
    • seq_no.local_checkpoint (or sql, localCheckpoint): Local checkpoint.
    • seq_no.max (or sqm, maxSeqNo): Maximum sequence number.
    • shard (or s, sh): Name of the shard.
    • sparse_vector.value_count (or svc, sparseVectorCount): Number of indexed sparse vectors.
    • state (or st): State of the shard. Returned values are:
      • INITIALIZING: The shard is recovering from a peer shard or gateway.
      • RELOCATING: The shard is relocating.
      • STARTED: The shard has started.
      • UNASSIGNED: The shard is not assigned to any node.
    • store (or sto): Disk space used by the shard, for example: 5kb.
    • suggest.current (or suc, suggestCurrent): Number of current suggest operations, for example: 0.
    • suggest.time (or suti, suggestTime): Time spent in suggest, for example: 0.
    • suggest.total (or suto, suggestTotal): Number of suggest operations, for example: 0.
    • sync_id: Sync ID of the shard.
    • unassigned.at (or ua): Time at which the shard became unassigned in Coordinated Universal Time (UTC).
    • unassigned.details (or ud): Details about why the shard became unassigned. This does not explain why the shard is currently unassigned. To understand why a shard is not assigned, use the Cluster allocation explain API.
    • unassigned.for (or uf): Time at which the shard was requested to be unassigned in Coordinated Universal Time (UTC).
    • unassigned.reason (or ur): Indicates the reason for the last change to the state of this unassigned shard. This does not explain why the shard is currently unassigned. To understand why a shard is not assigned, use the Cluster allocation explain API. Returned values include:

      • ALLOCATION_FAILED: Unassigned as a result of a failed allocation of the shard.
      • CLUSTER_RECOVERED: Unassigned as a result of a full cluster recovery.
      • DANGLING_INDEX_IMPORTED: Unassigned as a result of importing a dangling index.
      • EXISTING_INDEX_RESTORED: Unassigned as a result of restoring into a closed index.
      • FORCED_EMPTY_PRIMARY: The shard’s allocation was last modified by forcing an empty primary using the Cluster reroute API.
      • INDEX_CLOSED: Unassigned because the index was closed.
      • INDEX_CREATED: Unassigned as a result of an API creation of an index.
      • INDEX_REOPENED: Unassigned as a result of opening a closed index.
      • MANUAL_ALLOCATION: The shard’s allocation was last modified by the Cluster reroute API.
      • NEW_INDEX_RESTORED: Unassigned as a result of restoring into a new index.
      • NODE_LEFT: Unassigned as a result of the node hosting it leaving the cluster.
      • NODE_RESTARTING: Similar to NODE_LEFT, except that the node was registered as restarting using the Node shutdown API.
      • PRIMARY_FAILED: The shard was initializing as a replica, but the primary shard failed before the initialization completed.
      • REALLOCATED_REPLICA: A better replica location is identified and causes the existing replica allocation to be cancelled.
      • REINITIALIZED: When a shard moves from started back to initializing.
      • REPLICA_ADDED: Unassigned as a result of explicit addition of a replica.
      • REROUTE_CANCELLED: Unassigned as a result of explicit cancel reroute command.
  • s string | array[string]

    A comma-separated list of column names or aliases that determines the sort order. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • master_timeout string

    The period to wait for a connection to the master node.

    Values are -1 or 0.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.
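
For example, the following request narrows the output to the columns needed to spot unassigned shards and why they are unassigned. This is a minimal sketch using the Python client, assuming a configured client object; the column names and the UNASSIGNED state value are taken from the lists above.

# A minimal sketch: fetch shard state plus the unassigned reason, then
# report only the shards that are currently unassigned.
resp = client.cat.shards(
    h="index,shard,prirep,state,unassigned.reason",
    s="state",
    format="json",
)
for row in resp.body:
    if row["state"] == "UNASSIGNED":
        print(f"{row['index']}[{row['shard']}] ({row['prirep']}): "
              f"{row['unassigned.reason']}")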

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • index string

      The index name.

    • shard string

      The shard name.

    • prirep string

      The shard type: primary or replica.

    • state string

      The shard state. Returned values include:

      • INITIALIZING: The shard is recovering from a peer shard or gateway.
      • RELOCATING: The shard is relocating.
      • STARTED: The shard has started.
      • UNASSIGNED: The shard is not assigned to any node.

    • docs string | null

      The number of documents in the shard.

    • store string | null

      The disk space used by the shard.

    • dataset string | null

      The total size of the dataset (including the cache for partially mounted indices).

    • ip string | null

      The IP address of the node.

    • id string

      The unique identifier for the node.

    • node string | null

      The name of the node.

    • sync_id string

      The sync identifier.

    • unassigned.reason string

      The reason for the last change to the state of an unassigned shard. It does not explain why the shard is currently unassigned; use the cluster allocation explain API for that information. Returned values include:

      • ALLOCATION_FAILED: Unassigned as a result of a failed allocation of the shard.
      • CLUSTER_RECOVERED: Unassigned as a result of a full cluster recovery.
      • DANGLING_INDEX_IMPORTED: Unassigned as a result of importing a dangling index.
      • EXISTING_INDEX_RESTORED: Unassigned as a result of restoring into a closed index.
      • FORCED_EMPTY_PRIMARY: The shard’s allocation was last modified by forcing an empty primary using the cluster reroute API.
      • INDEX_CLOSED: Unassigned because the index was closed.
      • INDEX_CREATED: Unassigned as a result of an API creation of an index.
      • INDEX_REOPENED: Unassigned as a result of opening a closed index.
      • MANUAL_ALLOCATION: The shard’s allocation was last modified by the cluster reroute API.
      • NEW_INDEX_RESTORED: Unassigned as a result of restoring into a new index.
      • NODE_LEFT: Unassigned as a result of the node hosting it leaving the cluster.
      • NODE_RESTARTING: Similar to NODE_LEFT, except that the node was registered as restarting using the node shutdown API.
      • PRIMARY_FAILED: The shard was initializing as a replica, but the primary shard failed before the initialization completed.
      • REALLOCATED_REPLICA: A better replica location is identified and causes the existing replica allocation to be cancelled.
      • REINITIALIZED: When a shard moves from started back to initializing.
      • REPLICA_ADDED: Unassigned as a result of explicit addition of a replica.
      • REROUTE_CANCELLED: Unassigned as a result of explicit cancel reroute command.

    • unassigned.at string

      The time at which the shard became unassigned in Coordinated Universal Time (UTC).

    • unassigned.for string

      The amount of time for which the shard has been unassigned.

    • unassigned.details string

      Additional details as to why the shard became unassigned. It does not explain why the shard is not assigned; use the cluster allocation explain API for that information.

    • recoverysource.type string

      The type of recovery source.

    • completion.size string

      The size of completion.

    • fielddata.memory_size string

      The used fielddata cache memory.

    • fielddata.evictions string

      The fielddata cache evictions.

    • query_cache.memory_size string

      The used query cache memory.

    • query_cache.evictions string

      The query cache evictions.

    • flush.total string

      The number of flushes.

    • flush.total_time string

      The time spent in flush.

    • get.current string

      The number of current get operations.

    • get.time string

      The time spent in get operations.

    • get.total string

      The number of get operations.

    • get.exists_time string

      The time spent in successful get operations.

    • get.exists_total string

      The number of successful get operations.

    • get.missing_time string

      The time spent in failed get operations.

    • get.missing_total string

      The number of failed get operations.

    • indexing.delete_current string

      The number of current deletion operations.

    • indexing.delete_time string

      The time spent in deletion operations.

    • indexing.delete_total string

      The number of delete operations.

    • indexing.index_current string

      The number of current indexing operations.

    • indexing.index_time string

      The time spent in indexing operations.

    • indexing.index_total string

      The number of indexing operations.

    • indexing.index_failed string

      The number of failed indexing operations.

    • merges.current string

      The number of current merge operations.

    • merges.current_docs string

      The number of current merging documents.

    • merges.current_size string

      The size of current merge operations.

    • merges.total string

      The number of completed merge operations.

    • merges.total_docs string

      The number of merged documents.

    • merges.total_size string

      The total size of completed merge operations.

    • merges.total_time string

      The time spent merging documents.

    • refresh.total string

      The total number of refreshes.

    • refresh.time string

      The time spent in refreshes.

    • refresh.external_total string

      The total number of external refreshes.

    • refresh.external_time string

      The time spent in external refreshes.

    • refresh.listeners string

      The number of pending refresh listeners.

    • search.fetch_current string

      The number of current fetch phase operations.

    • search.fetch_time string

      The time spent in fetch phase.

    • search.fetch_total string

      The total number of fetch operations.

    • search.open_contexts string

      The number of open search contexts.

    • search.query_current string

      The number of current query phase operations.

    • search.query_time string

      The time spent in query phase.

    • search.query_total string

      The total number of query phase operations.

    • search.scroll_current string

      The number of open scroll contexts.

    • search.scroll_time string

      The time scroll contexts were held open.

    • search.scroll_total string

      The number of completed scroll contexts.

    • segments.count string

      The number of segments.

    • segments.memory string

      The memory used by segments.

    • segments.index_writer_memory string

      The memory used by the index writer.

    • segments.version_map_memory string

      The memory used by the version map.

    • segments.fixed_bitset_memory string

      The memory used by fixed bit sets for nested object field types and type filters for types referred to in _parent fields.

    • seq_no.max string

      The maximum sequence number.

    • seq_no.local_checkpoint string

      The local checkpoint.

    • seq_no.global_checkpoint string

      The global checkpoint.

    • warmer.current string

      The number of current warmer operations.

    • warmer.total string

      The total number of warmer operations.

    • warmer.total_time string

      The time spent in warmer operations.

    • path.data string

      The shard data path.

    • path.state string

      The shard state path.

    • bulk.total_operations string

      The number of bulk shard operations.

    • bulk.total_time string

      The time spent in shard bulk operations.

    • bulk.total_size_in_bytes string

      The total size in bytes of shard bulk operations.

    • bulk.avg_time string

      The average time spent in shard bulk operations.

    • bulk.avg_size_in_bytes string

      The average size in bytes of shard bulk operations.

GET /_cat/shards/{index}
GET _cat/shards?format=json
resp = client.cat.shards(
    format="json",
)
const response = await client.cat.shards({
  format: "json",
});
response = client.cat.shards(
  format: "json"
)
$resp = $client->cat()->shards([
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/shards?format=json"
client.cat().shards();
Response examples (200)
A successful response from `GET _cat/shards?format=json`.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  }
]
A successful response from `GET _cat/shards/my-index-*?format=json`. It returns information for any data streams or indices beginning with `my-index-`.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  }
]
A successful response from `GET _cat/shards?format=json`. The `RELOCATING` value in the `state` column indicates the index shard is relocating.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "RELOCATING",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA -> -> 192.168.56.30 bGG90GE"
  }
]
A successful response from `GET _cat/shards?format=json`. Before a shard is available for use, it goes through an `INITIALIZING` state. You can use the cat shards API to see which shards are initializing.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "INITIALIZING",
    "docs": "0",
    "store": "14.3mb",
    "dataset": "249b",
    "ip": "192.168.56.30",
    "node": "bGG90GE"
  }
]
A successful response from `GET _cat/shards?h=index,shard,prirep,state,unassigned.reason&format=json`. It includes the `unassigned.reason` column, which indicates why a shard is unassigned.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.10 H5dfFeA"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.30 bGG90GE"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.20 I8hydUG"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "UNASSIGNED",
    "unassigned.reason": "ALLOCATION_FAILED"
  }
]
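
The same information can be pulled programmatically. The following is a minimal sketch, not one of the reference examples above: it assumes a configured Python Elasticsearch client named `client` and prints the reason for every unassigned shard, using the `h` and `s` parameters documented earlier.

# Sketch only: assumes `client = Elasticsearch(...)` is already set up.
resp = client.cat.shards(
    h="index,shard,prirep,state,unassigned.reason",  # request only these columns
    s="index",                                       # sort ascending by index name
    format="json",
)
for row in resp:
    if row["state"] == "UNASSIGNED":
        print(row["index"], row["shard"], row["unassigned.reason"])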

Update voting configuration exclusions Generally available; Added in 7.0.0

POST /_cluster/voting_config_exclusions

Update the cluster voting config exclusions by node IDs or node names. By default, if there are more than three master-eligible nodes in the cluster and you remove fewer than half of the master-eligible nodes in the cluster at once, the voting configuration automatically shrinks. If you want to shrink the voting configuration to contain fewer than three nodes or to remove half or more of the master-eligible nodes in the cluster at once, use this API to remove departing nodes from the voting configuration manually. The API adds an entry for each specified node to the cluster’s voting configuration exclusions list. It then waits until the cluster has reconfigured its voting configuration to exclude the specified nodes.

Clusters should have no voting configuration exclusions in normal operation. Once the excluded nodes have stopped, clear the voting configuration exclusions with DELETE /_cluster/voting_config_exclusions. This API waits for the nodes to be fully removed from the cluster before it returns. If your cluster has voting configuration exclusions for nodes that you no longer intend to remove, use DELETE /_cluster/voting_config_exclusions?wait_for_removal=false to clear the voting configuration exclusions without waiting for the nodes to leave the cluster.

A response to POST /_cluster/voting_config_exclusions with an HTTP status code of 200 OK guarantees that the node has been removed from the voting configuration and will not be reinstated until the voting configuration exclusions are cleared by calling DELETE /_cluster/voting_config_exclusions. If the call to POST /_cluster/voting_config_exclusions fails or returns a response with an HTTP status code other than 200 OK then the node may not have been removed from the voting configuration. In that case, you may safely retry the call.

NOTE: Voting exclusions are required only when you remove at least half of the master-eligible nodes from a cluster in a short time period. They are not required when removing master-ineligible nodes or when removing fewer than half of the master-eligible nodes.

External documentation

Query parameters

  • node_names string | array[string]

    A comma-separated list of the names of the nodes to exclude from the voting configuration. If specified, you may not also specify node_ids.

  • node_ids string | array[string]

    A comma-separated list of the persistent ids of the nodes to exclude from the voting configuration. If specified, you may not also specify node_names.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

  • timeout string

    When adding a voting configuration exclusion, the API waits for the specified nodes to be excluded from the voting configuration before returning. If the timeout expires before the appropriate condition is satisfied, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
POST /_cluster/voting_config_exclusions
curl \
 --request POST 'http://api.example.com/_cluster/voting_config_exclusions' \
 --header "Authorization: $API_KEY"
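
As a hedged sketch of the removal workflow described above (the node name `node-1` is illustrative), the Python client exposes the same endpoints:

# Exclude a departing master-eligible node from the voting configuration.
client.cluster.post_voting_config_exclusions(node_names="node-1")
# ...stop node-1, then clear the exclusions once it has left the cluster:
client.cluster.delete_voting_config_exclusions(wait_for_removal=True)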

Reroute the cluster Generally available; Added in 5.0.0

POST /_cluster/reroute

Manually change the allocation of individual shards in the cluster. For example, a shard can be moved from one node to another explicitly, an allocation can be canceled, and an unassigned shard can be explicitly allocated to a specific node.

It is important to note that after processing any reroute commands Elasticsearch will perform rebalancing as normal (respecting the values of settings such as cluster.routing.rebalance.enable) in order to remain in a balanced state. For example, if the requested allocation includes moving a shard from node1 to node2 then this may cause a shard to be moved from node2 back to node1 to even things out.

The cluster can be set to disable allocations using the cluster.routing.allocation.enable setting. If allocations are disabled then the only allocations that will be performed are explicit ones given using the reroute command, and consequent allocations due to rebalancing.

The cluster will attempt to allocate a shard a maximum of index.allocation.max_retries times in a row (defaults to 5), before giving up and leaving the shard unallocated. This scenario can be caused by structural problems such as having an analyzer which refers to a stopwords file which doesn’t exist on all nodes.

Once the problem has been corrected, allocation can be manually retried by calling the reroute API with the ?retry_failed URI query parameter, which will attempt a single retry round for these shards.
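
A retry round needs no command body. As a minimal sketch with the Python client (assuming the underlying problem has been fixed):

# Retry shards that exhausted index.allocation.max_retries.
resp = client.cluster.reroute(retry_failed=True, metric="none")
print(resp["acknowledged"])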

Query parameters

  • dry_run boolean

    If true, then the request simulates the operation. It will calculate the result of applying the commands to the current cluster state and return the resulting cluster state after the commands (and rebalancing) have been applied; it will not actually perform the requested changes.

  • explain boolean

    If true, then the response contains an explanation of why the commands can or cannot run.

  • metric string | array[string]

    Limits the information returned to the specified metrics.

  • retry_failed boolean

    If true, then retries allocation of shards that are blocked due to too many subsequent allocation failures.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

application/json

Body

  • commands array[object]

    Defines the commands to perform.

    Hide commands attributes Show commands attributes object
    • cancel object

      Cancel allocation of a shard (or recovery). Accepts index and shard for index name and shard number, and node for the node to cancel the shard allocation on. This can be used to force resynchronization of existing replicas from the primary shard by cancelling them and allowing them to be reinitialized through the standard recovery process. By default only replica shard allocations can be cancelled. If it is necessary to cancel the allocation of a primary shard then the allow_primary flag must also be included in the request.

      Hide cancel attributes Show cancel attributes object
      • index string Required
      • shard number Required
      • node string Required
      • allow_primary boolean
    • move object

      Move a started shard from one node to another node. Accepts index and shard for index name and shard number, from_node for the node to move the shard from, and to_node for the node to move the shard to.

      Hide move attributes Show move attributes object
      • index string Required
      • shard number Required
      • from_node string Required

        The node to move the shard from

      • to_node string Required

        The node to move the shard to

    • allocate_replica object

      Allocate an unassigned replica shard to a node. Accepts index and shard for index name and shard number, and node to allocate the shard to. Takes allocation deciders into account.

      Hide allocate_replica attributes Show allocate_replica attributes object
      • index string Required
      • shard number Required
      • node string Required
    • allocate_stale_primary object

      Allocate a primary shard to a node that holds a stale copy. Accepts the index and shard for index name and shard number, and node to allocate the shard to. Using this command may lead to data loss for the provided shard id. If a node which has the good copy of the data rejoins the cluster later on, that data will be deleted or overwritten with the data of the stale copy that was forcefully allocated with this command. To ensure that these implications are well-understood, this command requires the flag accept_data_loss to be explicitly set to true.

      Hide allocate_stale_primary attributes Show allocate_stale_primary attributes object
      • index string Required
      • shard number Required
      • node string Required
      • accept_data_loss boolean Required

        If a node which has a copy of the data rejoins the cluster later on, that data will be deleted. To ensure that these implications are well-understood, this command requires the flag accept_data_loss to be explicitly set to true

    • allocate_empty_primary object

      Allocate an empty primary shard to a node. Accepts the index and shard for index name and shard number, and node to allocate the shard to. Using this command leads to a complete loss of all data that was indexed into this shard, if it was previously started. If a node which has a copy of the data rejoins the cluster later on, that data will be deleted. To ensure that these implications are well-understood, this command requires the flag accept_data_loss to be explicitly set to true.

      Hide allocate_empty_primary attributes Show allocate_empty_primary attributes object
      • index string Required
      • shard number Required
      • node string Required
      • accept_data_loss boolean Required

        If a node which has a copy of the data rejoins the cluster later on, that data will be deleted. To ensure that these implications are well-understood, this command requires the flag accept_data_loss to be explicitly set to true

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • explanations array[object]
      Hide explanations attributes Show explanations attributes object
      • command string Required
      • decisions array[object] Required
        Hide decisions attributes Show decisions attributes object
        • decider string Required
        • decision string Required
        • explanation string Required
      • parameters object Required
        Hide parameters attributes Show parameters attributes object
        • allow_primary boolean Required
        • index string Required
        • node string Required
        • shard number Required
        • from_node string
        • to_node string
    • state object

      There aren't any guarantees on the output/structure of the raw cluster state. Here you will find the internal representation of the cluster, which can differ from the external representation.

POST /_cluster/reroute
POST /_cluster/reroute?metric=none
{
  "commands": [
    {
      "move": {
        "index": "test", "shard": 0,
        "from_node": "node1", "to_node": "node2"
      }
    },
    {
      "allocate_replica": {
        "index": "test", "shard": 1,
        "node": "node3"
      }
    }
  ]
}
resp = client.cluster.reroute(
    metric="none",
    commands=[
        {
            "move": {
                "index": "test",
                "shard": 0,
                "from_node": "node1",
                "to_node": "node2"
            }
        },
        {
            "allocate_replica": {
                "index": "test",
                "shard": 1,
                "node": "node3"
            }
        }
    ],
)
const response = await client.cluster.reroute({
  metric: "none",
  commands: [
    {
      move: {
        index: "test",
        shard: 0,
        from_node: "node1",
        to_node: "node2",
      },
    },
    {
      allocate_replica: {
        index: "test",
        shard: 1,
        node: "node3",
      },
    },
  ],
});
response = client.cluster.reroute(
  metric: "none",
  body: {
    "commands": [
      {
        "move": {
          "index": "test",
          "shard": 0,
          "from_node": "node1",
          "to_node": "node2"
        }
      },
      {
        "allocate_replica": {
          "index": "test",
          "shard": 1,
          "node": "node3"
        }
      }
    ]
  }
)
$resp = $client->cluster()->reroute([
    "metric" => "none",
    "body" => [
        "commands" => array(
            [
                "move" => [
                    "index" => "test",
                    "shard" => 0,
                    "from_node" => "node1",
                    "to_node" => "node2",
                ],
            ],
            [
                "allocate_replica" => [
                    "index" => "test",
                    "shard" => 1,
                    "node" => "node3",
                ],
            ],
        ),
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"commands":[{"move":{"index":"test","shard":0,"from_node":"node1","to_node":"node2"}},{"allocate_replica":{"index":"test","shard":1,"node":"node3"}}]}' "$ELASTICSEARCH_URL/_cluster/reroute?metric=none"
Request example
Run `POST /_cluster/reroute?metric=none` to change the allocation of shards in a cluster.
{
  "commands": [
    {
      "move": {
        "index": "test", "shard": 0,
        "from_node": "node1", "to_node": "node2"
      }
    },
    {
      "allocate_replica": {
        "index": "test", "shard": 1,
        "node": "node3"
      }
    }
  ]
}
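
A related sketch, not among the official examples: combining dry_run with explain returns the allocation deciders' verdicts without changing the cluster. Node and index names are the illustrative ones from the example above.

resp = client.cluster.reroute(
    dry_run=True,   # simulate only; cluster state is not changed
    explain=True,   # include decider explanations in the response
    commands=[
        {"move": {"index": "test", "shard": 0,
                  "from_node": "node1", "to_node": "node2"}}
    ],
)
for e in resp["explanations"]:
    for d in e["decisions"]:
        print(e["command"], d["decider"], d["decision"])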

Get node information Generally available; Added in 1.3.0

GET /_nodes/{node_id}/{metric}

All methods and paths for this operation:

GET /_nodes

GET /_nodes/{metric}
GET /_nodes/{node_id}
GET /_nodes/{node_id}/{metric}

By default, the API returns all attributes and core settings for cluster nodes.

Path parameters

  • node_id string | array[string] Required

    Comma-separated list of node IDs or names used to limit returned information.

  • metric string | array[string] Required

    Limits the information returned to the specific metrics. Supports a comma-separated list, such as http,ingest.

Query parameters

  • flat_settings boolean

    If true, returns settings in flat format.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object

      Contains statistics about the number of nodes selected by the request’s node filters.

      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide failures attributes Show failures attributes object
        • type string Required

          The type of error

        • reason
        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by
        • root_cause array[object]
        • suppressed array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • attributes object Required
          Hide attributes attribute Show attributes attribute object
          • * string Additional properties
        • build_flavor string Required
        • build_hash string Required

          Short hash of the last git commit in this release.

        • build_type string Required
        • component_versions object Required
          Hide component_versions attribute Show component_versions attribute object
          • * number Additional properties
        • host string Required

          The node’s host name.

        • http object
          Hide http attributes Show http attributes object
          • bound_address array[string] Required
          • max_content_length_in_bytes number Required
          • publish_address string Required
        • index_version number Required
        • ip string Required

          The node’s IP address.

        • jvm object
          Hide jvm attributes Show jvm attributes object
          • gc_collectors array[string] Required
          • memory_pools array[string] Required
          • pid number Required
          • vm_vendor string Required
          • using_bundled_jdk boolean Required
          • using_compressed_ordinary_object_pointers
          • input_arguments array[string] Required
        • name string Required

          The node's name

        • os object
          Hide os attributes Show os attributes object
          • arch string Required

            Name of the JVM architecture (ex: amd64, x86)

          • available_processors number Required

            Number of processors available to the Java virtual machine

          • allocated_processors number

            The number of processors actually used to calculate thread pool size. This number can be set with the node.processors setting of a node and defaults to the number of processors reported by the OS.

        • plugins array[object]
          Hide plugins attributes Show plugins attributes object
          • classname string Required
          • description string Required
          • elasticsearch_version
          • extended_plugins array[string] Required
          • has_native_controller boolean Required
          • java_version
          • name
          • version
          • licensed boolean Required
        • process object
          Hide process attributes Show process attributes object
          • id number Required

            Process identifier (PID)

          • mlockall boolean Required

            Indicates if the process address space has been successfully locked in memory

        • roles array[string] Required

          Values are master, data, data_cold, data_content, data_frozen, data_hot, data_warm, client, ingest, ml, voting_only, transform, remote_cluster_client, or coordinating_only.

        • settings object
        • thread_pool object
          Hide thread_pool attribute Show thread_pool attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • core number
            • max number
            • queue_size number Required
            • size number
            • type string Required
        • total_indexing_buffer number

          Total heap allowed to be used to hold recently indexed documents before they must be written to disk. This size is a shared pool across all shards on this node, and is controlled by Indexing Buffer settings.

        • total_indexing_buffer_in_bytes number | string

          Same as total_indexing_buffer, but expressed in bytes.

        • transport object
          Hide transport attributes Show transport attributes object
          • bound_address array[string] Required
          • publish_address string Required
          • profiles object Required
        • transport_address string Required

          Host and port where transport HTTP connections are accepted.

        • transport_version number Required
        • version string Required

          Elasticsearch version running on this node.

        • modules array[object]
          Hide modules attributes Show modules attributes object
          • classname string Required
          • description string Required
          • elasticsearch_version
          • extended_plugins array[string] Required
          • has_native_controller boolean Required
          • java_version
          • name
          • version
          • licensed boolean Required
        • ingest object
          Hide ingest attribute Show ingest attribute object
          • processors array[object] Required
        • aggregations object
          Hide aggregations attribute Show aggregations attribute object
          • * object Additional properties
            Hide * attribute Show * attribute object
            • types array[string] Required
        • remote_cluster_server object
          Hide remote_cluster_server attribute Show remote_cluster_server attribute object
          • bound_address array[string] Required
GET /_nodes/{node_id}/{metric}
GET _nodes/_all/jvm
resp = client.nodes.info(
    node_id="_all",
    metric="jvm",
)
const response = await client.nodes.info({
  node_id: "_all",
  metric: "jvm",
});
response = client.nodes.info(
  node_id: "_all",
  metric: "jvm"
)
$resp = $client->nodes()->info([
    "node_id" => "_all",
    "metric" => "jvm",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_nodes/_all/jvm"
client.nodes().info(i -> i
    .metric("jvm")
    .nodeId("_all")
);
Response examples (200)
An abbreviated response when requesting cluster nodes information.
{
    "_nodes": {},
    "cluster_name": "elasticsearch",
    "nodes": {
      "USpTGYaBSIKbgSUJR2Z9lg": {
        "name": "node-0",
        "transport_address": "192.168.17:9300",
        "host": "node-0.elastic.co",
        "ip": "192.168.17",
        "version": "{version}",
        "transport_version": 100000298,
        "index_version": 100000074,
        "component_versions": {
          "ml_config_version": 100000162,
          "transform_config_version": 100000096
        },
        "build_flavor": "default",
        "build_type": "{build_type}",
        "build_hash": "587409e",
        "roles": [
          "master",
          "data",
          "ingest"
        ],
        "attributes": {},
        "plugins": [
          {
            "name": "analysis-icu",
            "version": "{version}",
            "description": "The ICU Analysis plugin integrates Lucene ICU
  module into elasticsearch, adding ICU relates analysis components.",
            "classname":
  "org.elasticsearch.plugin.analysis.icu.AnalysisICUPlugin",
            "has_native_controller": false
          }
        ],
        "modules": [
          {
            "name": "lang-painless",
            "version": "{version}",
            "description": "An easy, safe and fast scripting language for
  Elasticsearch",
            "classname": "org.elasticsearch.painless.PainlessPlugin",
            "has_native_controller": false
          }
        ]
      }
    }
}
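
As a quick usage sketch (assuming a configured Python client), the nodes object is keyed by node ID and can be walked directly; the jvm section is present here because it was requested as the metric:

resp = client.nodes.info(metric="jvm")
for node_id, node in resp["nodes"].items():
    print(node_id, node["name"], node["jvm"]["version"])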

Reload the keystore on nodes in the cluster Generally available; Added in 6.5.0

POST /_nodes/{node_id}/reload_secure_settings

All methods and paths for this operation:

POST /_nodes/reload_secure_settings

POST /_nodes/{node_id}/reload_secure_settings

Secure settings are stored in an on-disk keystore. Certain of these settings are reloadable. That is, you can change them on disk and reload them without restarting any nodes in the cluster. When you have updated reloadable secure settings in your keystore, you can use this API to reload those settings on each node.

When the Elasticsearch keystore is password protected and not simply obfuscated, you must provide the password for the keystore when you reload the secure settings. Reloading the settings for the whole cluster assumes that the keystores for all nodes are protected with the same password; this method is allowed only when inter-node communications are encrypted. Alternatively, you can reload the secure settings on each node by locally accessing the API and passing the node-specific Elasticsearch keystore password.

Path parameters

  • node_id string | array[string] Required

    The names of particular nodes in the cluster to target.

Query parameters

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

application/json

Body

  • secure_settings_password string

    The password for the Elasticsearch keystore.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object

      Contains statistics about the number of nodes selected by the request’s node filters.

      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide failures attributes Show failures attributes object
        • type string Required

          The type of error

        • reason
        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by
        • root_cause array[object]
        • suppressed array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • name string Required
        • reload_exception object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Hide reload_exception attributes Show reload_exception attributes object
          • type string Required

            The type of error

          • reason
          • stack_trace string

            The server stack trace. Present only if the error_trace=true parameter was sent with the request.

          • root_cause array[object]
          • suppressed array[object]
POST /_nodes/{node_id}/reload_secure_settings
POST _nodes/reload_secure_settings
{
  "secure_settings_password": "keystore-password"
}
resp = client.nodes.reload_secure_settings(
    secure_settings_password="keystore-password",
)
const response = await client.nodes.reloadSecureSettings({
  secure_settings_password: "keystore-password",
});
response = client.nodes.reload_secure_settings(
  body: {
    "secure_settings_password": "keystore-password"
  }
)
$resp = $client->nodes()->reloadSecureSettings([
    "body" => [
        "secure_settings_password" => "keystore-password",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"secure_settings_password":"keystore-password"}' "$ELASTICSEARCH_URL/_nodes/reload_secure_settings"
client.nodes().reloadSecureSettings(r -> r
    .secureSettingsPassword("keystore-password")
);
Request example
Run `POST _nodes/reload_secure_settings` to reload the keystore on nodes in the cluster.
{
  "secure_settings_password": "keystore-password"
}
Response examples (200)
A successful response when reloading keystore on nodes in your cluster.
{
  "_nodes": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "cluster_name": "my_cluster",
  "nodes": {
    "pQHNt5rXTTWNvUgOrdynKg": {
      "name": "node-0"
    }
  }
}
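
A minimal sketch for surfacing per-node failures with the Python client; the password is illustrative and can be omitted if the keystore is not password protected:

resp = client.nodes.reload_secure_settings(
    secure_settings_password="keystore-password",
)
for node_id, node in resp["nodes"].items():
    if "reload_exception" in node:
        print(node["name"], node["reload_exception"]["type"])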

Check in a connector Technical preview; Added in 8.12.0

PUT /_connector/{connector_id}/_check_in

Update the last_seen field in the connector and set it to the current timestamp.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be checked in

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_check_in
PUT _connector/my-connector/_check_in
resp = client.connector.check_in(
    connector_id="my-connector",
)
const response = await client.connector.checkIn({
  connector_id: "my-connector",
});
response = client.connector.check_in(
  connector_id: "my-connector"
)
$resp = $client->connector()->checkIn([
    "connector_id" => "my-connector",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/my-connector/_check_in"
client.connector().checkIn(c -> c
    .connectorId("my-connector")
);
Response examples (200)
{
    "result": "updated"
}
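
As a one-line sketch with the Python client, the result field reports how the connector document changed:

resp = client.connector.check_in(connector_id="my-connector")
print(resp["result"])  # "updated" on success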

Get follower information Generally available; Added in 6.7.0

GET /{index}/_ccr/info

Get information about all cross-cluster replication follower indices. For example, the results include follower index names, leader index names, replication options, and whether the follower indices are active or paused.

Required authorization

  • Cluster privileges: monitor
External documentation

Path parameters

  • index string | array[string] Required

    A comma-delimited list of follower index patterns.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • follower_indices array[object] Required
      Hide follower_indices attributes Show follower_indices attributes object
      • follower_index string Required

        The name of the follower index.

      • leader_index string Required

        The name of the index in the leader cluster that is followed.

      • parameters object

        An object that encapsulates cross-cluster replication parameters. If the follower index's status is paused, this object is omitted.

        Hide parameters attributes Show parameters attributes object
        • max_outstanding_read_requests number

          The maximum number of outstanding read requests from the remote cluster.

        • max_outstanding_write_requests number

          The maximum number of outstanding write requests on the follower.

        • max_read_request_operation_count number

          The maximum number of operations to pull per read from the remote cluster.

        • max_read_request_size
        • max_retry_delay string

          The maximum time to wait before retrying an operation that failed exceptionally. An exponential backoff strategy is employed when retrying.

        • max_write_buffer_count number

          The maximum number of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the number of queued operations goes below the limit.

        • max_write_buffer_size
        • max_write_request_operation_count number

          The maximum number of operations per bulk write request executed on the follower.

        • max_write_request_size
        • read_poll_timeout string

          The maximum time to wait for new operations on the remote cluster when the follower index is synchronized with the leader index. When the timeout has elapsed, the poll for operations will return to the follower so that it can update some statistics. Then the follower will immediately attempt to read from the leader again.

      • remote_cluster string Required

        The remote cluster that contains the leader index.

      • status string Required

        The status of the index following: active or paused.

        Values are active or paused.

GET /{index}/_ccr/info
GET /follower_index/_ccr/info
resp = client.ccr.follow_info(
    index="follower_index",
)
const response = await client.ccr.followInfo({
  index: "follower_index",
});
response = client.ccr.follow_info(
  index: "follower_index"
)
$resp = $client->ccr()->followInfo([
    "index" => "follower_index",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/follower_index/_ccr/info"
client.ccr().followInfo(f -> f
    .index("follower_index")
);
Response examples (200)
A successful response from `GET /follower_index/_ccr/info` when the follower index is active.
{
  "follower_indices": [
    {
      "follower_index": "follower_index",
      "remote_cluster": "remote_cluster",
      "leader_index": "leader_index",
      "status": "active",
      "parameters": {
        "max_read_request_operation_count": 5120,
        "max_read_request_size": "32mb",
        "max_outstanding_read_requests": 12,
        "max_write_request_operation_count": 5120,
        "max_write_request_size": "9223372036854775807b",
        "max_outstanding_write_requests": 9,
        "max_write_buffer_count": 2147483647,
        "max_write_buffer_size": "512mb",
        "max_retry_delay": "500ms",
        "read_poll_timeout": "1m"
      }
    }
  ]
}
A successful response from `GET /follower_index/_ccr/info` when the follower index is paused.
{
  "follower_indices": [
    {
      "follower_index": "follower_index",
      "remote_cluster": "remote_cluster",
      "leader_index": "leader_index",
      "status": "paused"
    }
  ]
}
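
A short sketch for handling both cases above with the Python client; because parameters is omitted for paused followers, the code must not assume it exists:

resp = client.ccr.follow_info(index="follower_index")
for f in resp["follower_indices"]:
    params = f.get("parameters")  # absent when status is "paused"
    print(f["follower_index"], f["status"],
          params["max_retry_delay"] if params else "-")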

Get a document by its ID Generally available

GET /{index}/_doc/{id}

Get a document and its source or stored fields from an index.

By default, this API is realtime and is not affected by the refresh rate of the index (when data will become visible for search). In the case where stored fields are requested with the stored_fields parameter and the document has been updated but is not yet refreshed, the API will have to parse and analyze the source to extract the stored fields. To turn off realtime behavior, set the realtime parameter to false.

Source filtering

By default, the API returns the contents of the _source field unless you have used the stored_fields parameter or the _source field is turned off. You can turn off _source retrieval by using the _source parameter:

GET my-index-000001/_doc/0?_source=false

If you only need one or two fields from the _source, use the _source_includes or _source_excludes parameters to include or filter out particular fields. This can be helpful with large documents where partial retrieval can save on network overhead. Both parameters take a comma-separated list of fields or wildcard expressions. For example:

GET my-index-000001/_doc/0?_source_includes=*.id&_source_excludes=entities

If you only want to specify includes, you can use a shorter notation:

GET my-index-000001/_doc/0?_source=*.id
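
The same three requests as a Python-client sketch, using the illustrative index, document ID, and field patterns from above:

doc = client.get(index="my-index-000001", id="0", source=False)
doc = client.get(index="my-index-000001", id="0",
                 source_includes="*.id", source_excludes="entities")
doc = client.get(index="my-index-000001", id="0", source="*.id")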

Routing

If routing is used during indexing, the routing value also needs to be specified to retrieve a document. For example:

GET my-index-000001/_doc/2?routing=user1

This request gets the document with ID 2, but it is routed based on the user. The document is not fetched if the correct routing is not specified.
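
The equivalent Python-client call, as a sketch:

doc = client.get(index="my-index-000001", id="2", routing="user1")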

Distributed

The GET operation is hashed into a specific shard ID. It is then redirected to one of the replicas within that shard ID and returns the result. The replicas are the primary shard and its replicas within that shard ID group. This means that the more replicas you have, the better your GET scaling will be.

Versioning support

You can use the version parameter to retrieve the document only if its current version is equal to the specified one.

Internally, Elasticsearch has marked the old document as deleted and added an entirely new document. The old version of the document doesn't disappear immediately, although you won't be able to access it. Elasticsearch cleans up deleted documents in the background as you continue to index more data.

Required authorization

  • Index privileges: read

Path parameters

  • index string Required

    The name of the index that contains the document.

  • id string Required

    A unique document identifier.

Query parameters

  • preference string

    The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.

    If it is set to _local, the operation will prefer to be run on a local allocated shard when possible. If it is set to a custom value, the value is used to guarantee that the same shards will be used for the same custom value. This can help with "jumping values" when hitting different shards in different refresh states. A sample value can be something like the web session ID or the user name.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or lists the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    A comma-separated list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. Only leaf fields can be retrieved with the stored_fields option. Object fields can't be returned; if specified, the request fails.

  • version number

    The version number for concurrency control. It must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _index string Required

      The name of the index the document belongs to.

    • fields object

      If the stored_fields parameter is set to true and found is true, it contains the document fields stored in the index.

      Hide fields attribute Show fields attribute object
      • * object Additional properties
    • _ignored array[string]
    • found boolean Required

      Indicates whether the document exists.

    • _id string Required

      The unique identifier for the document.

    • _primary_term number

      The primary term assigned to the document for the indexing operation.

    • _routing string

      The explicit routing, if set.

    • _seq_no number

      The sequence number assigned to the document for the indexing operation. Sequence numbers are used to ensure an older version of a document doesn't overwrite a newer version.

    • _source object

      If found is true, it contains the document data formatted in JSON. If the _source parameter is set to false or the stored_fields parameter is set to true, it is excluded.

    • _version number

      The document version, which is incremented each time the document is updated.

GET /{index}/_doc/{id}
GET my-index-000001/_doc/1?stored_fields=tags,counter
resp = client.get(
    index="my-index-000001",
    id="1",
    stored_fields="tags,counter",
)
const response = await client.get({
  index: "my-index-000001",
  id: 1,
  stored_fields: "tags,counter",
});
response = client.get(
  index: "my-index-000001",
  id: "1",
  stored_fields: "tags,counter"
)
$resp = $client->get([
    "index" => "my-index-000001",
    "id" => "1",
    "stored_fields" => "tags,counter",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_doc/1?stored_fields=tags,counter"
Response examples (200)
A successful response from `GET my-index-000001/_doc/0`. It retrieves the JSON document with the `_id` 0 from the `my-index-000001` index.
{
  "_index": "my-index-000001",
  "_id": "0",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "found": true,
  "_source": {
    "@timestamp": "2099-11-15T14:12:12",
    "http": {
      "request": {
        "method": "get"
      },
      "response": {
        "status_code": 200,
        "bytes": 1070000
      },
      "version": "1.1"
    },
    "source": {
      "ip": "127.0.0.1"
    },
    "message": "GET /search HTTP/1.1 200 1070000",
    "user": {
      "id": "kimchy"
    }
  }
}
A successful response from `GET my-index-000001/_doc/1?stored_fields=tags,counter`, which retrieves a set of stored fields. Field values fetched from the document itself are always returned as an array. Any requested fields that are not stored (such as the counter field in this example) are ignored.
{
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 1,
  "_seq_no" : 22,
  "_primary_term" : 1,
  "found": true,
  "fields": {
      "tags": [
        "production"
      ]
  }
}
A successful response from `GET my-index-000001/_doc/2?routing=user1&stored_fields=tags,counter`, which retrieves the `_routing` metadata field.
{
  "_index": "my-index-000001",
  "_id": "2",
  "_version": 1,
  "_seq_no" : 13,
  "_primary_term" : 1,
  "_routing": "user1",
  "found": true,
  "fields": {
      "tags": [
        "env2"
      ]
  }
}

Delete a document Generally available

DELETE /{index}/_doc/{id}

Remove a JSON document from the specified index.

NOTE: You cannot send deletion requests directly to a data stream. To delete a document in a data stream, you must target the backing index containing the document.

Optimistic concurrency control

Delete operations can be made conditional and only be performed if the last modification to the document was assigned the sequence number and primary term specified by the if_seq_no and if_primary_term parameters. If a mismatch is detected, the operation will result in a VersionConflictException and a status code of 409.
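
A hedged sketch of a conditional delete with the Python client; the sequence number and primary term are illustrative and would normally come from a previous read or write response:

client.delete(
    index="my-index-000001", id="1",
    if_seq_no=5,        # from the last known-good response
    if_primary_term=1,  # a mismatch produces a 409 response
)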

Versioning

Each document indexed is versioned. When deleting a document, the version can be specified to make sure the relevant document you are trying to delete is actually being deleted and it has not changed in the meantime. Every write operation run on a document, deletes included, causes its version to be incremented. The version number of a deleted document remains available for a short time after deletion to allow for control of concurrent operations. The length of time for which a deleted document's version remains available is determined by the index.gc_deletes index setting.

Routing

If routing is used during indexing, the routing value also needs to be specified to delete a document.

If the _routing mapping is set to required and no routing value is specified, the delete API throws a RoutingMissingException and rejects the request.

For example:

DELETE /my-index-000001/_doc/1?routing=shard-1

This request deletes the document with ID 1, but it is routed based on the routing value shard-1. The document is not deleted if the correct routing is not specified.
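
And its Python-client equivalent, sketched:

client.delete(index="my-index-000001", id="1", routing="shard-1")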

Distributed

The delete operation gets hashed into a specific shard ID. It then gets redirected into the primary shard within that ID group and replicated (if needed) to shard replicas within that ID group.

Required authorization

  • Index privileges: delete

Path parameters

  • index string Required

    The name of the target index.

  • id string Required

    A unique identifier for the document.

Query parameters

  • if_primary_term number

    Only perform the operation if the document has this primary term.

  • if_seq_no number

    Only perform the operation if the document has this sequence number.

  • refresh string

    If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes.

    Values are true, false, or wait_for.

  • routing string

    A custom value used to route operations to a specific shard.

  • timeout string

    The period to wait for active shards.

    This parameter is useful for situations where the primary shard assigned to perform the delete operation might not be available when the delete operation runs. Some reasons for this might be that the primary shard is currently recovering from a store or undergoing relocation. By default, the delete operation will wait on the primary shard to become available for up to 1 minute before failing and responding with an error.

    Values are -1 or 0.

  • version number

    An explicit version number for concurrency control. It must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

  • wait_for_active_shards number | string

    The minimum number of shard copies that must be active before proceeding with the operation. You can set it to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active.

    Values are all or index-setting.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _id string Required

      The unique identifier for the document.

    • _index string Required

      The name of the index the document was removed from.

    • _primary_term number

      The primary term assigned to the document for the indexing operation.

    • result string Required

      The result of the delete operation: deleted if the document was removed, or not_found if it did not exist.

      Values are created, updated, deleted, not_found, or noop.

    • _seq_no number

      The sequence number assigned to the document for the indexing operation. Sequence numbers are used to ensure an older version of a document doesn't overwrite a newer version.

    • _shards object Required

      Information about the replication process of the operation.

      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • _version number Required

      The document version, which is incremented each time the document is updated.

    • forced_refresh boolean
DELETE /{index}/_doc/{id}
DELETE /my-index-000001/_doc/1
resp = client.delete(
    index="my-index-000001",
    id="1",
)
const response = await client.delete({
  index: "my-index-000001",
  id: 1,
});
response = client.delete(
  index: "my-index-000001",
  id: "1"
)
$resp = $client->delete([
    "index" => "my-index-000001",
    "id" => "1",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_doc/1"
client.delete(d -> d
    .id("1")
    .index("my-index-000001")
);
Response examples (200)
A successful response from `DELETE /my-index-000001/_doc/1`, which deletes the JSON document 1 from the `my-index-000001` index.
{
  "_shards": {
    "total": 2,
    "failed": 0,
    "successful": 2
  },
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 2,
  "_primary_term": 1,
  "_seq_no": 5,
  "result": "deleted"
}




































Get term vector information Generally available

POST /{index}/_termvectors/{id}

All methods and paths for this operation:

GET /{index}/_termvectors

POST /{index}/_termvectors
GET /{index}/_termvectors/{id}
POST /{index}/_termvectors/{id}

Get information and statistics about terms in the fields of a particular document.

You can retrieve term vectors for documents stored in the index or for artificial documents passed in the body of the request. You can specify the fields you are interested in through the fields parameter or by adding the fields to the request body. For example:

GET /my-index-000001/_termvectors/1?fields=message

Fields can be specified using wildcards, similar to the multi match query.

Term vectors are real-time by default, not near real-time. This can be changed by setting the realtime parameter to false.

You can request three types of values: term information, term statistics, and field statistics. By default, all term information and field statistics are returned for all fields but term statistics are excluded.

Term information

  • term frequency in the field (always returned)
  • term positions (positions: true)
  • start and end offsets (offsets: true)
  • term payloads (payloads: true), as base64 encoded bytes

If the requested information wasn't stored in the index, it will be computed on the fly if possible. Additionally, term vectors can be computed for documents that don't exist in the index at all and are instead provided by the user.


Start and end offsets assume UTF-16 encoding is being used. If you want to use these offsets in order to get the original text that produced this token, you should make sure that the string you are taking a sub-string of is also encoded using UTF-16.
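
To make the UTF-16 caveat concrete, here is a minimal sketch in Python, whose strings are sequences of code points rather than UTF-16 code units: encode the source text as UTF-16 before slicing, so the offsets stay correct even for characters outside the Basic Multilingual Plane. The field value and offsets mirror the example response later in this section:

text = "test test test"                # the indexed field value
utf16 = text.encode("utf-16-le")       # two bytes per UTF-16 code unit
start_offset, end_offset = 5, 9        # offsets of the second "test" token
token = utf16[2 * start_offset : 2 * end_offset].decode("utf-16-le")
print(token)                           # -> "test"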

Behaviour

The term and field statistics are not accurate. Deleted documents are not taken into account. The information is only retrieved for the shard the requested document resides in. The term and field statistics are therefore only useful as relative measures whereas the absolute numbers have no meaning in this context. By default, when requesting term vectors of artificial documents, a shard to get the statistics from is randomly selected. Use routing only to hit a particular shard.
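
Because the statistics come from a single shard, two requests for the same artificial document may sample different shards and return different numbers. A hedged sketch of pinning the statistics to one shard with routing, reusing the Python client from the examples below; the routing value is hypothetical:

# client: an elasticsearch-py Elasticsearch instance, as in the examples below.
resp = client.termvectors(
    index="my-index-000001",
    doc={"text": "test test test"},  # artificial document, not stored in the index
    routing="user-1",                # routes the request to one specific shard
    term_statistics=True,
)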

Required authorization

  • Index privileges: read

Path parameters

  • index string Required

    The name of the index that contains the document.

  • id string Required

    A unique identifier for the document.

Query parameters

  • fields string | array[string]

    A comma-separated list of fields or wildcard expressions to include in the statistics. It is used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.

  • field_statistics boolean

    If true, the response includes:

    • The document count (how many documents contain this field).
    • The sum of document frequencies (the sum of document frequencies for all terms in this field).
    • The sum of total term frequencies (the sum of total term frequencies of each term in this field).
  • offsets boolean

    If true, the response includes term offsets.

  • payloads boolean

    If true, the response includes term payloads.

  • positions boolean

    If true, the response includes term positions.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • term_statistics boolean

    If true, the response includes:

    • The total term frequency (how often a term occurs in all documents).
    • The document frequency (the number of documents containing the current term).

    By default these values are not returned since term statistics can have a serious performance impact.

  • version number

    An explicit version number for concurrency control. It must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

application/json

Body

  • doc object

    An artificial document (a document not present in the index) for which you want to retrieve term vectors.

  • filter object

    Filter terms based on their tf-idf scores. This can be useful for finding a good characteristic vector of a document. This feature works in a similar manner to the second phase of the More Like This Query.

    Hide filter attributes Show filter attributes object
    • max_doc_freq number

      Ignore words which occur in more than this many docs. Defaults to unbounded.

    • max_num_terms number

      The maximum number of terms to return per field.

      Default value is 25.

    • max_term_freq number

      Ignore words with more than this frequency in the source doc. It defaults to unbounded.

    • max_word_length number

      The maximum word length above which words will be ignored.

      Default value is 0 (unbounded).

    • min_doc_freq number

      Ignore terms which do not occur in at least this many docs.

      Default value is 1.

    • min_term_freq number

      Ignore words with less than this frequency in the source doc.

      Default value is 1.

    • min_word_length number

      The minimum word length below which words will be ignored.

      Default value is 0.

  • per_field_analyzer object

    Override the default per-field analyzer. This is useful in order to generate term vectors in any fashion, especially when using artificial documents. When providing an analyzer for a field that already stores term vectors, the term vectors will be regenerated.

    Hide per_field_analyzer attribute Show per_field_analyzer attribute object
    • * string Additional properties
  • fields array[string]

    A list of fields to include in the statistics. It is used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.

  • field_statistics boolean

    If true, the response includes:

    • The document count (how many documents contain this field).
    • The sum of document frequencies (the sum of document frequencies for all terms in this field).
    • The sum of total term frequencies (the sum of total term frequencies of each term in this field).

    Default value is true.

  • offsets boolean

    If true, the response includes term offsets.

    Default value is true.

  • payloads boolean

    If true, the response includes term payloads.

    Default value is true.

  • positions boolean

    If true, the response includes term positions.

    Default value is true.

  • term_statistics boolean

    If true, the response includes:

    • The total term frequency (how often a term occurs in all documents).
    • The document frequency (the number of documents containing the current term).

    By default these values are not returned since term statistics can have a serious performance impact.

    Default value is false.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • version number

    An explicit version number for concurrency control. It must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • found boolean Required
    • _id string
    • _index string Required
    • term_vectors object
      Hide term_vectors attribute Show term_vectors attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • field_statistics object
          Hide field_statistics attributes Show field_statistics attributes object
          • doc_count number Required
          • sum_doc_freq number Required
          • sum_ttf number Required
        • terms object Required
          Hide terms attribute Show terms attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • doc_freq number
            • score number
            • term_freq number Required
            • tokens array[object]
            • ttf number
    • took number Required
    • _version number Required
POST /{index}/_termvectors/{id}
GET /my-index-000001/_termvectors/1
{
  "fields" : ["text"],
  "offsets" : true,
  "payloads" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
resp = client.termvectors(
    index="my-index-000001",
    id="1",
    fields=[
        "text"
    ],
    offsets=True,
    payloads=True,
    positions=True,
    term_statistics=True,
    field_statistics=True,
)
const response = await client.termvectors({
  index: "my-index-000001",
  id: 1,
  fields: ["text"],
  offsets: true,
  payloads: true,
  positions: true,
  term_statistics: true,
  field_statistics: true,
});
response = client.termvectors(
  index: "my-index-000001",
  id: "1",
  body: {
    "fields": [
      "text"
    ],
    "offsets": true,
    "payloads": true,
    "positions": true,
    "term_statistics": true,
    "field_statistics": true
  }
)
$resp = $client->termvectors([
    "index" => "my-index-000001",
    "id" => "1",
    "body" => [
        "fields" => array(
            "text",
        ),
        "offsets" => true,
        "payloads" => true,
        "positions" => true,
        "term_statistics" => true,
        "field_statistics" => true,
    ],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"fields":["text"],"offsets":true,"payloads":true,"positions":true,"term_statistics":true,"field_statistics":true}' "$ELASTICSEARCH_URL/my-index-000001/_termvectors/1"
client.termvectors(t -> t
    .fieldStatistics(true)
    .fields("text")
    .id("1")
    .index("my-index-000001")
    .offsets(true)
    .payloads(true)
    .positions(true)
    .termStatistics(true)
);
Request examples
Run `GET /my-index-000001/_termvectors/1` to return all information and statistics for field `text` in document 1.
{
  "fields" : ["text"],
  "offsets" : true,
  "payloads" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
Run `GET /my-index-000001/_termvectors/1` to set per-field analyzers. A different analyzer than the one configured for the field can be provided by using the `per_field_analyzer` parameter.
{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "test test test"
  },
  "fields": ["fullname"],
  "per_field_analyzer" : {
    "fullname": "keyword"
  }
}
Run `GET /imdb/_termvectors` to filter the terms returned based on their tf-idf scores. It returns the three most "interesting" keywords from the artificial document having the given "plot" field value. Notice that the keyword "Tony" or any stop words are not part of the response, as their tf-idf must be too low.
{
  "doc": {
    "plot": "When wealthy industrialist Tony Stark is forced to build an armored suit after a life-threatening incident, he ultimately decides to use its technology to fight against evil."
  },
  "term_statistics": true,
  "field_statistics": true,
  "positions": false,
  "offsets": false,
  "filter": {
    "max_num_terms": 3,
    "min_term_freq": 1,
    "min_doc_freq": 1
  }
}
Run `GET /my-index-000001/_termvectors/1`. Term vectors which are not explicitly stored in the index are automatically computed on the fly. This request returns all information and statistics for the fields in document 1, even though the terms haven't been explicitly stored in the index. Note that for the field text, the terms are not regenerated.
{
  "fields" : ["text", "some_field_without_term_vectors"],
  "offsets" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
Run `GET /my-index-000001/_termvectors`. Term vectors can be generated for artificial documents, that is, for documents not present in the index. If dynamic mapping is turned on (default), the document fields not in the original mapping will be dynamically created.
{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "test test test"
  }
}
Response examples (200)
A successful response from `GET /my-index-000001/_termvectors/1`.
{
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 1,
  "found": true,
  "took": 6,
  "term_vectors": {
    "text": {
      "field_statistics": {
        "sum_doc_freq": 4,
        "doc_count": 2,
        "sum_ttf": 6
      },
      "terms": {
        "test": {
          "doc_freq": 2,
          "ttf": 4,
          "term_freq": 3,
          "tokens": [
            {
              "position": 0,
              "start_offset": 0,
              "end_offset": 4,
              "payload": "d29yZA=="
            },
            {
              "position": 1,
              "start_offset": 5,
              "end_offset": 9,
              "payload": "d29yZA=="
            },
            {
              "position": 2,
              "start_offset": 10,
              "end_offset": 14,
              "payload": "d29yZA=="
            }
          ]
        }
      }
    }
  }
}
A successful response from `GET /my-index-000001/_termvectors` with `per_field_analyzer` in the request body.
{
  "_index": "my-index-000001",
  "_version": 0,
  "found": true,
  "took": 6,
  "term_vectors": {
    "fullname": {
      "field_statistics": {
          "sum_doc_freq": 2,
          "doc_count": 4,
          "sum_ttf": 4
      },
      "terms": {
          "John Doe": {
            "term_freq": 1,
            "tokens": [
                {
                  "position": 0,
                  "start_offset": 0,
                  "end_offset": 8
                }
            ]
          }
      }
    }
  }
}
A successful response from `GET /my-index-000001/_termvectors` with a `filter` in the request body.
{
  "_index": "imdb",
  "_version": 0,
  "found": true,
  "term_vectors": {
      "plot": {
        "field_statistics": {
            "sum_doc_freq": 3384269,
            "doc_count": 176214,
            "sum_ttf": 3753460
        },
        "terms": {
            "armored": {
              "doc_freq": 27,
              "ttf": 27,
              "term_freq": 1,
              "score": 9.74725
            },
            "industrialist": {
              "doc_freq": 88,
              "ttf": 88,
              "term_freq": 1,
              "score": 8.590818
            },
            "stark": {
              "doc_freq": 44,
              "ttf": 47,
              "term_freq": 1,
              "score": 9.272792
            }
        }
      }
  }
}





















Delete an enrich policy Generally available; Added in 7.5.0

DELETE /_enrich/policy/{name}

Deletes an existing enrich policy and its enrich index.

Path parameters

  • name string Required

    Enrich policy to delete.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_enrich/policy/{name}
DELETE /_enrich/policy/my-policy
resp = client.enrich.delete_policy(
    name="my-policy",
)
const response = await client.enrich.deletePolicy({
  name: "my-policy",
});
response = client.enrich.delete_policy(
  name: "my-policy"
)
$resp = $client->enrich()->deletePolicy([
    "name" => "my-policy",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_enrich/policy/my-policy"
client.enrich().deletePolicy(d -> d
    .name("my-policy")
);






































Stop async ES|QL query Generally available; Added in 8.18.0

POST /_query/async/{id}/stop

This API interrupts the query execution and returns the results so far. If the Elasticsearch security features are enabled, only the user who first submitted the ES|QL query can stop it.

External documentation

Path parameters

  • id string Required

    The unique identifier of the query. A query ID is provided in the ES|QL async query API response for a query that does not complete in the designated time. A query ID is also provided when the request was submitted with the keep_on_completion parameter set to true.
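
For context, a query ID usually comes from submitting an async ES|QL query first. A hedged sketch of the full flow with the Python client, assuming an elasticsearch-py version that exposes these esql methods and parameters:

# client: an elasticsearch-py Elasticsearch instance; the index name is hypothetical.
# Submit an async query and return immediately with an ID; keep the result
# so the ID stays valid even if the query finishes quickly.
submit = client.esql.async_query(
    query="FROM my-index-000001 | STATS count = COUNT(*)",
    keep_on_completion=True,
    wait_for_completion_timeout="0s",
)

# Interrupt the query and collect whatever results are available so far.
resp = client.esql.async_query_stop(id=submit["id"])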

Query parameters

  • drop_null_columns boolean

    Indicates whether columns that are entirely null will be removed from the columns and values portion of the results. If true, the response will include an extra section under the name all_columns which has the name of all the columns.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number

      The time, in milliseconds, it took to process the request.

    • is_partial boolean
    • all_columns array[object]
      Hide all_columns attributes Show all_columns attributes object
      • name string Required
      • type string Required
    • columns array[object] Required
      Hide columns attributes Show columns attributes object
      • name string Required
      • type string Required
    • values array[array] Required
    • _clusters object

      Cross-cluster search information. Present if include_ccs_metadata was true in the request and a cross-cluster search was performed.

      Hide _clusters attributes Show _clusters attributes object
      • total number Required
      • successful number Required
      • running number Required
      • skipped number Required
      • partial number Required
      • failed number Required
      • details object Required
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • status string Required

            Values are running, successful, partial, skipped, or failed.

          • indices string Required
          • _shards object
    • profile object

      Profiling information. Present if profile was true in the request. The contents of this field are currently unstable.

POST /_query/async/{id}/stop
POST /_query/async/FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=/stop
resp = client.esql.async_query_stop(
    id="FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=",
)
const response = await client.esql.asyncQueryStop({
  id: "FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=",
});
response = client.esql.async_query_stop(
  id: "FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM="
)
$resp = $client->esql()->asyncQueryStop([
    "id" => "FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_query/async/FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=/stop"




Features

The feature APIs enable you to introspect and manage features provided by Elasticsearch and Elasticsearch plugins.





Reset the features Technical preview; Added in 7.12.0

POST /_features/_reset

Clear all of the state information stored in system indices by Elasticsearch features, including the security and machine learning indices.

WARNING: Intended for development and testing use only. Do not reset features on a production cluster.

Return a cluster to the same state as a new installation by resetting the feature state for all Elasticsearch features. This deletes all state information stored in system indices.

The response code is HTTP 200 if the state is successfully reset for all features. It is HTTP 500 if the reset operation failed for any feature.

Note that select features might provide a way to reset particular system indices. Using this API resets all features, both those that are built in and those implemented as plugins.

To list the features that will be affected, use the get features API.

IMPORTANT: The features installed on the node you submit this request to are the features that will be reset. Run on the master node if you have any doubts about which plugins are installed on individual nodes.
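
A hedged sketch of that workflow with the Python client, for development or test clusters only: list the feature states first, then reset them.

# client: an elasticsearch-py Elasticsearch instance.
# Inspect which features have state information that would be cleared.
features = client.features.get_features()
for feature in features["features"]:
    print(feature["name"], "-", feature["description"])

# Development and testing only: reset the state of every feature.
resp = client.features.reset_features()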

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • features array[object] Required
      Hide features attributes Show features attributes object
      • name string Required
      • description string Required
POST /_features/_reset
POST /_features/_reset
resp = client.features.reset_features()
const response = await client.features.resetFeatures();
response = client.features.reset_features
$resp = $client->features()->resetFeatures();
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_features/_reset"
client.features().resetFeatures(r -> r);
Response examples (200)
A successful response for clearing state information stored in system indices by Elasticsearch features.
{
  "features" : [
    {
      "feature_name" : "security",
      "status" : "SUCCESS"
    },
    {
      "feature_name" : "tasks",
      "status" : "SUCCESS"
    }
  ]
}









Run a Fleet search Technical preview; Added in 7.16.0

POST /{index}/_fleet/_fleet_search

All methods and paths for this operation:

GET /{index}/_fleet/_fleet_search

POST /{index}/_fleet/_fleet_search

The purpose of the Fleet search API is to provide a search API where the search will only be executed after the provided checkpoint has been processed and is visible for searches inside of Elasticsearch.

Required authorization

  • Index privileges: read

Path parameters

  • index string Required

    A single target to search. If the target is an index alias, it must resolve to a single index.

Query parameters

  • allow_no_indices boolean
  • analyzer string
  • analyze_wildcard boolean
  • batched_reduce_size number
  • ccs_minimize_roundtrips boolean
  • default_operator string

    Values are and, AND, or, or OR.

  • df string
  • docvalue_fields string | array[string]
  • expand_wildcards string | array[string]

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • explain boolean
  • ignore_throttled boolean
  • ignore_unavailable boolean
  • lenient boolean
  • max_concurrent_shard_requests number
  • min_compatible_shard_node string
  • preference string
  • pre_filter_shard_size number
  • request_cache boolean
  • routing string
  • scroll string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    Values are -1 or 0.

  • search_type string

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • stats array[string]
  • stored_fields string | array[string]
  • suggest_field string

    Specifies which field to use for suggestions.

  • suggest_mode string

    Supported values include:

    • missing: Only generate suggestions for terms that are not in the shard.
    • popular: Only suggest terms that occur in more docs on the shard than the original term.
    • always: Suggest any matching suggestions based on terms in the suggest text.

    Values are missing, popular, or always.

  • suggest_size number
  • suggest_text string

    The source text for which the suggestions should be returned.

  • terminate_after number
  • timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    Values are -1 or 0.

  • track_total_hits boolean | number

    Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. Defaults to 10,000 hits.

  • track_scores boolean
  • typed_keys boolean
  • rest_total_hits_as_int boolean
  • version boolean
  • _source boolean | string | array[string]

    Defines how to fetch a source. Fetching can be disabled entirely, or the source can be filtered. Used as a query parameter along with the _source_includes and _source_excludes parameters.

  • _source_excludes string | array[string]
  • _source_includes string | array[string]
  • seq_no_primary_term boolean
  • q string
  • size number
  • from number
  • sort string | array[string]
  • wait_for_checkpoints array[number]

    A comma-separated list of checkpoints. When configured, the search API will only be executed on a shard after the relevant checkpoint has become visible for search. Defaults to an empty list, which causes Elasticsearch to execute the search immediately (see the sketch after this parameter list).

  • allow_partial_search_results boolean

    If true, returns partial results if there are shard request timeouts or shard failures. If false, returns an error with no partial results. Defaults to the configured cluster setting search.default_allow_partial_results which is true by default.
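
Putting the checkpoint mechanics together, a hedged sketch with the Python client: read the current global checkpoints for an index, then issue a Fleet search that waits for them. The index name is hypothetical, and the snippet assumes an elasticsearch-py version that exposes the fleet namespace:

# client: an elasticsearch-py Elasticsearch instance.
# Fetch the current global checkpoints (one per shard) for the index.
checkpoints_resp = client.fleet.global_checkpoints(index="my-index-000001")
checkpoints = checkpoints_resp["global_checkpoints"]

# The search runs on each shard only once its checkpoint is visible for search.
resp = client.fleet.search(
    index="my-index-000001",
    wait_for_checkpoints=checkpoints,
    query={"match_all": {}},
)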

application/json

Body

  • aggregations object
  • collapse object
    External documentation
  • explain boolean

    If true, returns detailed information about score computation as part of a hit.

    Default value is false.

  • ext object

    Configuration of search extensions defined by Elasticsearch plugins.

    Hide ext attribute Show ext attribute object
    • * object Additional properties
  • from number

    Starting document offset. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.

    Default value is 0.

  • highlight object
    Hide highlight attributes Show highlight attributes object
    • type string

      Supported values include:

      • plain: The plain highlighter uses the standard Lucene highlighter.
      • fvh: The fvh highlighter uses the Lucene Fast Vector highlighter.
      • unified: The unified highlighter uses the Lucene Unified Highlighter.

      Values are plain, fvh, or unified.

    • boundary_chars string

      A string that contains each boundary character.

      Default value is .,!? \t\n.

    • boundary_max_scan number

      How far to scan for boundary characters.

      Default value is 20.

    • boundary_scanner string

      Specifies how to break the highlighted fragments: chars, sentence, or word. Only valid for the unified and fvh highlighters. Defaults to sentence for the unified highlighter. Defaults to chars for the fvh highlighter.

      Supported values include:

      • chars: Use the characters specified by boundary_chars as highlighting boundaries. The boundary_max_scan setting controls how far to scan for boundary characters. Only valid for the fvh highlighter.
      • sentence: Break highlighted fragments at the next sentence boundary, as determined by Java’s BreakIterator. You can specify the locale to use with boundary_scanner_locale. When used with the unified highlighter, the sentence scanner splits sentences bigger than fragment_size at the first word boundary next to fragment_size. You can set fragment_size to 0 to never split any sentence.
      • word: Break highlighted fragments at the next word boundary, as determined by Java’s BreakIterator. You can specify the locale to use with boundary_scanner_locale.

      Values are chars, sentence, or word.

    • boundary_scanner_locale string

      Controls which locale is used to search for sentence and word boundaries. This parameter takes a form of a language tag, for example: "en-US", "fr-FR", "ja-JP".

      Default value is Locale.ROOT.

    • force_source boolean Deprecated
    • fragmenter string

      Specifies how text should be broken up in highlight snippets: simple or span. Only valid for the plain highlighter.

      Values are simple or span.

    • fragment_size number

      The size of the highlighted fragment in characters.

      Default value is 100.

    • highlight_filter boolean
    • highlight_query object

      Highlight matches for a query other than the search query. This is especially useful if you use a rescore query because those are not taken into account by highlighting by default.

      External documentation
    • max_fragment_length number
    • max_analyzed_offset number

      If set to a non-negative value, highlighting stops at this defined maximum limit. The rest of the text is not processed, thus not highlighted, and no error is returned. The max_analyzed_offset query setting does not override the index.highlight.max_analyzed_offset setting, which prevails when it is set to a lower value than the query setting.

    • no_match_size number

      The amount of text you want to return from the beginning of the field if there are no matching fragments to highlight.

      Default value is 0.

    • number_of_fragments number

      The maximum number of fragments to return. If the number of fragments is set to 0, no fragments are returned. Instead, the entire field contents are highlighted and returned. This can be handy when you need to highlight short texts such as a title or address, but fragmentation is not required. If number_of_fragments is 0, fragment_size is ignored.

      Default value is 5.

    • options object
      Hide options attribute Show options attribute object
      • * object Additional properties
    • order string

      Sorts highlighted fragments by score when set to score. By default, fragments will be output in the order they appear in the field (order: none). Setting this option to score will output the most relevant fragments first. Each highlighter applies its own logic to compute relevancy scores.

      Value is score.

    • phrase_limit number

      Controls the number of matching phrases in a document that are considered. Prevents the fvh highlighter from analyzing too many phrases and consuming too much memory. When using matched_fields, phrase_limit phrases per matched field are considered. Raising the limit increases query time and consumes more memory. Only supported by the fvh highlighter.

      Default value is 256.

    • post_tags array[string]

      Use in conjunction with pre_tags to define the HTML tags to use for the highlighted text. By default, highlighted text is wrapped in <em> and </em> tags.

    • pre_tags array[string]

      Use in conjunction with post_tags to define the HTML tags to use for the highlighted text. By default, highlighted text is wrapped in <em> and </em> tags.

    • require_field_match boolean

      By default, only fields that contain a query match are highlighted. Set to false to highlight all fields.

      Default value is true.

    • tags_schema string

      Set to styled to use the built-in tag schema.

      Value is styled.

    • encoder string

      Values are default or html.

    • fields object Required
  • track_total_hits boolean | number

    Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. Defaults to 10,000 hits.

  • indices_boost array[object]

    Boosts the _score of documents from specified indices.

    Hide indices_boost attribute Show indices_boost attribute object
    • * number Additional properties
  • docvalue_fields array[object]

    Array of wildcard (*) patterns. The request returns doc values for field names matching these patterns in the hits.fields property of the response.

    A reference to a field with formatting instructions on how to return the value

    Hide docvalue_fields attributes Show docvalue_fields attributes object
    • field string Required

      A wildcard pattern. The request returns values for field names matching this pattern.

    • format string

      The format in which the values are returned.

    • include_unmapped boolean
  • min_score number

    Minimum _score for matching documents. Documents with a lower _score are not included in search results and results collected by aggregations.

  • post_filter object

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    External documentation
  • profile boolean
  • query object

    Defines the search definition using the Query DSL.

    External documentation
  • rescore object | array[object]

    One of:
    Hide attributes Show attributes
    • window_size number
    • query object
      Hide query attributes Show query attributes object
      • rescore_query object Required

        The query to use for rescoring. This query is only run on the Top-K results returned by the query and post_filter phases.

      • query_weight number

        Relative importance of the original query versus the rescore query.

        Default value is 1.

      • rescore_query_weight number

        Relative importance of the rescore query versus the original query.

        Default value is 1.

      • score_mode string

        Determines how scores are combined.

        Supported values include:

        • avg: Average the original score and the rescore query score.
        • max: Take the max of original score and the rescore query score.
        • min: Take the min of the original score and the rescore query score.
        • multiply: Multiply the original score by the rescore query score. Useful for function query rescores.
        • total: Add the original score and the rescore query score.

        Values are avg, max, min, multiply, or total.

    • learning_to_rank object
      Hide learning_to_rank attributes Show learning_to_rank attributes object
      • model_id string Required

        The unique identifier of the trained model uploaded to Elasticsearch.

      • params object

        Named parameters to be passed to the query templates used for feature extraction.

        Hide params attribute Show params attribute object
        • * object Additional properties
  • script_fields object

    Retrieve a script evaluation (based on different fields) for each hit.

    Hide script_fields attribute Show script_fields attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • script object Required
        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang string

          Specifies the language the script is written in.

          Supported values include:

          • painless: Painless scripting language, purpose-built for Elasticsearch.
          • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
          • mustache: Mustache templating language, used for templates.
          • java: Expert Java API.

          Values are painless, expression, mustache, or java.

        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • ignore_failure boolean
  • search_after array[number | string | boolean | null | object]

    A field value.

  • size number

    The number of hits to return. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.

    Default value is 10.

  • slice object
    Hide slice attributes Show slice attributes object
    • field string

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • id string Required
    • max number Required
  • sort string | object | array[string | object]

    One of:

    Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

  • _source boolean | object

    Indicates which source fields are returned for matching documents. These fields are returned in the hits._source property of the search response.

  • fields array[object]

    Array of wildcard (*) patterns. The request returns values for field names matching these patterns in the hits.fields property of the response.

    A reference to a field with formatting instructions on how to return the value

    Hide fields attributes Show fields attributes object
    • field string Required

      A wildcard pattern. The request returns values for field names matching this pattern.

    • format string

      The format in which the values are returned.

    • include_unmapped boolean
  • suggest object
    Hide suggest attribute Show suggest attribute object
    • text string

      Global suggest text, to avoid repetition when the same text is used in several suggesters.

  • terminate_after number

    Maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. Defaults to 0, which does not terminate query execution early.

    Default value is 0.

  • timeout string

    Specifies the period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.

  • track_scores boolean

    If true, calculate and return document scores, even if the scores are not used for sorting.

    Default value is false.

  • version boolean

    If true, returns document version as part of a hit.

    Default value is false.

  • seq_no_primary_term boolean

    If true, returns sequence number and primary term of the last modification of each hit. See Optimistic concurrency control.

  • stored_fields string | array[string]

    List of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. You can pass _source: true to return both source fields and stored fields in the search response.

  • pit object

    Limits the search to a point in time (PIT). If you provide a PIT, you cannot specify an index in the request path.

    Hide pit attributes Show pit attributes object
    • id string Required
    • keep_alive string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • runtime_mappings object

    Defines one or more runtime fields in the search request. These fields take precedence over mapped fields with the same name (see the sketch after this list of body properties).

    Hide runtime_mappings attribute Show runtime_mappings attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • fields object

        For type composite

        Hide fields attribute Show fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        Hide fetch_fields attributes Show fetch_fields attributes object
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • input_field string

        For type lookup

      • target_field string

        For type lookup

      • target_index string

        For type lookup

      • script object

        Painless script executed at query time.

        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang string

          Specifies the language the script is written in.

          Values are painless, expression, mustache, or java.

        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • type string Required

        Field type, which can be boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

  • stats array[string]

    Stats groups to associate with the search. Each group maintains a statistics aggregation for its associated searches. You can retrieve these stats using the indices stats API.
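
As a concrete illustration of runtime_mappings, the following hedged sketch defines a runtime field with a Painless script and returns it through the fields option. The index and field names are hypothetical, and it assumes the Python client's fleet.search accepts the body properties listed above:

# client: an elasticsearch-py Elasticsearch instance.
resp = client.fleet.search(
    index="my-index-000001",
    runtime_mappings={
        "duration_s": {
            "type": "double",  # runtime field computed at query time
            "script": {"source": "emit(doc['duration_ms'].value / 1000.0)"},
        },
    },
    fields=[{"field": "duration_s"}],
    query={"match_all": {}},
)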

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number Required
    • timed_out boolean Required
    • _shards object Required
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • hits object Required
      Hide hits attributes Show hits attributes object
      • total object | number

        Total hit count information, present only if track_total_hits wasn't false in the search request.

        One of:
        Hide attributes Show attributes
        • relation string Required

          Supported values include:

          • eq: Accurate
          • gte: Lower bound, including returned events or sequences

          Values are eq or gte.

        • value number Required
      • hits array[object] Required
        Hide hits attributes Show hits attributes object
        • _index string Required
        • _id string
        • _score number | string | null

        • _explanation object
        • fields object
          Hide fields attribute Show fields attribute object
          • * object Additional properties
        • highlight object
          Hide highlight attribute Show highlight attribute object
          • * array[string] Additional properties
        • inner_hits object
          Hide inner_hits attribute Show inner_hits attribute object
          • * object Additional properties
        • matched_queries array[string] | object

        • _nested object
        • _ignored array[string]
        • ignored_field_values object
          Hide ignored_field_values attribute Show ignored_field_values attribute object
          • * array[number | string | boolean | null | object] Additional properties
        • _shard string
        • _node string
        • _routing string
        • _source object
        • _rank number
        • _seq_no number
        • _primary_term number
        • _version number
        • sort array[number | string | boolean | null | object]
      • max_score number | string | null

    • aggregations object
    • _clusters object
      Hide _clusters attributes Show _clusters attributes object
      • skipped number Required
      • successful number Required
      • total number Required
      • running number Required
      • partial number Required
      • failed number Required
      • details object
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • status string Required

            Values are running, successful, partial, skipped, or failed.

          • indices string Required
          • timed_out boolean Required
          • _shards object
          • failures array[object]
    • fields object
      Hide fields attribute Show fields attribute object
      • * object Additional properties
    • max_score number
    • num_reduce_phases number
    • profile object
      Hide profile attribute Show profile attribute object
      • shards array[object] Required
        Hide shards attributes Show shards attributes object
        • aggregations array[object] Required
        • cluster string Required
        • dfs object
        • fetch object
        • id string Required
        • index string Required
        • node_id string Required
        • searches array[object] Required
        • shard_id number Required
    • pit_id string
    • _scroll_id string
    • suggest object
      Hide suggest attribute Show suggest attribute object
      • * array[object] Additional properties
        One of:
        Hide attributes Show attributes
        • length number Required
        • offset number Required
        • text string Required
        • options
    • terminated_early boolean
POST /{index}/_fleet/_fleet_search
curl \
 --request POST 'http://api.example.com/{index}/_fleet/_fleet_search' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"aggregations":{},"collapse":{},"explain":false,"ext":{"additionalProperty1":{},"additionalProperty2":{}},"from":0,"highlight":{"type":"plain","boundary_chars":".,!? \\t\\n","boundary_max_scan":20,"boundary_scanner":"chars","boundary_scanner_locale":"Locale.ROOT","force_source":true,"fragmenter":"simple","fragment_size":100,"highlight_filter":true,"highlight_query":{},"max_fragment_length":42.0,"max_analyzed_offset":42.0,"no_match_size":0,"number_of_fragments":5,"options":{"additionalProperty1":{},"additionalProperty2":{}},"order":"score","phrase_limit":256,"post_tags":["string"],"pre_tags":["string"],"require_field_match":true,"tags_schema":"styled","encoder":"default","fields":{}},"track_total_hits":true,"indices_boost":[{"additionalProperty1":42.0,"additionalProperty2":42.0}],"docvalue_fields":[{"field":"string","format":"string","include_unmapped":true}],"min_score":42.0,"post_filter":{},"profile":true,"query":{},"rescore":{"window_size":42.0,"query":{"rescore_query":{},"query_weight":1,"rescore_query_weight":1,"score_mode":"avg"},"learning_to_rank":{"model_id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}}}},"script_fields":{"additionalProperty1":{"script":{"source":"string","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"lang":"painless","options":{"additionalProperty1":"string","additionalProperty2":"string"}},"ignore_failure":true},"additionalProperty2":{"script":{"source":"string","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"lang":"painless","options":{"additionalProperty1":"string","additionalProperty2":"string"}},"ignore_failure":true}},"search_after":[42.0],"size":10,"slice":{"field":"string","id":"string","max":42.0},"sort":"string","_source":true,"fields":[{"field":"string","format":"string","include_unmapped":true}],"suggest":{"text":"string"},"terminate_after":0,"timeout":"string","track_scores":false,"version":false,"seq_no_primary_term":true,"stored_fields":"string","pit":{"id":"string","keep_alive":"string"},"runtime_mappings":{"additionalProperty1":{"fields":{"additionalProperty1":{"type":"boolean"},"additionalProperty2":{"type":"boolean"}},"fetch_fields":[{"field":"string","format":"string"}],"format":"string","input_field":"string","target_field":"string","target_index":"string","script":{"source":"string","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"type":"boolean"},"additionalProperty2":{"fields":{"additionalProperty1":{"type":"boolean"},"additionalProperty2":{"type":"boolean"}},"fetch_fields":[{"field":"string","format":"string"}],"format":"string","input_field":"string","target_field":"string","target_index":"string","script":{"source":"string","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"type":"boolean"}},"stats":["string"]}'


































Add an index block Generally available; Added in 7.9.0

PUT /{index}/_block/{block}

Add an index block to an index. Index blocks limit the operations allowed on an index by blocking specific operation types.

Path parameters

  • index string Required

    A comma-separated list or wildcard expression of index names used to limit the request. By default, you must explicitly name the indices you are adding blocks to. To allow the adding of blocks to indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. You can update this setting in the elasticsearch.yml file or by using the cluster update settings API.

  • block string

    The block type to add to the index.

    Supported values include:

    • metadata: Disable metadata changes, such as closing the index.
    • read: Disable read operations.
    • read_only: Disable write operations and metadata changes.
    • write: Disable write operations. However, metadata changes are still allowed.

    Values are metadata, read, read_only, or write.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    Values are -1 or 0.

  • timeout string

    The period to wait for a response from all relevant nodes in the cluster after updating the cluster metadata. If no response is received before the timeout expires, the cluster metadata update still applies but the response will indicate that it was not completely acknowledged. It can also be set to -1 to indicate that the request should never time out.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • shards_acknowledged boolean Required
    • indices array[object] Required
      Hide indices attributes Show indices attributes object
      • name string Required
      • blocked boolean Required
PUT /{index}/_block/{block}
PUT /my-index-000001/_block/write
resp = client.indices.add_block(
    index="my-index-000001",
    block="write",
)
const response = await client.indices.addBlock({
  index: "my-index-000001",
  block: "write",
});
response = client.indices.add_block(
  index: "my-index-000001",
  block: "write"
)
$resp = $client->indices()->addBlock([
    "index" => "my-index-000001",
    "block" => "write",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_block/write"
client.indices().addBlock(a -> a
    .block(IndicesBlockOptions.Write)
    .index("my-index-000001")
);
Response examples (200)
A successful response from `PUT /my-index-000001/_block/write`, which adds an index block to an index.
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "indices" : [ {
    "name" : "my-index-000001",
    "blocked" : true
  } ]
}
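
Note that the blocks added by this API are stored as index.blocks.* index settings, so one way to lift the write block added above is to reset the corresponding setting. A minimal sketch, assuming the same index and cluster as before:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local cluster

# The write block is stored as the index.blocks.write setting;
# resetting it to false lifts the block again.
client.indices.put_settings(
    index="my-index-000001",
    settings={"index.blocks.write": False},
)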

Clone an index Generally available; Added in 7.4.0

POST /{index}/_clone/{target}

All methods and paths for this operation:

PUT /{index}/_clone/{target}

POST /{index}/_clone/{target}

Clone an existing index into a new index. Each original primary shard is cloned into a new primary shard in the new index.

IMPORTANT: Elasticsearch does not apply index templates to the resulting index. The API also does not copy index metadata from the original index. Index metadata includes aliases, index lifecycle management phase definitions, and cross-cluster replication (CCR) follower information. For example, if you clone a CCR follower index, the resulting clone will not be a follower index.

The clone API copies most index settings from the source index to the resulting index, with the exception of index.number_of_replicas and index.auto_expand_replicas. To set the number of replicas in the resulting index, configure these settings in the clone request.

Cloning works as follows:

  • First, it creates a new target index with the same definition as the source index.
  • Then it hard-links segments from the source index into the target index. If the file system does not support hard-linking, all segments are copied into the new index, which is a much more time-consuming process.
  • Finally, it recovers the target index as though it were a closed index which had just been re-opened.

IMPORTANT: Indices can only be cloned if they meet the following requirements:

  • The index must be marked as read-only and have a cluster health status of green (a preparation sketch follows this list).
  • The target index must not exist.
  • The source index must have the same number of primary shards as the target index.
  • The node handling the clone process must have sufficient free disk space to accommodate a second copy of the existing index.
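
A minimal preparation-and-clone sketch with the Python client, assuming a local cluster (index names are illustrative):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Make the source index read-only, as the requirements above demand.
client.indices.add_block(index="my_source_index", block="write")

# Clone it. The target must not exist yet; it is created with the same
# number of primary shards as the source.
client.indices.clone(index="my_source_index", target="my_target_index")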

The current write index on a data stream cannot be cloned. To clone the current write index, first roll the data stream over so that a new write index is created; the previous write index can then be cloned.

NOTE: Mappings cannot be specified in the _clone request. The mappings of the source index will be used for the target index.

Monitor the cloning process

The cloning process can be monitored with the cat recovery API. Alternatively, the cluster health API can be used to wait until all primary shards have been allocated by setting the wait_for_status parameter to yellow.

The _clone API returns as soon as the target index has been added to the cluster state, before any shards have been allocated. At this point, all shards are in the unassigned state. If, for any reason, the target index can't be allocated, its primary shard will remain unassigned until it can be allocated.

Once the primary shard is allocated, it moves to the initializing state and the clone process begins. When the clone operation completes, the shard becomes active. At that point, Elasticsearch will try to allocate any replicas and may decide to relocate the primary shard to another node.
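
For example, waiting for at least yellow status with the cluster health API might look as follows (the timeout value is illustrative):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Wait until all primary shards of the clone target are allocated,
# i.e. the index reaches at least yellow health.
client.cluster.health(
    index="my_target_index",
    wait_for_status="yellow",
    timeout="60s",
)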

Wait for active shards

Because the clone operation creates a new index to clone the shards to, the wait for active shards setting on index creation applies to the clone index action as well.
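
A sketch of setting it on the clone request itself, via the wait_for_active_shards query parameter documented below:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Block until every shard copy (primaries and replicas) of the target
# index is active before the call returns.
client.indices.clone(
    index="my_source_index",
    target="my_target_index",
    wait_for_active_shards="all",
)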

Required authorization

  • Index privileges: manage

Path parameters

  • index string Required

    Name of the source index to clone.

  • target string Required

    Name of the target index to create.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

    Values are all or index-setting.

application/json

Body

  • aliases object

    Aliases for the resulting index.

    • * object Additional properties
      • filter object

        Query used to limit documents the alias can access.

        External documentation
      • index_routing string

        Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

      • is_hidden boolean

        If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

        Default value is false.

      • is_write_index boolean

        If true, the index is the write index for the alias.

        Default value is false.

      • routing string

        Value used to route indexing and search operations to a specific shard.

      • search_routing string

        Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

  • settings object

    Configuration options for the target index.

    • * object Additional properties
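
As an illustration of these body parameters, the following sketch attaches a filtered, routed alias to the clone target; the alias name recent_docs, the @timestamp field, and the routing value are all illustrative:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Clone with an alias that exposes only recent documents and routes
# its operations to a single shard.
client.indices.clone(
    index="my_source_index",
    target="my_target_index",
    aliases={
        "recent_docs": {
            "filter": {"range": {"@timestamp": {"gte": "now-7d"}}},
            "routing": "user-1",
        }
    },
)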

Responses

  • 200 application/json
    • acknowledged boolean Required
    • index string Required
    • shards_acknowledged boolean Required
POST /{index}/_clone/{target}
POST /my_source_index/_clone/my_target_index
{
  "settings": {
    "index.number_of_shards": 5
  },
  "aliases": {
    "my_search_indices": {}
  }
}
resp = client.indices.clone(
    index="my_source_index",
    target="my_target_index",
    settings={
        "index.number_of_shards": 5
    },
    aliases={
        "my_search_indices": {}
    },
)
const response = await client.indices.clone({
  index: "my_source_index",
  target: "my_target_index",
  settings: {
    "index.number_of_shards": 5,
  },
  aliases: {
    my_search_indices: {},
  },
});
response = client.indices.clone(
  index: "my_source_index",
  target: "my_target_index",
  body: {
    "settings": {
      "index.number_of_shards": 5
    },
    "aliases": {
      "my_search_indices": {}
    }
  }
)
$resp = $client->indices()->clone([
    "index" => "my_source_index",
    "target" => "my_target_index",
    "body" => [
        "settings" => [
            "index.number_of_shards" => 5,
        ],
        "aliases" => [
            "my_search_indices" => new ArrayObject([]),
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"settings":{"index.number_of_shards":5},"aliases":{"my_search_indices":{}}}' "$ELASTICSEARCH_URL/my_source_index/_clone/my_target_index"
client.indices().clone(c -> c
    .aliases("my_search_indices", a -> a)
    .index("my_source_index")
    .settings("index.number_of_shards", JsonData.fromJson("5"))
    .target("my_target_index")
);
Request example
Clone `my_source_index` into a new index called `my_target_index` with `POST /my_source_index/_clone/my_target_index`. The API accepts `settings` and `aliases` parameters for the target index.
{
  "settings": {
    "index.number_of_shards": 5
  },
  "aliases": {
    "my_search_indices": {}
  }
}