Get data frame analytics jobs
Added in 7.7.0
Get configuration and usage information about data frame analytics jobs.
IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.
Path parameters
-
id
string Required The ID of the data frame analytics to fetch
Query parameters
-
allow_no_match
boolean Whether to ignore if a wildcard expression matches no configs. (This includes _all string or when no configs have been specified.)
-
bytes
string The unit in which to display byte values
Values are b, kb, mb, gb, tb, or pb.
-
h
string | array[string] Comma-separated list of column names to display.
Supported values include:
- assignment_explanation (or ae): Contains messages relating to the selection of a node.
- create_time (or ct, createTime): The time when the data frame analytics job was created.
- description (or d): A description of a job.
- dest_index (or di, destIndex): Name of the destination index.
- failure_reason (or fr, failureReason): Contains messages about the reason why a data frame analytics job failed.
- id: Identifier for the data frame analytics job.
- model_memory_limit (or mml, modelMemoryLimit): The approximate maximum amount of memory resources that are permitted for the data frame analytics job.
- node.address (or na, nodeAddress): The network address of the node that the data frame analytics job is assigned to.
- node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that the data frame analytics job is assigned to.
- node.id (or ni, nodeId): The unique identifier of the node that the data frame analytics job is assigned to.
- node.name (or nn, nodeName): The name of the node that the data frame analytics job is assigned to.
- progress (or p): The progress report of the data frame analytics job by phase.
- source_index (or si, sourceIndex): Name of the source index.
- state (or s): Current state of the data frame analytics job.
- type (or t): The type of analysis that the data frame analytics job performs.
- version (or v): The Elasticsearch version number in which the data frame analytics job was created.
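As a sketch of how the h and s parameters combine, the snippet below builds a request URL for this cat API. The helper name and the localhost base URL are illustrative, not part of the API.

```python
from urllib.parse import urlencode

def cat_analytics_url(base, job_id, columns, sort=None, fmt="json"):
    """Build a _cat/ml/data_frame/analytics URL; h and s take
    comma-separated column names (or their aliases)."""
    params = {"h": ",".join(columns), "format": fmt}
    if sort:
        params["s"] = ",".join(sort)
    return f"{base}/_cat/ml/data_frame/analytics/{job_id}?{urlencode(params)}"

url = cat_analytics_url(
    "http://localhost:9200",
    "classifier_job_1",
    columns=["id", "type", "state", "create_time"],
    sort=["create_time"],
)
```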
-
s
string | array[string] Comma-separated list of column names or column aliases used to sort the response.
Supported values include:
- assignment_explanation (or ae): Contains messages relating to the selection of a node.
- create_time (or ct, createTime): The time when the data frame analytics job was created.
- description (or d): A description of a job.
- dest_index (or di, destIndex): Name of the destination index.
- failure_reason (or fr, failureReason): Contains messages about the reason why a data frame analytics job failed.
- id: Identifier for the data frame analytics job.
- model_memory_limit (or mml, modelMemoryLimit): The approximate maximum amount of memory resources that are permitted for the data frame analytics job.
- node.address (or na, nodeAddress): The network address of the node that the data frame analytics job is assigned to.
- node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that the data frame analytics job is assigned to.
- node.id (or ni, nodeId): The unique identifier of the node that the data frame analytics job is assigned to.
- node.name (or nn, nodeName): The name of the node that the data frame analytics job is assigned to.
- progress (or p): The progress report of the data frame analytics job by phase.
- source_index (or si, sourceIndex): Name of the source index.
- state (or s): Current state of the data frame analytics job.
- type (or t): The type of analysis that the data frame analytics job performs.
- version (or v): The Elasticsearch version number in which the data frame analytics job was created.
-
time
string Unit used to display time values.
Values are nanos, micros, ms, s, m, h, or d.
curl \
--request GET 'http://api.example.com/_cat/ml/data_frame/analytics/{id}' \
--header "Authorization: $API_KEY"
[
{
"id": "classifier_job_1",
"type": "classification",
"create_time": "2020-02-12T11:49:09.594Z",
"state": "stopped"
},
{
"id": "classifier_job_2",
"type": "classification",
"create_time": "2020-02-12T11:49:14.479Z",
"state": "stopped"
},
{
"id": "classifier_job_3",
"type": "classification",
"create_time": "2020-02-12T11:49:16.928Z",
"state": "stopped"
},
{
"id": "classifier_job_4",
"type": "classification",
"create_time": "2020-02-12T11:49:19.127Z",
"state": "stopped"
},
{
"id": "classifier_job_5",
"type": "classification",
"create_time": "2020-02-12T11:49:21.349Z",
"state": "stopped"
}
]
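Because cat APIs also accept format=json, a response like the one above can be consumed programmatically in a pinch, though the stats API remains the recommended route for applications. A minimal sketch of filtering the jobs by state:

```python
import json

# JSON array returned by GET /_cat/ml/data_frame/analytics?format=json,
# shortened to two entries for the sketch.
response_body = '''
[
  {"id": "classifier_job_1", "type": "classification",
   "create_time": "2020-02-12T11:49:09.594Z", "state": "stopped"},
  {"id": "classifier_job_2", "type": "classification",
   "create_time": "2020-02-12T11:49:14.479Z", "state": "started"}
]
'''

jobs = json.loads(response_body)
stopped = [j["id"] for j in jobs if j["state"] == "stopped"]
```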
Get index template information
Added in 5.2.0
Get information about the index templates in a cluster. You can use index templates to apply index settings and field mappings to new indices at creation. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get index template API.
Path parameters
-
name
string Required The name of the template to return. Accepts wildcard expressions. If omitted, all templates are returned.
Query parameters
-
h
string | array[string] List of columns to appear in the response. Supports simple wildcards.
-
s
string | array[string] List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
-
local
boolean If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
-
master_timeout
string Period to wait for a connection to the master node.
curl \
--request GET 'http://api.example.com/_cat/templates/{name}' \
--header "Authorization: $API_KEY"
[
{
"name": "my-template-0",
"index_patterns": "[te*]",
"order": "500",
"version": null,
"composed_of": "[]"
},
{
"name": "my-template-1",
"index_patterns": "[tea*]",
"order": "501",
"version": null,
"composed_of": "[]"
},
{
"name": "my-template-2",
"index_patterns": "[teak*]",
"order": "502",
"version": "7",
"composed_of": "[]"
}
]
Explain the shard allocations
Added in 5.0.0
Get explanations for shard allocations in the cluster. For unassigned shards, it provides an explanation for why the shard is unassigned. For assigned shards, it provides an explanation for why the shard is remaining on its current node and has not moved or rebalanced to another node. This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise.
Query parameters
-
include_disk_info
boolean If true, returns information about disk usage and shard sizes.
-
include_yes_decisions
boolean If true, returns YES decisions in explanation.
-
master_timeout
string Period to wait for a connection to the master node.
Body
-
current_node
string Specifies the node ID or the name of the node to only explain a shard that is currently located on the specified node.
-
index
string Specifies the name of the index that you would like an explanation for.
-
primary
boolean If true, returns explanation for the primary shard for the given shard ID.
-
shard
number Specifies the ID of the shard that you would like an explanation for.
curl \
--request POST 'http://api.example.com/_cluster/allocation/explain' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"index":"my-index-000001","shard":0,"primary":false,"current_node":"my-node"}'
{
"index": "my-index-000001",
"shard": 0,
"primary": false,
"current_node": "my-node"
}
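A small sketch of assembling that request body; the helper is hypothetical, and current_node only applies when explaining a shard that is currently assigned:

```python
import json

def explain_body(index, shard, primary, current_node=None):
    """Assemble the request body for POST /_cluster/allocation/explain.
    current_node is optional and only meaningful for assigned shards."""
    body = {"index": index, "shard": shard, "primary": primary}
    if current_node is not None:
        body["current_node"] = current_node
    return json.dumps(body)

payload = explain_body("my-index-000001", 0, primary=False, current_node="my-node")
```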
{
"index" : "my-index-000001",
"shard" : 0,
"primary" : true,
"current_state" : "unassigned",
"unassigned_info" : {
"reason" : "INDEX_CREATED",
"at" : "2017-01-04T18:08:16.600Z",
"last_allocation_status" : "no"
},
"can_allocate" : "no",
"allocate_explanation" : "Elasticsearch isn't allowed to allocate this shard to any of the nodes in the cluster. Choose a node to which you expect this shard to be allocated, find this node in the node-by-node explanation, and address the reasons which prevent Elasticsearch from allocating this shard there.",
"node_allocation_decisions" : [
{
"node_id" : "8qt2rY-pT6KNZB3-hGfLnw",
"node_name" : "node-0",
"transport_address" : "127.0.0.1:9401",
"roles" : ["data", "data_cold", "data_content", "data_frozen", "data_hot", "data_warm", "ingest", "master", "ml", "remote_cluster_client", "transform"],
"node_attributes" : {},
"node_decision" : "no",
"weight_ranking" : 1,
"deciders" : [
{
"decider" : "filter",
"decision" : "NO",
"explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"nonexistent_node\"]"
}
]
}
]
}
{
"index" : "my-index-000001",
"shard" : 0,
"primary" : true,
"current_state" : "unassigned",
"unassigned_info" : {
"at" : "2017-01-04T18:03:28.464Z",
"details" : "failed shard on node [mEKjwwzLT1yJVb8UxT6anw]: failed recovery, failure RecoveryFailedException",
"reason": "ALLOCATION_FAILED",
"failed_allocation_attempts": 5,
"last_allocation_status": "no"
},
"can_allocate": "no",
"allocate_explanation": "cannot allocate because allocation is not permitted to any of the nodes",
"node_allocation_decisions" : [
{
"node_id" : "3sULLVJrRneSg0EfBB-2Ew",
"node_name" : "node_t0",
"transport_address" : "127.0.0.1:9400",
"roles" : ["data_content", "data_hot"],
"node_decision" : "no",
"store" : {
"matching_size" : "4.2kb",
"matching_size_in_bytes" : 4325
},
"deciders" : [
{
"decider": "max_retry",
"decision" : "NO",
"explanation": "shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [POST /_cluster/reroute?retry_failed] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2024-07-30T21:04:12.166Z], failed_attempts[5], failed_nodes[[mEKjwwzLT1yJVb8UxT6anw]], delayed=false, details[failed shard on node [mEKjwwzLT1yJVb8UxT6anw]: failed recovery, failure RecoveryFailedException], allocation_status[deciders_no]]]"
}
]
}
]
}
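When diagnosing a response like the one above, the useful signal is usually which deciders voted NO on which nodes. A sketch of extracting those pairs (field names are taken from the response shown; the helper itself is hypothetical):

```python
# Trimmed-down allocation-explain response for the sketch.
explain = {
    "can_allocate": "no",
    "node_allocation_decisions": [
        {
            "node_name": "node_t0",
            "node_decision": "no",
            "deciders": [
                {"decider": "max_retry", "decision": "NO",
                 "explanation": "shard has exceeded the maximum number of retries [5] ..."}
            ],
        }
    ],
}

def blocking_deciders(resp):
    """Collect (node_name, decider) pairs for every decider that voted NO."""
    return [
        (node["node_name"], d["decider"])
        for node in resp.get("node_allocation_decisions", [])
        for d in node.get("deciders", [])
        if d["decision"] == "NO"
    ]
```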
Get cluster statistics
Added in 1.3.0
Get basic index metrics (shard numbers, store size, memory usage) and information about the current nodes that form the cluster (number, roles, os, jvm versions, memory usage, cpu and installed plugins).
Path parameters
-
node_id
string | array[string] Required Comma-separated list of node filters used to limit returned information. Defaults to all nodes in the cluster.
Query parameters
-
include_remotes
boolean Include remote cluster data in the response.
-
timeout
string Period to wait for each node to respond. If a node does not respond before its timeout expires, the response does not include its stats. However, timed out nodes are included in the response's _nodes.failed property. Defaults to no timeout.
curl \
--request GET 'http://api.example.com/_cluster/stats/nodes/{node_id}' \
--header "Authorization: $API_KEY"
Get node statistics
Get statistics for nodes in a cluster. By default, all stats are returned. You can limit the returned information by using metrics.
Path parameters
-
node_id
string | array[string] Required Comma-separated list of node IDs or names used to limit returned information.
Query parameters
-
completion_fields
string | array[string] Comma-separated list or wildcard expressions of fields to include in fielddata and suggest statistics.
-
fielddata_fields
string | array[string] Comma-separated list or wildcard expressions of fields to include in fielddata statistics.
-
fields
string | array[string] Comma-separated list or wildcard expressions of fields to include in the statistics.
-
groups
boolean Comma-separated list of search groups to include in the search statistics.
-
include_segment_file_sizes
boolean If true, the call reports the aggregated disk usage of each one of the Lucene index files (only applies if segment stats are requested).
-
level
string Indicates whether statistics are aggregated at the cluster, index, or shard level.
Values are cluster, indices, or shards.
-
timeout
string Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
types
array[string] A comma-separated list of document types for the indexing index metric.
-
include_unloaded_segments
boolean If true, the response includes information from segments that are not loaded into memory.
curl \
--request GET 'http://api.example.com/_nodes/{node_id}/stats' \
--header "Authorization: $API_KEY"
Connector
The connector and sync jobs APIs provide a convenient way to create and manage Elastic connectors and sync jobs in an internal index.
Connectors are Elasticsearch integrations for syncing content from third-party data sources, which can be deployed on Elastic Cloud or hosted on your own infrastructure.
This API provides an alternative to relying solely on Kibana UI for connector and sync job management. The API comes with a set of validations and assertions to ensure that the state representation in the internal index remains valid.
This API requires the manage_connector privilege or, for read-only endpoints, the monitor_connector privilege.
Set the connector sync job stats
Technical preview
Stats include: deleted_document_count, indexed_document_count, indexed_document_volume, and total_document_count.
You can also update last_seen.
This API is mainly used by the connector service for updating sync job information.
To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
Path parameters
-
connector_sync_job_id
string Required The unique identifier of the connector sync job.
Body
Required
-
deleted_document_count
number Required The number of documents the sync job deleted.
-
indexed_document_count
number Required The number of documents the sync job indexed.
-
indexed_document_volume
number Required The total size of the data (in MiB) the sync job indexed.
-
last_seen
string A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
-
metadata
object -
total_document_count
number The total number of documents in the target index after the sync job finished.
curl \
--request PUT 'http://api.example.com/_connector/_sync_job/{connector_sync_job_id}/_stats' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"deleted_document_count":42.0,"indexed_document_count":42.0,"indexed_document_volume":42.0,"last_seen":"string","metadata":{"additionalProperty1":{},"additionalProperty2":{}},"total_document_count":42.0}'
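The three counters are required, while total_document_count and last_seen are optional, so a client can pre-validate the body before issuing the PUT. A hypothetical sketch:

```python
REQUIRED = ("deleted_document_count", "indexed_document_count", "indexed_document_volume")

def sync_job_stats_body(**stats):
    """Build the PUT _connector/_sync_job/<id>/_stats body, checking that
    the three required counters are present and non-negative."""
    missing = [k for k in REQUIRED if k not in stats]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    for key in REQUIRED:
        if stats[key] < 0:
            raise ValueError(f"{key} must be >= 0")
    return stats

body = sync_job_stats_body(
    deleted_document_count=0,
    indexed_document_count=42,
    indexed_document_volume=7,    # MiB
    total_document_count=1000,    # optional
)
```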
Update the connector index name
Beta
Update the index_name field of a connector, specifying the index where the data ingested by the connector is stored.
Path parameters
-
connector_id
string Required The unique identifier of the connector to be updated
Body
Required
index_name
string | null
curl \
--request PUT 'http://api.example.com/_connector/{connector_id}/_index_name' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"index_name":"data-from-my-google-drive"}'
{
"index_name": "data-from-my-google-drive"
}
{
"result": "updated"
}
Update the connector pipeline
Beta
When you create a new connector, the configuration of an ingest pipeline is populated with default settings.
Path parameters
-
connector_id
string Required The unique identifier of the connector to be updated
curl \
--request PUT 'http://api.example.com/_connector/{connector_id}/_pipeline' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"pipeline":{"extract_binary_content":true,"name":"my-connector-pipeline","reduce_whitespace":true,"run_ml_inference":true}}'
{
"pipeline": {
"extract_binary_content": true,
"name": "my-connector-pipeline",
"reduce_whitespace": true,
"run_ml_inference": true
}
}
{
"result": "updated"
}
Update the connector scheduling
Beta
Path parameters
-
connector_id
string Required The unique identifier of the connector to be updated
Body
Required
-
scheduling
object Required
curl \
--request PUT 'http://api.example.com/_connector/{connector_id}/_scheduling' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"scheduling":{"access_control":{"enabled":true,"interval":"0 10 0 * * ?"},"full":{"enabled":true,"interval":"0 20 0 * * ?"},"incremental":{"enabled":false,"interval":"0 30 0 * * ?"}}}'
{
"scheduling": {
"access_control": {
"enabled": true,
"interval": "0 10 0 * * ?"
},
"full": {
"enabled": true,
"interval": "0 20 0 * * ?"
},
"incremental": {
"enabled": false,
"interval": "0 30 0 * * ?"
}
}
}
{
"scheduling": {
"full": {
"enabled": true,
"interval": "0 10 0 * * ?"
}
}
}
{
"result": "updated"
}
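The interval values in the scheduling objects above are Quartz-style cron expressions with a leading seconds field. A sketch that assembles the payload and does a rough shape check on the intervals (the regex is a loose sanity check, not a full cron parser, and the helper is hypothetical):

```python
import re

# Connector scheduling intervals look like six whitespace-separated fields:
# second minute hour day-of-month month day-of-week, e.g. "0 20 0 * * ?".
CRON_FIELD = r"[\d*,/?LW#-]+"
CRON_RE = re.compile(rf"^({CRON_FIELD}\s+){{5}}{CRON_FIELD}$")

def scheduling_body(full=None, incremental=None, access_control=None):
    """Assemble the _scheduling payload, rejecting intervals that do not
    look like six cron fields."""
    body = {}
    for name, cfg in (("full", full), ("incremental", incremental),
                      ("access_control", access_control)):
        if cfg is None:
            continue
        enabled, interval = cfg
        if not CRON_RE.match(interval):
            raise ValueError(f"{name}: bad interval {interval!r}")
        body[name] = {"enabled": enabled, "interval": interval}
    return {"scheduling": body}

payload = scheduling_body(full=(True, "0 20 0 * * ?"),
                          incremental=(False, "0 30 0 * * ?"))
```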
Get follower information
Added in 6.7.0
Get information about all cross-cluster replication follower indices. For example, the results include follower index names, leader index names, replication options, and whether the follower indices are active or paused.
Path parameters
-
index
string | array[string] Required A comma-delimited list of follower index patterns.
Query parameters
-
master_timeout
string The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.
curl \
--request GET 'http://api.example.com/{index}/_ccr/info' \
--header "Authorization: $API_KEY"
{
"follower_indices": [
{
"follower_index": "follower_index",
"remote_cluster": "remote_cluster",
"leader_index": "leader_index",
"status": "active",
"parameters": {
"max_read_request_operation_count": 5120,
"max_read_request_size": "32mb",
"max_outstanding_read_requests": 12,
"max_write_request_operation_count": 5120,
"max_write_request_size": "9223372036854775807b",
"max_outstanding_write_requests": 9,
"max_write_buffer_count": 2147483647,
"max_write_buffer_size": "512mb",
"max_retry_delay": "500ms",
"read_poll_timeout": "1m"
}
}
]
}
{
"follower_indices": [
{
"follower_index": "follower_index",
"remote_cluster": "remote_cluster",
"leader_index": "leader_index",
"status": "paused"
}
]
}
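Note that the parameters object appears only for active followers, as the two responses above show. A small sketch of reading the status per follower index:

```python
def follower_statuses(resp):
    """Map follower index name to status; 'parameters' is only present
    when the follower is active."""
    return {f["follower_index"]: f["status"] for f in resp["follower_indices"]}

active = {
    "follower_indices": [
        {"follower_index": "follower_index", "remote_cluster": "remote_cluster",
         "leader_index": "leader_index", "status": "active",
         "parameters": {"max_read_request_operation_count": 5120}},
    ]
}
paused = {
    "follower_indices": [
        {"follower_index": "follower_index", "remote_cluster": "remote_cluster",
         "leader_index": "leader_index", "status": "paused"},
    ]
}
```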
Delete an enrich policy
Added in 7.5.0
Deletes an existing enrich policy and its enrich index.
Path parameters
-
name
string Required Enrich policy to delete.
Query parameters
-
master_timeout
string Period to wait for a connection to the master node.
curl \
--request DELETE 'http://api.example.com/_enrich/policy/{name}' \
--header "Authorization: $API_KEY"
Get an enrich policy
Added in 7.5.0
Returns information about an enrich policy.
Query parameters
-
master_timeout
string Period to wait for a connection to the master node.
curl \
--request GET 'http://api.example.com/_enrich/policy' \
--header "Authorization: $API_KEY"
Delete component templates
Added in 7.8.0
Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
Path parameters
-
name
string | array[string] Required Comma-separated list or wildcard expression of component template names used to limit the request.
Query parameters
-
master_timeout
string Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
timeout
string Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request DELETE 'http://api.example.com/_component_template/{name}' \
--header "Authorization: $API_KEY"
Get component templates
Added in 7.8.0
Get information about component templates.
Query parameters
-
flat_settings
boolean If true, returns settings in flat format.
-
include_defaults
boolean Return all default configurations for the component template (default: false)
-
local
boolean If true, the request retrieves information from the local node only. If false, information is retrieved from the master node.
-
master_timeout
string Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request GET 'http://api.example.com/_component_template' \
--header "Authorization: $API_KEY"
Get aliases
Retrieves the cluster's index aliases, including filter and routing information. This API does not return data stream aliases.
Query parameters
-
allow_no_indices
boolean If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
-
expand_wildcards
string | array[string] Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
Supported values include:
- all: Match any data stream or index, including hidden ones.
- open: Match open, non-hidden indices. Also matches any non-hidden data stream.
- closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
- hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
- none: Wildcard expressions are not accepted.
-
master_timeout
string Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request GET 'http://api.example.com/_alias' \
--header "Authorization: $API_KEY"
Get aliases
Path parameters
-
index
string | array[string] Required Comma-separated list of data streams or indices used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
Query parameters
-
allow_no_indices
boolean If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
-
expand_wildcards
string | array[string] Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
Supported values include:
- all: Match any data stream or index, including hidden ones.
- open: Match open, non-hidden indices. Also matches any non-hidden data stream.
- closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
- hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
- none: Wildcard expressions are not accepted.
-
master_timeout
string Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request GET 'http://api.example.com/{index}/_alias' \
--header "Authorization: $API_KEY"
Get index settings
Get setting information for one or more indices. For data streams, it returns setting information for the stream's backing indices.
Query parameters
-
allow_no_indices
boolean If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
-
expand_wildcards
string | array[string] Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.
Supported values include:
- all: Match any data stream or index, including hidden ones.
- open: Match open, non-hidden indices. Also matches any non-hidden data stream.
- closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
- hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
- none: Wildcard expressions are not accepted.
-
flat_settings
boolean If true, returns settings in flat format.
-
include_defaults
boolean If true, return all default settings in the response.
-
local
boolean If true, the request retrieves information from the local node only. If false, information is retrieved from the master node.
-
master_timeout
string Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request GET 'http://api.example.com/_settings' \
--header "Authorization: $API_KEY"
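The flat_settings parameter controls whether nested setting objects are returned as dotted keys. A sketch approximating that transformation client-side (the helper is illustrative; Elasticsearch does this server-side when flat_settings=true):

```python
def flatten(settings, prefix=""):
    """Collapse nested setting objects into dotted keys,
    e.g. {'index': {'number_of_replicas': '1'}} -> {'index.number_of_replicas': '1'}."""
    flat = {}
    for key, value in settings.items():
        dotted = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, dotted))
        else:
            flat[dotted] = value
    return flat

nested = {"index": {"number_of_replicas": "1", "refresh_interval": "1s"}}
```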
Get index recovery information
Get information about ongoing and completed shard recoveries for one or more indices. For data streams, the API returns information for the stream's backing indices.
All recoveries, whether ongoing or complete, are kept in the cluster state and may be reported on at any time.
Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or creating a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing.
Recovery automatically occurs during the following processes:
- When creating an index for the first time.
- When a node rejoins the cluster and starts up any missing primary shard copies using the data that it holds in its data path.
- Creation of new replica shard copies from the primary.
- Relocation of a shard copy to a different node in the same cluster.
- A snapshot restore operation.
- A clone, shrink, or split operation.
You can determine the cause of a shard recovery using the recovery or cat recovery APIs.
The index recovery API reports information about completed recoveries only for shard copies that currently exist in the cluster. It only reports the last recovery for each shard copy and does not report historical information about earlier recoveries, nor does it report information about the recoveries of shard copies that no longer exist. This means that if a shard copy completes a recovery and then Elasticsearch relocates it onto a different node then the information about the original recovery will not be shown in the recovery API.
Path parameters
-
index
string | array[string] Required Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
Query parameters
-
active_only
boolean If true, the response only includes ongoing shard recoveries.
-
detailed
boolean If true, the response includes detailed information about shard recoveries.
curl \
--request GET 'http://api.example.com/{index}/_recovery' \
--header "Authorization: $API_KEY"
{
"index1" : {
"shards" : [ {
"id" : 0,
"type" : "SNAPSHOT",
"stage" : "INDEX",
"primary" : true,
"start_time" : "2014-02-24T12:15:59.716",
"start_time_in_millis": 1393244159716,
"stop_time" : "0s",
"stop_time_in_millis" : 0,
"total_time" : "2.9m",
"total_time_in_millis" : 175576,
"source" : {
"repository" : "my_repository",
"snapshot" : "my_snapshot",
"index" : "index1",
"version" : "{version}",
"restoreUUID": "PDh1ZAOaRbiGIVtCvZOMww"
},
"target" : {
"id" : "ryqJ5lO5S4-lSFbGntkEkg",
"host" : "my.fqdn",
"transport_address" : "my.fqdn",
"ip" : "10.0.1.7",
"name" : "my_es_node"
},
"index" : {
"size" : {
"total" : "75.4mb",
"total_in_bytes" : 79063092,
"reused" : "0b",
"reused_in_bytes" : 0,
"recovered" : "65.7mb",
"recovered_in_bytes" : 68891939,
"recovered_from_snapshot" : "0b",
"recovered_from_snapshot_in_bytes" : 0,
"percent" : "87.1%"
},
"files" : {
"total" : 73,
"reused" : 0,
"recovered" : 69,
"percent" : "94.5%"
},
"total_time" : "0s",
"total_time_in_millis" : 0,
"source_throttle_time" : "0s",
"source_throttle_time_in_millis" : 0,
"target_throttle_time" : "0s",
"target_throttle_time_in_millis" : 0
},
"translog" : {
"recovered" : 0,
"total" : 0,
"percent" : "100.0%",
"total_on_start" : 0,
"total_time" : "0s",
"total_time_in_millis" : 0
},
"verify_index" : {
"check_index_time" : "0s",
"check_index_time_in_millis" : 0,
"total_time" : "0s",
"total_time_in_millis" : 0
}
} ]
}
}
{
"index1" : {
"shards" : [ {
"id" : 0,
"type" : "EXISTING_STORE",
"stage" : "DONE",
"primary" : true,
"start_time" : "2014-02-24T12:38:06.349",
"start_time_in_millis" : 1393245486349,
"stop_time" : "2014-02-24T12:38:08.464",
"stop_time_in_millis" : 1393245488464,
"total_time" : "2.1s",
"total_time_in_millis" : 2115,
"source" : {
"id" : "RGMdRc-yQWWKIBM4DGvwqQ",
"host" : "my.fqdn",
"transport_address" : "my.fqdn",
"ip" : "10.0.1.7",
"name" : "my_es_node"
},
"target" : {
"id" : "RGMdRc-yQWWKIBM4DGvwqQ",
"host" : "my.fqdn",
"transport_address" : "my.fqdn",
"ip" : "10.0.1.7",
"name" : "my_es_node"
},
"index" : {
"size" : {
"total" : "24.7mb",
"total_in_bytes" : 26001617,
"reused" : "24.7mb",
"reused_in_bytes" : 26001617,
"recovered" : "0b",
"recovered_in_bytes" : 0,
"recovered_from_snapshot" : "0b",
"recovered_from_snapshot_in_bytes" : 0,
"percent" : "100.0%"
},
"files" : {
"total" : 26,
"reused" : 26,
"recovered" : 0,
"percent" : "100.0%",
"details" : [ {
"name" : "segments.gen",
"length" : 20,
"recovered" : 20
}, {
"name" : "_0.cfs",
"length" : 135306,
"recovered" : 135306,
"recovered_from_snapshot": 0
}, {
"name" : "segments_2",
"length" : 251,
"recovered" : 251,
"recovered_from_snapshot": 0
}
]
},
"total_time" : "2ms",
"total_time_in_millis" : 2,
"source_throttle_time" : "0s",
"source_throttle_time_in_millis" : 0,
"target_throttle_time" : "0s",
"target_throttle_time_in_millis" : 0
},
"translog" : {
"recovered" : 71,
"total" : 0,
"percent" : "100.0%",
"total_on_start" : 0,
"total_time" : "2.0s",
"total_time_in_millis" : 2025
},
"verify_index" : {
"check_index_time" : "0s",
"check_index_time_in_millis" : 0,
"total_time" : "88ms",
"total_time_in_millis" : 88
}
} ]
}
}
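The percent values in the index.size section can be recomputed from the byte counters, which is handy when aggregating recovery progress across shards. A sketch using the figures from the first example response (the helper name is hypothetical):

```python
def bytes_percent(index_size):
    """Recompute the 'percent' figure from the byte counters in a recovery
    response's index.size section."""
    total = index_size["total_in_bytes"]
    recovered = index_size["recovered_in_bytes"]
    return f"{100.0 * recovered / total:.1f}%" if total else "100.0%"

# Byte counters from the snapshot-recovery example above.
size = {"total_in_bytes": 79063092, "recovered_in_bytes": 68891939}
```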
Refresh an index
A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.
By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds.
You can change this default interval with the index.refresh_interval setting.
Refresh requests are synchronous and do not return a response until the refresh operation completes.
Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.
If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's refresh=wait_for query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.
Query parameters
-
allow_no_indices
boolean If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
-
expand_wildcards
string | array[string] Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
Supported values include:
- all: Match any data stream or index, including hidden ones.
- open: Match open, non-hidden indices. Also matches any non-hidden data stream.
- closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
- hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
- none: Wildcard expressions are not accepted.
curl \
--request GET 'http://api.example.com/_refresh' \
--header "Authorization: $API_KEY"
Get index segments
Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the stream's backing indices.
Path parameters
-
index
string | array[string] Required Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (
*
). To target all data streams and indices, omit this parameter or use*
or_all
.
Query parameters
-
allow_no_indices
boolean If
false
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
string | array[string] Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as
open,hidden
. Valid values are:all
,open
,closed
,hidden
,none
.Supported values include:
all
: Match any data stream or index, including hidden ones.open
: Match open, non-hidden indices. Also matches any non-hidden data stream.closed
: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.hidden
: Match hidden data streams and hidden indices. Must be combined withopen
,closed
, orboth
.none
: Wildcard expressions are not accepted.
curl \
--request GET 'http://api.example.com/{index}/_segments' \
--header "Authorization: $API_KEY"
{
"acknowledged": true,
"shards_acknowledged": true,
"old_index": ".ds-my-data-stream-2099.05.06-000001",
"new_index": ".ds-my-data-stream-2099.05.07-000002",
"rolled_over": true,
"dry_run": false,
"lazy": false,
"conditions": {
"[max_age: 7d]": false,
"[max_docs: 1000]": true,
"[max_primary_shard_size: 50gb]": false,
"[max_primary_shard_docs: 2000]": false
}
}
Get index shard stores
Get store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream's backing indices.
The index shard stores API returns the following information:
- The node on which each replica shard exists.
- The allocation ID for each replica shard.
- A unique ID for each replica shard.
- Any errors encountered while opening the shard index or from an earlier failure.
By default, the API returns store information only for primary shards that are unassigned or have one or more unassigned replica shards.
Path parameters
-
index
string | array[string] Required List of data streams, indices, and aliases used to limit the request.
Query parameters
-
allow_no_indices
boolean If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
-
expand_wildcards
string | array[string] Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams.
Supported values include:
all
: Match any data stream or index, including hidden ones.open
: Match open, non-hidden indices. Also matches any non-hidden data stream.closed
: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.hidden
: Match hidden data streams and hidden indices. Must be combined withopen
,closed
, orboth
.none
: Wildcard expressions are not accepted.
-
status
string | array[string] List of shard health statuses used to limit the request.
Supported values include:
green
: The primary shard and all replica shards are assigned.yellow
: One or more replica shards are unassigned.red
: The primary shard is unassigned.all
: Return all shards, regardless of health status.
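For example, to override the default behavior and return store information for every shard regardless of health (a sketch; the index name is illustrative):

```shell
# Illustrative: return stores for all shards, not only problematic ones.
curl \
  --request GET 'http://api.example.com/my-index-000001/_shard_stores?status=all' \
  --header "Authorization: $API_KEY"
```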
curl \
--request GET 'http://api.example.com/{index}/_shard_stores' \
--header "Authorization: $API_KEY"
{
"indices": {
"my-index-000001": {
"shards": {
"0": {
"stores": [
{
"sPa3OgxLSYGvQ4oPs-Tajw": {
"name": "node_t0",
"ephemeral_id": "9NlXRFGCT1m8tkvYCMK-8A",
"transport_address": "local[1]",
"external_id": "node_t0",
"attributes": {},
"roles": [],
"version": "8.10.0",
"min_index_version": 7000099,
"max_index_version": 8100099
},
"allocation_id": "2iNySv_OQVePRX-yaRH_lQ",
"allocation": "primary",
"store_exception": {}
}
]
}
}
}
}
}
Path parameters
-
index
string | array[string] Required Comma-separated list of data streams, indices, and aliases to search. Supports wildcards (
*
). To search all data streams or indices, omit this parameter or use*
or_all
.
Query parameters
-
allow_no_indices
boolean If
false
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
all_shards
boolean If
true
, the validation is executed on all shards instead of one random shard per index. -
analyzer
string Analyzer to use for the query string. This parameter can only be used when the
q
query string parameter is specified. -
analyze_wildcard
boolean If
true
, wildcard and prefix queries are analyzed. -
default_operator
string The default operator for query string query:
AND
orOR
.Values are
and
,AND
,or
, orOR
. -
df
string Field to use as default where no field prefix is given in the query string. This parameter can only be used when the
q
query string parameter is specified. -
expand_wildcards
string | array[string] Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as
open,hidden
. Valid values are:all
,open
,closed
,hidden
,none
.Supported values include:
all
: Match any data stream or index, including hidden ones.open
: Match open, non-hidden indices. Also matches any non-hidden data stream.closed
: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.hidden
: Match hidden data streams and hidden indices. Must be combined withopen
,closed
, orboth
.none
: Wildcard expressions are not accepted.
-
explain
boolean If
true
, the response returns detailed information if an error has occurred. -
lenient
boolean If
true
, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. -
rewrite
boolean If
true
, returns a more detailed explanation showing the actual Lucene query that will be executed. -
q
string Query in the Lucene query string syntax.
Body
-
query
object An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.
External documentation
curl \
--request GET 'http://api.example.com/{index}/_validate/query' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"query":{}}'
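To see how a query string would be rewritten into an actual Lucene query, the `rewrite` and `q` parameters can be combined (a sketch; the index name and query are illustrative):

```shell
# Illustrative: validate a Lucene query string and show its rewritten form.
curl \
  --request GET 'http://api.example.com/my-index-000001/_validate/query?q=user.id:kimchy&rewrite=true' \
  --header "Authorization: $API_KEY"
```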
Explain the lifecycle state
Added in 6.6.0
Get the current lifecycle status for one or more indices. For data streams, the API retrieves the current lifecycle status for the stream's backing indices.
The response indicates when the index entered each lifecycle state, provides the definition of the running phase, and information about any failures.
Path parameters
-
index
string Required Comma-separated list of data streams, indices, and aliases to target. Supports wildcards (
*
). To target all data streams and indices, use*
or_all
.
Query parameters
-
only_errors
boolean Filters the returned indices to only indices that are managed by ILM and are in an error state, either due to encountering an error while executing the policy or attempting to use a policy that does not exist.
-
only_managed
boolean Filters the returned indices to only indices that are managed by ILM.
-
master_timeout
string Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request GET 'http://api.example.com/{index}/_ilm/explain' \
--header "Authorization: $API_KEY"
{
"indices": {
"my-index-000001": {
"index": "my-index-000001",
"index_creation_date_millis": 1538475653281,
"index_creation_date": "2018-10-15T13:45:21.981Z",
"time_since_index_creation": "15s",
"managed": true,
"policy": "my_policy",
"lifecycle_date_millis": 1538475653281,
"lifecycle_date": "2018-10-15T13:45:21.981Z",
"age": "15s",
"phase": "new",
"phase_time_millis": 1538475653317,
"phase_time": "2018-10-15T13:45:22.577Z",
"action": "complete",
"action_time_millis": 1538475653317,
"action_time": "2018-10-15T13:45:22.577Z",
"step": "complete",
"step_time_millis": 1538475653317,
"step_time": "2018-10-15T13:45:22.577Z"
}
}
}
Move to a lifecycle step
Added in 6.6.0
Manually move an index into a specific step in the lifecycle policy and run that step.
WARNING: This operation can result in the loss of data. Manually moving an index into a specific step runs that step even if it has already been performed. This is a potentially destructive action and it should be considered an expert-level API.
You must specify both the current step and the step to be executed in the body of the request. The request will fail if the current step does not match the step currently running for the index. This is to prevent the index from being moved from an unexpected step into the next step.
When specifying the target (next_step
) to which the index will be moved, the name field, or both the action and name fields, are optional.
If only the phase is specified, the index will move to the first step of the first action in the target phase.
If the phase and action are specified, the index will move to the first step of the specified action in the specified phase.
Only actions specified in the ILM policy are considered valid.
An index cannot move to a step that is not part of its policy.
Path parameters
-
index
string Required The name of the index whose lifecycle step is to change
Body
-
current_step
object Required -
next_step
object Required
curl \
--request POST 'http://api.example.com/_ilm/move/{index}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"current_step\": {\n \"phase\": \"new\",\n \"action\": \"complete\",\n \"name\": \"complete\"\n },\n \"next_step\": {\n \"phase\": \"warm\",\n \"action\": \"forcemerge\",\n \"name\": \"forcemerge\"\n }\n}"'
{
"current_step": {
"phase": "new",
"action": "complete",
"name": "complete"
},
"next_step": {
"phase": "warm",
"action": "forcemerge",
"name": "forcemerge"
}
}
{
"current_step": {
"phase": "hot",
"action": "complete",
"name": "complete"
},
"next_step": {
"phase": "warm"
}
}
{
"acknowledged": true
}
Inference
Inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs for these models or if you want to use non-NLP models, use the machine learning trained model APIs.
Create an OpenAI inference endpoint
Added in 8.12.0
Create an inference endpoint to perform an inference task with the openai
service.
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
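The deployment check described above can be sketched with the get trained model statistics API (the model ID is illustrative); inspect the response for `"state": "fully_allocated"` and matching allocation counts:

```shell
# Illustrative: check the deployment status of the model backing an endpoint.
# "my-model" is a placeholder for the deployed model's ID.
curl \
  --request GET 'http://api.example.com/_ml/trained_models/my-model/_stats' \
  --header "Authorization: $API_KEY"
```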
Path parameters
-
task_type
string Required The type of the inference task that the model will perform. NOTE: The
chat_completion
task type only supports streaming and only through the _stream API.Values are
chat_completion
,completion
, ortext_embedding
. -
openai_inference_id
string Required The unique identifier of the inference endpoint.
Body
-
chunking_settings
object -
service
string Required Value is
openai
. -
service_settings
object Required -
task_settings
object
curl \
--request PUT 'http://api.example.com/_inference/{task_type}/{openai_inference_id}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"service\": \"openai\",\n \"service_settings\": {\n \"api_key\": \"OpenAI-API-Key\",\n \"model_id\": \"text-embedding-3-small\",\n \"dimensions\": 128\n }\n}"'
{
"service": "openai",
"service_settings": {
"api_key": "OpenAI-API-Key",
"model_id": "text-embedding-3-small",
"dimensions": 128
}
}
{
"service": "amazonbedrock",
"service_settings": {
"access_key": "AWS-access-key",
"secret_key": "AWS-secret-key",
"region": "us-east-1",
"provider": "amazontitan",
"model": "amazon.titan-text-premier-v1:0"
}
}
Delete a Logstash pipeline
Added in 7.12.0
Delete a pipeline that is used for Logstash Central Management. If the request succeeds, you receive an empty response with an appropriate status code.
Path parameters
-
id
string Required An identifier for the pipeline.
curl \
--request DELETE 'http://api.example.com/_logstash/pipeline/{id}' \
--header "Authorization: $API_KEY"
Get machine learning memory usage info
Added in 8.2.0
Get information about how machine learning jobs and trained models are using memory on each node, both within the JVM heap and natively, outside of the JVM.
Query parameters
-
master_timeout
string Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
timeout
string Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request GET 'http://api.example.com/_ml/memory/_stats' \
--header "Authorization: $API_KEY"
Create a filter
Added in 5.4.0
A filter contains a list of strings. It can be used by one or more anomaly detection jobs.
Specifically, filters are referenced in the custom_rules
property of detector configuration objects.
Path parameters
-
filter_id
string Required A string that uniquely identifies a filter.
Body
Required
-
description
string A description of the filter.
-
items
array[string] The items of the filter. A wildcard
*
can be used at the beginning or the end of an item. Up to 10000 items are allowed in each filter.
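For example, a wildcard at the start of an item might be used like this (a sketch; the filter ID, description, and items are illustrative):

```shell
# Illustrative: create a filter of safe domains, with a leading wildcard
# matching any subdomain of elastic.co.
curl \
  --request PUT 'http://api.example.com/_ml/filters/safe_domains' \
  --header "Authorization: $API_KEY" \
  --header "Content-Type: application/json" \
  --data '{"description":"A list of safe domains","items":["*.elastic.co","wikipedia.org"]}'
```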
curl \
--request PUT 'http://api.example.com/_ml/filters/{filter_id}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"description":"string","items":["string"]}'
Delete an anomaly detection job
Added in 5.4.0
All job configuration, model state, and results are deleted. It is not currently possible to delete multiple jobs using wildcards or a comma-separated list. If you delete a job that has a datafeed, the request first tries to delete the datafeed. This behavior is equivalent to calling the delete datafeed API with the same timeout and force parameters as the delete job request.
Path parameters
-
job_id
string Required Identifier for the anomaly detection job.
Query parameters
-
force
boolean Use to forcefully delete an opened job; this method is quicker than closing and deleting the job.
-
delete_user_annotations
boolean Specifies whether annotations that have been added by the user should be deleted along with any auto-generated annotations when the job is reset.
-
wait_for_completion
boolean Specifies whether the request should return immediately or wait until the job deletion completes.
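For example, these parameters can be combined to force-delete a large job asynchronously (a sketch; the job ID is illustrative); the response then contains a task identifier rather than an acknowledgement:

```shell
# Illustrative: force-delete an open job and return immediately,
# without waiting for the deletion to complete.
curl \
  --request DELETE 'http://api.example.com/_ml/anomaly_detectors/my-job?force=true&wait_for_completion=false' \
  --header "Authorization: $API_KEY"
```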
curl \
--request DELETE 'http://api.example.com/_ml/anomaly_detectors/{job_id}' \
--header "Authorization: $API_KEY"
{
"acknowledged": true
}
{
"task": "oTUltX4IQMOUUVeiohTt8A:39"
}
Get datafeed stats
Added in 5.5.0
You can get statistics for multiple datafeeds in a single API request by
using a comma-separated list of datafeeds or a wildcard expression. You can
get statistics for all datafeeds by using _all
, by specifying *
as the
<feed_id>
, or by omitting the <feed_id>
. If the datafeed is stopped, the
only information you receive is the datafeed_id
and the state
.
This API returns a maximum of 10,000 datafeeds.
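The targeting options described above can be sketched as follows (the wildcard pattern is illustrative):

```shell
# Illustrative: stats for all datafeeds, via the _all identifier...
curl \
  --request GET 'http://api.example.com/_ml/datafeeds/_all/_stats' \
  --header "Authorization: $API_KEY"

# ...and stats for only the datafeeds matching a wildcard expression.
curl \
  --request GET 'http://api.example.com/_ml/datafeeds/datafeed-test*/_stats' \
  --header "Authorization: $API_KEY"
```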
Query parameters
-
allow_no_match
boolean Specifies what to do when the request:
- Contains wildcard expressions and there are no datafeeds that match.
- Contains the
_all
string or no identifiers and there are no matches. - Contains wildcard expressions and there are only partial matches.
The default value is
true
, which returns an emptydatafeeds
array when there are no matches and the subset of results when there are partial matches. If this parameter isfalse
, the request returns a404
status code when there are no matches or only partial matches.
curl \
--request GET 'http://api.example.com/_ml/datafeeds/_stats' \
--header "Authorization: $API_KEY"
Get model snapshots info
Added in 5.4.0
Path parameters
-
job_id
string Required Identifier for the anomaly detection job.
Query parameters
-
desc
boolean If true, the results are sorted in descending order.
-
end
string | number Returns snapshots with timestamps earlier than this time.
-
from
number Skips the specified number of snapshots.
-
size
number Specifies the maximum number of snapshots to obtain.
-
sort
string Specifies the sort field for the requested snapshots. By default, the snapshots are sorted by their timestamp.
-
start
string | number Returns snapshots with timestamps after this time.
Body
-
desc
boolean Refer to the description for the
desc
query parameter. end
string | number A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.
-
page
object -
sort
string Path to a field or an array of paths. Some APIs support wildcards in the path to select multiple fields.
start
string | number A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.
curl \
--request GET 'http://api.example.com/_ml/anomaly_detectors/{job_id}/model_snapshots' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"desc":true,"end":"string","page":{"from":42.0,"size":42.0},"sort":"string"}'
Explain data frame analytics config
Added in 7.3.0
This API provides explanations for a data frame analytics config that either already exists or has not been created yet. The following explanations are provided:
- which fields are included or not in the analysis and why,
- how much memory is estimated to be required. The estimate can be used when deciding the appropriate value for the model_memory_limit setting later on. If you have object fields or fields that are excluded via source filtering, they are not included in the explanation.
Path parameters
-
id
string Required Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
Body
-
source
object -
dest
object -
analysis
object -
description
string A description of the job.
-
model_memory_limit
string The approximate maximum amount of memory resources that are permitted for analytical processing. If your
elasticsearch.yml
file contains anxpack.ml.max_model_memory_limit
setting, an error occurs when you try to create data frame analytics jobs that havemodel_memory_limit
values greater than that setting. -
max_num_threads
number The maximum number of threads to be used by the analysis. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself.
-
analyzed_fields
object -
allow_lazy_start
boolean Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node.
curl \
--request GET 'http://api.example.com/_ml/data_frame/analytics/{id}/_explain' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"source\": {\n \"index\": \"houses_sold_last_10_yrs\"\n },\n \"analysis\": {\n \"regression\": {\n \"dependent_variable\": \"price\"\n }\n }\n}"'
{
"source": {
"index": "houses_sold_last_10_yrs"
},
"analysis": {
"regression": {
"dependent_variable": "price"
}
}
}
{
"field_selection": [
{
"field": "number_of_bedrooms",
"mappings_types": [
"integer"
],
"is_included": true,
"is_required": false,
"feature_type": "numerical"
},
{
"field": "postcode",
"mappings_types": [
"text"
],
"is_included": false,
"is_required": false,
"reason": "[postcode.keyword] is preferred because it is aggregatable"
},
{
"field": "postcode.keyword",
"mappings_types": [
"keyword"
],
"is_included": true,
"is_required": false,
"feature_type": "categorical"
},
{
"field": "price",
"mappings_types": [
"float"
],
"is_included": true,
"is_required": true,
"feature_type": "numerical"
}
],
"memory_estimation": {
"expected_memory_without_disk": "128MB",
"expected_memory_with_disk": "32MB"
}
}
Get a query ruleset
Added in 8.10.0
Get details about a query ruleset.
Path parameters
-
ruleset_id
string Required The unique identifier of the query ruleset
curl \
--request GET 'http://api.example.com/_query_rules/{ruleset_id}' \
--header "Authorization: $API_KEY"
{
"ruleset_id": "my-ruleset",
"rules": [
{
"rule_id": "my-rule1",
"type": "pinned",
"criteria": [
{
"type": "contains",
"metadata": "query_string",
"values": [ "pugs", "puggles" ]
}
],
"actions": {
"ids": [
"id1",
"id2"
]
}
},
{
"rule_id": "my-rule2",
"type": "pinned",
"criteria": [
{
"type": "fuzzy",
"metadata": "query_string",
"values": [ "rescue dogs" ]
}
],
"actions": {
"docs": [
{
"_index": "index1",
"_id": "id3"
},
{
"_index": "index2",
"_id": "id4"
}
]
}
}
]
}