Start a data frame analytics job
Generally available; Added in 7.3.0
A data frame analytics job can be started and stopped multiple times
throughout its lifecycle.
If the destination index does not exist, it is created automatically the first
time you start the data frame analytics job. The index.number_of_shards and
index.number_of_replicas settings for the destination index are copied from the
source index. If there are multiple source indices, the destination index
copies the highest setting values. The mappings for the destination index are
also copied from the source indices. If there are any mapping conflicts, the
job fails to start.
If the destination index exists, it is used as is. You can therefore set up
the destination index in advance with custom settings and mappings.
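For example, you can create the destination index before starting the job. The index name, settings, and mapped field below are illustrative placeholders, not values required by the API:
PUT loganalytics-dest
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "response_time": { "type": "float" }
    }
  }
}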
Required authorization
- Index privileges: create_index, index, manage, read, view_index_metadata
- Cluster privileges: manage_ml
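As a sketch, a role that grants these privileges might look like the following; the role name and the source and destination index names are placeholders:
PUT _security/role/dfa_loganalytics
{
  "cluster": [ "manage_ml" ],
  "indices": [
    {
      "names": [ "logdata", "loganalytics-dest" ],
      "privileges": [ "create_index", "index", "manage", "read", "view_index_metadata" ]
    }
  ]
}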
Path parameters
- id (string, Required) Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
Query parameters
- timeout (string) Controls the amount of time to wait until the data frame analytics job starts. Defaults to 20 seconds; values of -1 and 0 are also accepted.
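For example, to wait up to two minutes for the job to start, pass the timeout as a query parameter (the job identifier matches the request example below):
POST _ml/data_frame/analytics/loganalytics/_start?timeout=2m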
POST _ml/data_frame/analytics/loganalytics/_start
curl \
--request POST 'http://api.example.com/_ml/data_frame/analytics/{id}/_start' \
--header "Authorization: $API_KEY"
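On success, the API returns an acknowledgement; the response may also include the ID of the node the job was assigned to. A typical response looks like the following (the node ID is illustrative):
{
  "acknowledged": true,
  "node": "node-1"
}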