Create data frame analytics jobs API

Instantiates a data frame analytics job.


This functionality is experimental and may be changed or removed completely in a future release. Elastic will take a best effort approach to fix any issues, but experimental features are not subject to the support SLA of official GA features.


PUT _ml/data_frame/analytics/<data_frame_analytics_id>


  • You must have the machine_learning_admin built-in role to use this API. You must also have read and view_index_metadata privileges on the source index and read, create_index, and index privileges on the destination index. For more information, see Security privileges and Built-in roles.


This API creates a data frame analytics job that performs an analysis on the source index and stores the outcome in a destination index.

The destination index will be automatically created if it does not exist. The index.number_of_shards and index.number_of_replicas settings of the source index will be copied to the destination index. When the source index matches multiple indices, these settings will be set to the maximum values found in the source indices.
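The "maximum values" rule above can be sketched as follows. This is an illustrative Python model of the documented behavior, not the server's implementation; the function name and input shape are assumptions for the example.

```python
# Illustrative sketch: when the source pattern matches several indices,
# each copied setting takes the maximum value found across them.
def resolve_dest_settings(source_settings):
    """source_settings: one dict per matched source index, holding its
    index.number_of_shards and index.number_of_replicas values."""
    keys = ("index.number_of_shards", "index.number_of_replicas")
    return {key: max(settings[key] for settings in source_settings)
            for key in keys}

# Two hypothetical source indices with different settings:
print(resolve_dest_settings([
    {"index.number_of_shards": 1, "index.number_of_replicas": 1},
    {"index.number_of_shards": 3, "index.number_of_replicas": 2},
]))
# {'index.number_of_shards': 3, 'index.number_of_replicas': 2}
```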

The API also attempts to copy the mappings of the source indices to the destination index. However, if the mappings of any of the fields don't match among the source indices, the attempt fails with an error message.

If the destination index already exists, then it will be used as is. This makes it possible to set up the destination index in advance with custom settings and mappings.

Path parameters

<data_frame_analytics_id>
(Required, string) Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
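The identifier rules above can be expressed as a regular expression. This is a sketch of the documented constraints, not the server's actual validator:

```python
import re

# Documented rules: lowercase alphanumerics, hyphens, and underscores;
# must start and end with an alphanumeric character.
ID_PATTERN = re.compile(r"^[a-z0-9](?:[a-z0-9_-]*[a-z0-9])?$")

def is_valid_job_id(job_id: str) -> bool:
    """Return True if job_id satisfies the documented ID rules."""
    return ID_PATTERN.fullmatch(job_id) is not None

print(is_valid_job_id("loganalytics"))   # True
print(is_valid_job_id("-loganalytics"))  # False: starts with a hyphen
print(is_valid_job_id("LogAnalytics"))   # False: uppercase characters
```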

Request body

analysis
(Required, object) Defines the type of data frame analytics you want to perform on your source index. For example: outlier_detection. See Analysis objects.

analyzed_fields
(Optional, object) You can specify includes and/or excludes patterns. If analyzed_fields is not set, only the relevant fields will be included. For example, all the numeric fields for outlier detection.

dest
(Required, object) The destination configuration, consisting of index and optionally results_field (ml by default). See data frame analytics properties.

model_memory_limit
(Optional, string) The approximate maximum amount of memory resources that are permitted for analytical processing. The default value for data frame analytics jobs is 1gb. If your elasticsearch.yml file contains a maximum model memory limit setting, an error occurs when you try to create data frame analytics jobs that have model_memory_limit values greater than that setting. For more information, see Machine learning settings.

source
(Required, object) The source configuration, consisting of index and optionally a query. See data frame analytics properties.
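The request body above can be assembled programmatically. The following is a minimal Python sketch; the helper name and index names are illustrative, not part of any client library:

```python
import json

def build_analytics_config(source_index, dest_index,
                           analysis=None, model_memory_limit=None):
    """Assemble a request body matching the properties described above.

    If no analysis object is given, default to outlier_detection with
    default settings (an empty object, as in the example below).
    """
    body = {
        "source": {"index": source_index},
        "dest": {"index": dest_index},
        "analysis": analysis or {"outlier_detection": {}},
    }
    # model_memory_limit is optional; the server defaults it to 1gb.
    if model_memory_limit is not None:
        body["model_memory_limit"] = model_memory_limit
    return body

print(json.dumps(build_analytics_config("logdata", "logdata_out"), indent=2))
```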


The following example creates the loganalytics data frame analytics job; the analysis type is outlier_detection:

PUT _ml/data_frame/analytics/loganalytics
{
  "source": {
    "index": "logdata"
  },
  "dest": {
    "index": "logdata_out"
  },
  "analysis": {
    "outlier_detection": {}
  }
}
The API returns the following result:

{
  "id" : "loganalytics",
  "source" : {
    "index" : [
      "logdata"
    ],
    "query" : {
      "match_all" : { }
    }
  },
  "dest" : {
    "index" : "logdata_out",
    "results_field" : "ml"
  },
  "analysis" : {
    "outlier_detection" : { }
  },
  "model_memory_limit" : "1gb",
  "create_time" : 1562351429434,
  "version" : "7.3.0"
}