All methods and paths for this operation:
GET /_ml/datafeeds/_preview
POST /_ml/datafeeds/_preview
GET /_ml/datafeeds/{datafeed_id}/_preview
POST /_ml/datafeeds/{datafeed_id}/_preview
This API returns the first "page" of search results from a datafeed. You can preview an existing datafeed or provide configuration details for a datafeed and anomaly detection job in the API. The preview shows the structure of the data that will be passed to the anomaly detection engine. IMPORTANT: When Elasticsearch security features are enabled, the preview uses the credentials of the user that called the API. However, when the datafeed starts it uses the roles of the last user that created or updated the datafeed. To get a preview that accurately reflects the behavior of the datafeed, use the appropriate credentials. You can also use secondary authorization headers to supply the credentials.
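For example, a minimal sketch with the Python client that previews a datafeed and job that do not exist yet by passing their configurations inline (the index name, field names, and connection details are illustrative assumptions, not part of this API's contract):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")

# Preview an ad-hoc datafeed/job pair without creating either resource.
resp = client.ml.preview_datafeed(
    datafeed_config={
        "indices": ["web-logs"],  # hypothetical source index
        "query": {"match_all": {}},
    },
    job_config={
        "analysis_config": {
            "bucket_span": "1h",
            "detectors": [{"function": "count"}],
        },
        "data_description": {"time_field": "timestamp"},  # hypothetical time field
    },
)
print(resp)  # the first "page" of documents as the analysis would receive them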
Requires the read index privilege and the manage_ml cluster privilege.
A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. NOTE: If you use this path parameter, you cannot provide datafeed or anomaly detection job configuration details in the request body.
The start time from which the datafeed preview should begin.
The end time at which the datafeed preview should stop.
The datafeed definition to preview.
If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data.
Datafeeds might be required to search over long time periods, for several months or years. This search is split into time chunks to ensure that the load on Elasticsearch is managed. Chunking configuration controls how the size of these time chunks is calculated and is an advanced configuration option.
If the mode is auto, the chunk size is dynamically calculated; this is the recommended value when the datafeed does not use aggregations. If the mode is manual, chunking is applied according to the specified time_span; use this mode when the datafeed uses aggregations. If the mode is off, no chunking is applied.
Values are auto, manual, or off.
The time span that each search queries. This setting is applicable only when the mode is set to manual.
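As a sketch, a manual chunking configuration for an aggregating datafeed might look like this (the time span is illustrative):

# Each search covers a fixed three-hour window.
chunking_config = {
    "mode": "manual",
    "time_span": "3h",
}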
A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. The default value is the job identifier.
Specifies whether the datafeed checks for missing data and the size of the window. The datafeed can optionally search over indices that have already been read in an effort to determine whether any data has subsequently been added to the index. If missing data is found, it is a good indication that the query_delay option is set too low and the data is being indexed after the datafeed has passed that moment in time. This check runs only on real-time datafeeds.
The window of time that is searched for late data. This window of time ends with the latest finalized bucket. It defaults to null, which causes an appropriate check_window to be calculated when the real-time datafeed runs. In particular, the default check_window span calculation is based on the maximum of 2h or 8 * bucket_span.
Specifies whether the datafeed periodically checks for delayed data.
The interval at which scheduled queries are made while the datafeed runs in real time. The default value is either the bucket span for short bucket spans, or, for longer bucket spans, a sensible fraction of the bucket span. For example: 150s. When frequency is shorter than the bucket span, interim results for the last (partial) bucket are written then eventually overwritten by the full bucket results. If the datafeed uses aggregations, this value must be divisible by the interval of the date histogram aggregation.
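As a hedged sketch (the timestamp field name is illustrative), these datafeed fields show a frequency that is an exact multiple of the date histogram interval:

# 300s (frequency) is divisible by 150s (fixed_interval).
datafeed_config = {
    "frequency": "300s",
    "aggregations": {
        "buckets": {
            "date_histogram": {"field": "timestamp", "fixed_interval": "150s"},
            "aggregations": {
                # Datafeeds with aggregations need a max aggregation on the time field.
                "timestamp": {"max": {"field": "timestamp"}},
            },
        }
    },
}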
An array of index names. Wildcards are supported. If any indices are in remote clusters, the machine learning nodes must have the remote_cluster_client role.
Specifies index expansion options that are used during search.
If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.
Supported values include:
all: Match any data stream or index, including hidden ones.
open: Match open, non-hidden indices. Also matches any non-hidden data stream.
closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
none: Wildcard expressions are not accepted.
If true, missing or closed indices are not included in the response.
Default value is false.
If true, concrete, expanded or aliased indices are ignored when frozen.
Default value is true.
If a real-time datafeed has never seen any data (including during any initial training period), it automatically stops itself and closes its associated job after this many real-time searches that return no documents. In other words, it stops after frequency times max_empty_searches of real-time operation. If not set, a datafeed with no end time that sees no data remains started until it is explicitly stopped.
The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch.
The number of seconds behind real time that data is queried. For example, if data from 10:04 a.m. might not be searchable in Elasticsearch until 10:06 a.m., set this property to 120 seconds. The default value is randomly selected between 60s and 120s. This randomness improves the query performance when there are multiple jobs running on the same node.
Specifies runtime fields for the datafeed search.
For type composite: the sub-fields (fields) that make up the composite runtime field.
For type lookup: the fields to retrieve from the looked-up document (fetch_fields).
A custom format for date type runtime fields.
For type lookup: the field in the queried index whose values are used as the lookup keys (input_field).
For type lookup: the field in the lookup index that is matched against the input values (target_field).
For type lookup: the index from which the looked-up documents are retrieved (target_index).
Painless script executed at query time.
Field type, which can be: boolean, composite, date, double, geo_point, ip, keyword, long, or lookup.
Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.
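A minimal runtime_mappings sketch (assuming a date field named timestamp) that derives an hour-of-day value with a Painless script:

runtime_mappings = {
    "hour_of_day": {
        "type": "long",
        # Painless script evaluated at query time; 'timestamp' is an assumed field.
        "script": {"source": "emit(doc['timestamp'].value.getHour())"},
    }
}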
Specifies scripts that evaluate custom expressions and return script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields.
The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window, which is 10,000 by default.
Default value is 1000.
The configuration details for the anomaly detection job that is associated with the datafeed. If the datafeed_config object does not include a job_id that references an existing anomaly detection job, you must supply this job_config object. If you include both a job_id and a job_config, the latter information is used. You cannot specify a job_config object unless you also supply a datafeed_config object.
Advanced configuration option. Specifies whether this job can open when there is insufficient machine learning node capacity for it to be immediately assigned to a node.
Default value is false.
The analysis configuration, which specifies how to analyze the data. After you create a job, you cannot change the analysis configuration; all the properties are informational.
The size of the interval that the analysis is aggregated into, typically between 5m and 1h. This value should be either a whole number of days or equate to a whole number of buckets in one day. If the anomaly detection job uses a datafeed with aggregations, this value must also be divisible by the interval of the date histogram aggregation.
If categorization_field_name is specified, you can also define the analyzer that is used to interpret the categorization field. This property cannot be used at the same time as categorization_filters. The categorization analyzer specifies how the categorization_field is interpreted by the categorization process. The categorization_analyzer field can be specified either as a string or as an object. If it is a string, it must refer to a built-in analyzer or one added by another plugin.
If this property is specified, the values of the specified field will be categorized. The resulting categories must be used in a detector by setting by_field_name, over_field_name, or partition_field_name to the keyword mlcategory.
If categorization_field_name is specified, you can also define optional filters. This property expects an array of regular expressions. The expressions are used to filter out matching sequences from the categorization field values. You can use this functionality to fine-tune the categorization by excluding sequences from consideration when categories are defined. For example, you can exclude SQL statements that appear in your log files. This property cannot be used at the same time as categorization_analyzer. If you only want to define simple regular expression filters that are applied prior to tokenization, setting this property is the easiest method. If you also want to customize the tokenizer or post-tokenization filtering, use the categorization_analyzer property instead and include the filters as pattern_replace character filters. The effect is exactly the same.
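For example, a hedged analysis_config sketch that categorizes an illustrative message field and filters out embedded SQL fragments before categorization (the field name and regex are assumptions):

analysis_config = {
    "bucket_span": "30m",
    "categorization_field_name": "message",  # illustrative field
    # Strip anything matching this pattern before categories are defined.
    "categorization_filters": ["\\[SQL:.*?\\]"],
    "detectors": [
        {"function": "count", "by_field_name": "mlcategory"}
    ],
}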
Detector configuration objects specify which data fields a job analyzes. They also specify which analytical functions are used. You can specify multiple detectors for a job. If the detectors array does not contain at least one detector, no analysis can occur and an error is returned.
Custom rules enable you to customize the way detectors operate. For example, a rule may dictate conditions under which results should be skipped. Kibana refers to custom rules as job rules.
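As a sketch, a detector with one custom rule that skips results when the actual value falls below a threshold (the field name and threshold are illustrative):

detector = {
    "function": "mean",
    "field_name": "responsetime",  # illustrative metric field
    "custom_rules": [
        {
            "actions": ["skip_result"],  # do not create a result for matching buckets
            "conditions": [
                {"applies_to": "actual", "operator": "lt", "value": 10.0}
            ],
        }
    ],
}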
A description of the detector.
A unique identifier for the detector. This identifier is based on the order of the detectors in the analysis_config, starting at zero. If you specify a value for this property, it is ignored.
The analysis function that is used. For example, count, rare, mean, min, max, or sum.
Defines whether a new series is used as the null series when there is no value for the by or partition fields.
Default value is false.
A comma-separated list of influencer field names. Typically these can be the by, over, or partition fields that are used in the detector configuration. You might also want to use a field name that is not specifically named in a detector, but is available as part of the input data. When you use multiple detectors, the use of influencers is recommended as it aggregates results for each influencer entity.
The size of the window in which to expect data that is out of time order. If you specify a non-zero value, it must be greater than or equal to one second. NOTE: Latency is applicable only when you send data by using the post data API.
Advanced configuration option. Affects the pruning of models that have not been updated for the given time duration. The value must be set to a multiple of the bucket_span. If set too low, important information may be removed from the model. For jobs created in 8.1 and later, the default value is the greater of 30d or 20 times bucket_span.
This functionality is reserved for internal use. It is not supported for use in customer environments and is not subject to the support SLA of official GA features. If set to true, the analysis will automatically find correlations between metrics for a given by field value and report anomalies when those correlations cease to hold. For example, suppose CPU and memory usage on host A is usually highly correlated with the same metrics on host B. Perhaps this correlation occurs because they are running a load-balanced application. If you enable this property, anomalies will be reported when, for example, CPU usage on host A is high and the value of CPU usage on host B is low. That is to say, you’ll see an anomaly when the CPU of host A is unusual given the CPU of host B. To use the multivariate_by_fields property, you must also specify by_field_name in your detector.
Settings related to how categorization interacts with partition fields.
To enable this setting, you must also set the partition_field_name property to the same value in every detector that uses the keyword mlcategory. Otherwise, job creation fails.
This setting can be set to true only if per-partition categorization is enabled. If true, both categorization and subsequent anomaly detection stop for partitions where the categorization status changes to warn. This setting makes it viable to have a job where it is expected that categorization works well for some partitions but not others; you do not pay the cost of bad categorization forever in the partitions where it works badly.
If this property is specified, the data that is fed to the job is expected to be pre-summarized. This property value is the name of the field that contains the count of raw data points that have been summarized. The same summary_count_field_name applies to all detectors in the job. NOTE: The summary_count_field_name property cannot be used with the metric function.
Limits can be applied for the resources required to hold the mathematical models in memory. These limits are approximate and can be set per job. They do not control the memory used by other processes, for example the Elasticsearch Java processes.
The maximum number of examples stored per category in memory and in the results data store. If you increase this value, more examples are available; however, this requires more storage. If you set this value to 0, no examples are stored. NOTE: The categorization_examples_limit applies only to analysis that uses categorization.
Default value is 4.
The approximate maximum amount of memory resources that are required for analytical processing. Once this limit is approached, data pruning becomes more aggressive. Upon exceeding this limit, new entities are not modeled. If the xpack.ml.max_model_memory_limit setting has a value greater than 0 and less than 1024mb, that value is used instead of the default. The default value is relatively small to ensure that high resource usage is a conscious decision. If you have jobs that are expected to analyze high cardinality fields, you will likely need to use a higher value. If you specify a number instead of a string, the units are assumed to be MiB. Specifying a string is recommended for clarity. If you specify a byte size unit of b or kb and the number does not equate to a discrete number of megabytes, it is rounded down to the closest MiB. The minimum valid value is 1 MiB. If you specify a value less than 1 MiB, an error occurs. If you specify a value for the xpack.ml.max_model_memory_limit setting, an error occurs when you try to create jobs that have model_memory_limit values greater than that setting value.
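A small sketch of analysis_limits that uses an explicit string unit, as recommended above (the values are illustrative):

analysis_limits = {
    "model_memory_limit": "512mb",       # string units avoid the implicit-MiB pitfall
    "categorization_examples_limit": 4,  # the default
}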
Advanced configuration option. The time between each periodic persistence of the model. The default value is a randomized value between 3 and 4 hours, which avoids all jobs persisting at exactly the same time. The smallest allowed value is 1 hour.
Advanced configuration option. Contains custom metadata about the job.
Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies a period of time (in days) after which only the first snapshot per day is retained. This period is relative to the timestamp of the most recent snapshot for this job.
Default value is 1.
The data description defines the format of the input data when you send data to the job by using the post data API. Note that when you configure a datafeed, these properties are automatically set.
Only JSON format is supported at this time.
The name of the field that contains the timestamp.
The time format, which can be epoch, epoch_ms, or a custom pattern. The value epoch refers to UNIX or Epoch time (the number of seconds since 1 Jan 1970). The value epoch_ms indicates that time is measured in milliseconds since the epoch. The epoch and epoch_ms time formats accept either integer or real values. Custom patterns must conform to the Java DateTimeFormatter class. When you use date-time formatting patterns, it is recommended that you provide the full date, time, and time zone. For example: yyyy-MM-dd'T'HH:mm:ssX. If the pattern that you specify is not sufficient to produce a complete timestamp, job creation fails.
Default value is epoch.
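For example, a minimal data_description sketch for millisecond epoch timestamps (the field name is illustrative):

data_description = {
    "time_field": "time",       # illustrative timestamp field name
    "time_format": "epoch_ms",  # milliseconds since the UNIX epoch
}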
The datafeed, which retrieves data from Elasticsearch for analysis by the job. You can associate only one datafeed with each anomaly detection job.
If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data.
Datafeeds might be required to search over long time periods, for several months or years. This search is split into time chunks to ensure that the load on Elasticsearch is managed. Chunking configuration controls how the size of these time chunks is calculated and is an advanced configuration option.
A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. The default value is the job identifier.
Specifies whether the datafeed checks for missing data and the size of the window. The datafeed can optionally search over indices that have already been read in an effort to determine whether any data has subsequently been added to the index. If missing data is found, it is a good indication that the query_delay option is set too low and the data is being indexed after the datafeed has passed that moment in time. This check runs only on real-time datafeeds.
The interval at which scheduled queries are made while the datafeed runs in real time. The default value is either the bucket span for short bucket spans, or, for longer bucket spans, a sensible fraction of the bucket span. For example: 150s. When frequency is shorter than the bucket span, interim results for the last (partial) bucket are written then eventually overwritten by the full bucket results. If the datafeed uses aggregations, this value must be divisible by the interval of the date histogram aggregation.
An array of index names. Wildcards are supported. If any indices are in remote clusters, the machine learning nodes must have the remote_cluster_client role.
Specifies index expansion options that are used during search.
If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
If true, missing or closed indices are not included in the response.
Default value is false.
If true, concrete, expanded or aliased indices are ignored when frozen.
Default value is true.
If a real-time datafeed has never seen any data (including during any initial training period), it automatically stops itself and closes its associated job after this many real-time searches that return no documents. In other words, it stops after frequency times max_empty_searches of real-time operation. If not set, a datafeed with no end time that sees no data remains started until it is explicitly stopped.
The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch.
The number of seconds behind real time that data is queried. For example, if data from 10:04 a.m. might not be searchable in Elasticsearch until 10:06 a.m., set this property to 120 seconds. The default value is randomly selected between 60s and 120s. This randomness improves the query performance when there are multiple jobs running on the same node.
Specifies runtime fields for the datafeed search.
Specifies scripts that evaluate custom expressions and return script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields.
The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window, which is 10,000 by default.
Default value is 1000.
A description of the job.
A list of job groups. A job can belong to no groups or many.
Identifier for the anomaly detection job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
This advanced configuration option stores model information along with the results. It provides a more detailed view into anomaly detection. Model plot provides a simplified and indicative view of the model and its bounds.
If true, enables calculation and storage of the model change annotations for each entity that is being analyzed.
Default value is true.
If true, enables calculation and storage of the model bounds for each entity that is being analyzed.
Default value is false.
Limits data collection to this comma-separated list of partition or by field values. If terms are not specified or it is an empty string, no filtering is applied. Wildcards are not supported. Only the specified terms can be viewed when using the Single Metric Viewer.
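As a sketch, a model_plot_config that restricts collection to two illustrative partition values:

model_plot_config = {
    "enabled": True,
    "annotations_enabled": True,
    "terms": "host-a,host-b",  # illustrative partition values
}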
Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies the maximum period of time (in days) that snapshots are retained. This period is relative to the timestamp of the most recent snapshot for this job. The default value is 10, which means snapshots ten days older than the newest snapshot are deleted.
Default value is 10.
Advanced configuration option. The period over which adjustments to the score are applied, as new data is seen. The default value is the longer of 30 days or 100 bucket_spans.
A text string that affects the name of the machine learning results index. The default value is shared, which generates an index named .ml-anomalies-shared.
Advanced configuration option. The period of time (in days) that results are retained. Age is calculated relative to the timestamp of the latest bucket result. If this property has a non-null value, once per day at 00:30 (server time), results that are the specified number of days older than the latest bucket result are deleted from Elasticsearch. The default value is null, which means all results are retained. Annotations generated by the system also count as results for retention purposes; they are deleted after the same number of days as results. Annotations added by users are retained forever.
GET _ml/datafeeds/datafeed-high_sum_total_sales/_preview
resp = client.ml.preview_datafeed(
datafeed_id="datafeed-high_sum_total_sales",
)
const response = await client.ml.previewDatafeed({
datafeed_id: "datafeed-high_sum_total_sales",
});
response = client.ml.preview_datafeed(
datafeed_id: "datafeed-high_sum_total_sales"
)
$resp = $client->ml()->previewDatafeed([
"datafeed_id" => "datafeed-high_sum_total_sales",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/datafeeds/datafeed-high_sum_total_sales/_preview"
client.ml().previewDatafeed(p -> p
.datafeedId("datafeed-high_sum_total_sales")
);