This functionality is in beta and is subject to change. The design and code are less mature than official GA features and are provided as-is, with no warranties. Beta features are not subject to the support SLA of official GA features.
Anomaly detection alerts run scheduled checks on an anomaly detection job or a group of jobs to detect anomalies that match certain conditions. If an anomaly meets the conditions, the alert triggers the defined action. For example, you can create an alert that checks an anomaly detection job every fifteen minutes for critical anomalies and notifies you by email. This page helps you configure an anomaly detection alert. To learn more about alerts in the Elastic Stack, refer to Alerting and Actions.
You can create anomaly detection alerts in the anomaly detection job wizard after you start the job, from the job list, or under Stack Management > Alerts and Actions. On the Create alert window, select Anomaly detection alert under the Machine learning section, then give the alert a name and optionally provide tags.
Specify the time interval for the alert to check for detected anomalies. It is recommended to select an interval that is close to the bucket span of the associated job. You can also select a notification option by using the Notify selector. An alert instance remains active as long as anomalies are found for a particular anomaly detection job during the check interval. If no anomaly is found in the next interval, the Recovered action group is invoked and the status of the alert instance changes to OK. For more details, refer to the general alert details.
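The active/recovered life cycle described above can be sketched in a few lines of Python. This is an illustrative model only, not Kibana's implementation; the function name and data shapes are invented for the example:

```python
# Illustrative sketch (not Kibana's code) of the alert instance life
# cycle: the instance stays "active" while a check interval contains
# anomalies, and changes to "OK" on the first interval without any.

def next_status(anomalies_in_interval):
    """Return the alert instance status after one scheduled check."""
    return "active" if anomalies_in_interval else "OK"

# One simulated run over four consecutive check intervals: the first
# two contain anomalies, the last two do not.
intervals = [["a1"], ["a2", "a3"], [], []]
statuses = [next_status(found) for found in intervals]
```

The third interval is the first with no anomalies, so that is where the Recovered action group would fire and the status would flip to OK.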
Select the anomaly detection job or the group of anomaly detection jobs that the alert checks. If you assign additional jobs to the group, the alert automatically checks the new jobs the next time it runs.
You can select the result type of the anomaly detection job that triggers the alert. In particular, you can create alerts based on bucket, record, or influencer results.
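Anomaly detection results carry a `result_type` field whose values correspond to the three options above. The sketch below shows the selection idea with invented sample documents; it is not how Kibana queries the results index:

```python
# Hedged sketch: each result document has a result_type field
# ("bucket", "record", or "influencer"), and the alert only
# evaluates results of the selected type. Sample docs are invented.
results = [
    {"result_type": "bucket", "anomaly_score": 80.1},
    {"result_type": "record", "record_score": 92.3},
    {"result_type": "influencer", "influencer_score": 64.0},
]

def matching_results(docs, selected_type):
    """Keep only the results of the type the alert is configured for."""
    return [d for d in docs if d["result_type"] == selected_type]
```

For example, an alert configured for bucket results would only consider the first document in this sample set.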
For each alert, you can configure the anomaly_score that triggers it. The anomaly_score indicates the significance of a given anomaly compared to previous anomalies. The default severity threshold is 75, which means every anomaly with an anomaly_score of 75 or higher triggers the alert.
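The severity condition reduces to a simple comparison, sketched here for illustration (the function is hypothetical, not part of any Kibana API):

```python
# Illustrative sketch of the severity condition: an anomaly triggers
# the alert when its anomaly_score is at or above the threshold.
DEFAULT_SEVERITY_THRESHOLD = 75  # the default described above

def triggers_alert(anomaly_score, threshold=DEFAULT_SEVERITY_THRESHOLD):
    """True if an anomaly with this score would trigger the alert."""
    return anomaly_score >= threshold
```

Note that the comparison is inclusive: a score of exactly 75 triggers the alert with the default threshold.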
You can select whether you want the alert to include interim results. Interim results are created by the anomaly detection job before a bucket is finalized, and they might disappear after the bucket is fully processed. Include interim results if you want to be notified earlier about a potential anomaly, even if it might be a false positive. If you want to be notified only about anomalies in fully processed buckets, do not include interim results.
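Anomaly detection results flag interim buckets with an `is_interim` field, so the option above amounts to a filter on that flag. This sketch is illustrative only, with invented sample data:

```python
# Hedged sketch of the interim-results option: when interim results
# are excluded, anomalies flagged is_interim are ignored until their
# bucket is finalized.
def relevant_anomalies(anomalies, include_interim):
    """Return the anomalies the alert should evaluate."""
    if include_interim:
        return list(anomalies)
    return [a for a in anomalies if not a.get("is_interim", False)]

sample = [
    {"anomaly_score": 80.0, "is_interim": True},   # bucket not finalized yet
    {"anomaly_score": 90.0, "is_interim": False},  # finalized result
]
```

With interim results excluded, only the finalized anomaly in this sample would be evaluated; including them makes both visible to the alert one bucket earlier.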
You can also test the configured conditions against your existing data and check the sample results by providing a valid interval for your data. The generated preview shows the number of alert instances that would have been created during the relative time range you defined.
As a next step, connect your alert to actions that use supported built-in integrations. Actions are Kibana services or third-party integrations that run when the alert conditions are met.
For example, you can choose Slack as an action type and configure it to send a message to a channel you selected. You can also create an index connector that writes the JSON object you configure to a specific index. It’s also possible to customize the notification messages. A list of variables is available to include in the message, such as job ID, anomaly score, time, or top influencers.
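The variable substitution works like ordinary string templating. Kibana uses its own variable syntax and names, which differ from this sketch; the Python `format()` stand-in below, with invented job ID and values, only illustrates the idea:

```python
# Hedged illustration of a customized notification message. The
# placeholder names and sample values are invented; Kibana's actual
# template syntax and variable names differ.
template = (
    "Job {job_id} found an anomaly with score {anomaly_score:.1f} "
    "at {timestamp}."
)
message = template.format(
    job_id="response_times",          # hypothetical job ID
    anomaly_score=91.2,               # hypothetical score
    timestamp="2021-03-15T10:00:00Z", # hypothetical bucket time
)
```

The resulting message embeds the job ID, score, and time, which is the kind of context the available variables let you include.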
After you save the configuration, the alert appears in the Alerts and Actions list, where you can check its status and see an overview of its configuration.
The name of an alert instance is always the same as the job ID of the anomaly detection job that triggered it. You can mute the notifications for a particular anomaly detection job on the page that lists the individual alert instances; you can open that page from Alerts and Actions by selecting the alert name.