Monitor and troubleshoot rule executions


To view a summary of all rule executions, such as failures and last execution times, select the Rule Monitoring tab on the Rules page (Detect → Rules → Rule Monitoring).

[Image: Rule Monitoring table]

For detailed information on a rule, its generated alerts, and its errors, click on a rule name in the All rules table.

Troubleshoot missing alerts

When a rule fails to run close to its scheduled time, some alerts may be missing. There are several ways to resolve this issue:

You can also use Task Manager in Kibana to troubleshoot background tasks and processes that may be related to missing alerts:

Troubleshoot gaps

If you see values in the Gaps column in the All rules table or on the Rule details page for a small number of rules, you can increase those rules' Additional look-back time (Detect → Rules → the rule’s All actions button (…​) → Edit rule settings → Schedule → Additional look-back time).

It’s recommended to set the Additional look-back time to at least 1 minute. This ensures there are no missing alerts when a rule doesn’t run exactly at its scheduled time.

Elastic Security prevents duplication. Any duplicate alerts that are discovered during the Additional look-back time are not created.
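The effect of the additional look-back time can be sketched with simple date arithmetic. This is a minimal illustration, not how the detection engine is implemented, and the schedule values (a 5-minute interval, a 1-minute look-back, a 30-second delay) are assumptions chosen for the example:

```python
from datetime import datetime, timedelta

def query_window(run_at: datetime, interval: timedelta,
                 look_back: timedelta) -> tuple[datetime, datetime]:
    """Approximate the time range one rule run queries: the rule
    interval plus the additional look-back time, ending at run time."""
    return (run_at - interval - look_back, run_at)

interval = timedelta(minutes=5)   # rule runs every 5 minutes
look_back = timedelta(minutes=1)  # additional look-back time

# The run scheduled for 10:05 is delayed by 30 seconds.
delayed_run = datetime(2023, 1, 1, 10, 5, 30)
start, end = query_window(delayed_run, interval, look_back)

# The previous run at 10:00 covered events up to 10:00:00. Because the
# 1-minute look-back exceeds the 30-second delay, the delayed run's
# window (starting 09:59:30) still overlaps it, so no events are missed.
previous_run_end = datetime(2023, 1, 1, 10, 0, 0)
print(start <= previous_run_end)  # True: no gap
```

Because overlapping windows can re-query the same events, the deduplication described above is what keeps the extra look-back from producing duplicate alerts.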

If a rule that experiences gaps is an indicator match rule, see how to tune indicator match rules. Note that Elastic Security provides limited support for indicator match rules.

If you see gaps for numerous rules:

  • If you restarted Kibana when many rules were activated, try deactivating them and then reactivating them in small batches at staggered intervals. This ensures Kibana does not attempt to run all the rules at the same time.
  • Consider adding another Kibana instance to your environment.

Troubleshoot ingestion pipeline delay

Even if your rule runs at its scheduled time, alerts can still be missing if your ingestion pipeline delay is greater than your rule interval + additional look-back time. Prebuilt rules have a minimum interval + additional look-back time of 6 minutes in Elastic Stack version >=7.11.0. To avoid missed alerts for prebuilt rules, ensure that ingestion pipeline delays remain below 6 minutes.

In addition, use caution when creating custom rule schedules to ensure that the specified interval + additional look-back time is greater than your deployment’s ingestion pipeline delay.
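The comparison described above can be sketched as a simple check. This is an illustration of the rule of thumb, not detection-engine code; the 5-minute interval plus 1-minute look-back used to represent the prebuilt 6-minute minimum is an assumed split:

```python
from datetime import timedelta

def may_miss_alerts(ingest_delay: timedelta, interval: timedelta,
                    look_back: timedelta) -> bool:
    """An event can be missed when it is ingested only after the rule
    run whose query window should have covered it has already finished."""
    return ingest_delay > interval + look_back

# Assumed split of the prebuilt 6-minute minimum: 5m interval + 1m look-back.
prebuilt = (timedelta(minutes=5), timedelta(minutes=1))

print(may_miss_alerts(timedelta(minutes=4), *prebuilt))   # False: delay within window
print(may_miss_alerts(timedelta(minutes=10), *prebuilt))  # True: alerts may be missed
```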

You can reduce the number of missed alerts due to ingestion pipeline delay by setting the Timestamp override field to event.ingested in advanced settings during rule creation or editing. The detection engine then uses the value of the event.ingested field as the timestamp when executing the rule.

For example, say an event occurred at 10:00 but wasn’t ingested into Elasticsearch until 10:10 due to an ingestion pipeline delay. If you created a rule to detect that event with an interval + additional look-back time of 6 minutes, and the rule executes at 10:12, it would still detect the event because the event.ingested timestamp was from 10:10, only 2 minutes before the rule executed and well within the rule’s 6-minute interval + additional look-back time.
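The worked example above can be checked with a few lines of date arithmetic (the times are taken from the paragraph; the helper function is illustrative, not the engine's actual query logic):

```python
from datetime import datetime, timedelta

def detected(run_at: datetime, window: timedelta, event_time: datetime) -> bool:
    """True when the event's timestamp falls inside the rule's query window."""
    return run_at - window <= event_time <= run_at

window = timedelta(minutes=6)            # interval + additional look-back
run_at = datetime(2023, 1, 1, 10, 12)    # rule executes at 10:12
occurred = datetime(2023, 1, 1, 10, 0)   # @timestamp: event occurred at 10:00
ingested = datetime(2023, 1, 1, 10, 10)  # event.ingested: indexed at 10:10

print(detected(run_at, window, occurred))  # False: 10:00 is outside 10:06-10:12
print(detected(run_at, window, ingested))  # True: 10:10 is within the window
```

Without the timestamp override, the 10:12 run queries by @timestamp and misses the event; with event.ingested as the timestamp, the same run detects it.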

[Image: Timestamp override field]

Troubleshoot missing alerts for machine learning jobs

The prebuilt machine learning jobs have dependencies on data fields that are populated by Beats and Elastic Agent integrations. In version 7.11, new machine learning jobs (Security: Linux and Security: Windows) were provided, which operate on newer ECS fields than the previous Security: Winlogbeat and Security: Auditbeat jobs. However, the prebuilt rules were not updated to use the new machine learning jobs.


  • If you have only 7.10 or earlier versions of Beats, you can continue using the Security: Auditbeat and Security: Winlogbeat machine learning jobs and the prebuilt machine learning rules that have been in the Elastic Security app since version 7.5.
  • If you have only 7.11 or later versions of Beats, use the Security: Linux and Security: Windows machine learning jobs. If you want to generate alerts for anomalies in these jobs, make clones of the existing machine learning rules and update them to use the new jobs.
  • If you have a mix of old and new versions of Beats or a mix of Beats and Elastic Endpoint integrations, use both the old and new machine learning jobs. If you want alerts for anomalies in the new jobs, make clones of the existing machine learning rules and update them to use the new jobs.
  • If you have a non-Elastic data shipper that gathers ECS-compatible Windows events, use the Security: Windows machine learning jobs. If you want alerts for anomalies in these jobs, make clones of the existing machine learning rules and update them to use these jobs.

If you are cloning prebuilt machine learning rules to generate alerts for the new machine learning jobs, the following rules are affected: