To view a summary of all rule executions, such as failures and last execution times, select the Rule Monitoring tab in the All rules table (Security → Detections → Manage detection rules).
For detailed information about a rule, including its generated alerts and errors, click the rule's name in the All rules table.
Troubleshoot missing alerts
When a rule fails to run close to its scheduled time, some alerts may be missing. There are several steps you can take to try to resolve this issue.
If you see Gaps in the All rules table or on the Rule details page for a small number of rules, you can increase those rules' Additional look-back time (Detection rules page → the rule's All actions button (…) → Edit rule settings → Schedule → Additional look-back time).
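Rule schedules can also be updated programmatically through the Detections API, where the query window is expressed by the `interval` and `from` fields (`from` = now minus interval plus additional look-back time). The sketch below builds such an update payload; the rule ID and the 5-minute values are placeholders, not values from this document.

```python
# Minimal sketch: building the schedule fields for a rule update sent to
# the Detections API (PATCH /api/detection_engine/rules).
# "example-rule-id" and the minute values are placeholders.

def schedule_payload(rule_id: str, interval_min: int, look_back_min: int) -> dict:
    """Build schedule fields: `from` covers interval + additional look-back."""
    return {
        "rule_id": rule_id,
        "interval": f"{interval_min}m",
        "from": f"now-{interval_min + look_back_min}m",
    }

payload = schedule_payload("example-rule-id", 5, 5)
print(payload)
# Send with e.g. requests.patch(f"{kibana_url}/api/detection_engine/rules",
#     headers={"kbn-xsrf": "true"}, json=payload, auth=(user, password))
```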
If you see gaps for numerous rules:
- If you restarted Kibana when many rules were activated, try deactivating them and then reactivating them in small batches at staggered intervals. This ensures Kibana does not attempt to run all the rules at the same time.
- Consider adding another Kibana instance to your environment.
Even if your rule runs at its scheduled time, alerts can still be missed if your ingestion pipeline delay is greater than the rule's interval + additional look-back time. In Elastic Stack version >=7.11.0, prebuilt rules have a minimum interval + additional look-back time of 6 minutes, so to avoid missed alerts for prebuilt rules, keep your ingestion pipeline delay below 6 minutes.
Likewise, when creating custom rule schedules, ensure that the specified interval + additional look-back time is greater than your deployment's ingestion pipeline delay.
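The condition above can be expressed as a simple comparison: alerts may be missed whenever the ingestion pipeline delay exceeds the rule's query window (interval + additional look-back time). A minimal sketch, with a hypothetical helper name:

```python
from datetime import timedelta

def may_miss_alerts(interval: timedelta, look_back: timedelta,
                    ingestion_delay: timedelta) -> bool:
    """Return True when the ingestion pipeline delay exceeds the
    rule's query window (interval + additional look-back time)."""
    return ingestion_delay > interval + look_back

# Prebuilt rules have at least a 6-minute window (interval + look-back):
print(may_miss_alerts(timedelta(minutes=5), timedelta(minutes=1),
                      timedelta(minutes=9)))  # 9m delay exceeds the 6m window
```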
You can eliminate the risk of missed alerts due to ingestion pipeline delay by setting the Timestamp override field to event.ingested in the advanced settings during rule creation or editing. The detection engine then uses the value of the event.ingested field as the timestamp when executing the rule.
For example, suppose an event occurred at 10:01 but, due to a 9-minute ingestion pipeline delay, was not ingested into Elasticsearch until 10:10. A rule with an interval + additional look-back time of 6 minutes that executes at 10:12 (11 minutes after the event occurred) would still detect the event, because the event.ingested timestamp is only two minutes before the execution time, well within the rule's 6-minute query window.
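The arithmetic behind this example can be checked directly, comparing what the rule would see with and without the timestamp override:

```python
from datetime import datetime, timedelta

event_occurred = datetime(2021, 1, 1, 10, 1)   # original event timestamp
event_ingested = datetime(2021, 1, 1, 10, 10)  # after a 9-minute pipeline delay
rule_executes  = datetime(2021, 1, 1, 10, 12)
query_window   = timedelta(minutes=6)          # interval + additional look-back

# With the timestamp override, the rule queries against event.ingested,
# not against the original event timestamp.
detected_with_override = rule_executes - event_ingested <= query_window
detected_without_override = rule_executes - event_occurred <= query_window
print(detected_with_override, detected_without_override)  # True False
```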
Troubleshoot missing alerts for machine learning jobs
The prebuilt machine learning jobs have dependencies on data fields that are populated by Beats and Elastic Agent integrations. In version 7.11, new machine learning jobs (Security: Linux and Security: Windows) were provided, which operate on newer ECS fields than the previous Security: Winlogbeat and Security: Auditbeat jobs. However, the prebuilt rules were not updated to use the new machine learning jobs.
- If you have only 7.10 or earlier versions of Beats, you can continue using the Security: Auditbeat and Security: Winlogbeat machine learning jobs and the prebuilt machine learning rules that have been in the Elastic Security app since version 7.5.
- If you have only 7.11 or later versions of Beats, use the Security: Linux and Security: Windows machine learning jobs. If you want to generate alerts for anomalies in these jobs, make clones of the existing machine learning rules and update them to use the new jobs.
- If you have a mix of old and new versions of Beats, or a mix of Beats and Elastic Endpoint integrations, use both the old and new machine learning jobs. If you want alerts for anomalies in the new jobs, make clones of the existing machine learning rules and update them to use the new jobs.
- If you have a non-Elastic data shipper that gathers ECS-compatible Windows events, use the Security: Windows machine learning jobs. If you want alerts for anomalies in these jobs, make clones of the existing machine learning rules and update them to use these jobs.
If you are cloning prebuilt machine learning rules to generate alerts for the new machine learning jobs, the following rules are affected:
- Unusual Linux Network Port Activity
- Anomalous Process For a Linux Population
- Unusual Linux Username
- Unusual Linux Process Calling the Metadata Service
- Unusual Linux User Calling the Metadata Service
- Unusual Process For a Linux Host
- Unusual Process For a Windows Host
- Unusual Windows Network Activity
- Unusual Windows Path Activity
- Anomalous Windows Process Creation
- Anomalous Process For a Windows Population
- Unusual Windows Username
- Unusual Windows Process Calling the Metadata Service
- Unusual Windows User Calling the Metadata Service