Validate and test rules
Before enabling a new detection rule in production, validate that it detects what you intend, at a volume your team can handle, without generating excessive false positives. The steps below apply to any rule type.
Use the rule preview feature to test your rule's query against a historical time range before enabling it. This shows you what the rule would have detected without creating actual alerts.
While creating or editing a rule in the UI, open the rule preview and select a time range that represents normal activity in your environment. For example, a range of 7 to 14 days captures both weekday and weekend patterns.
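If you script your testing, a preview can also be requested through Kibana's rule preview endpoint (`POST /api/detection_engine/rules/preview`). The sketch below only assembles a request body; the rule query and index pattern are placeholders, and the field names (`invocationCount`, `timeframeEnd`) should be verified against the API docs for your Kibana version.

```python
import json
from datetime import datetime, timezone

# Hypothetical draft rule to preview -- replace the query and index
# pattern with your own.
rule = {
    "type": "query",
    "name": "Preview: suspicious sudo usage",
    "description": "Draft rule under validation",
    "risk_score": 47,
    "severity": "medium",
    "index": ["logs-*"],
    "query": 'process.name:"sudo" and event.outcome:"failure"',
    "from": "now-6m",
    "interval": "5m",
}

# Preview over the last 14 days: with a 5-minute interval, that is
# 14 * 24 * 12 = 4032 simulated rule executions ending now.
preview_request = {
    **rule,
    "invocationCount": 14 * 24 * 12,
    "timeframeEnd": datetime.now(timezone.utc).isoformat(),
}

print(json.dumps(preview_request, indent=2))
```

Previewing 14 days at a 5-minute interval is deliberately exhaustive; for a first pass, a shorter window with fewer invocations returns faster.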
When reviewing the rule preview results, look for:
- Expected true positives. Does the rule detect the activity it's designed to catch? If you have known-good test data (for example, from red team exercises), confirm those events appear in the results.
- Obvious false positives. Do any results represent legitimate activity? If so, refine the query or plan to add exceptions before enabling the rule.
- Missing detections. If the rule produces no results and you expected it to, check that the required data sources are being ingested and that your index patterns are correct.
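One quick way to rule out a data gap is to count recent events in the indices the rule reads from. This sketch builds an Elasticsearch `_count` request for a hypothetical index pattern and lookback window; send it with your own client, and substitute the indices your rule actually targets.

```python
import json

index_pattern = "logs-endpoint.events.*"  # assumption: use your rule's indices
lookback = "now-7d"

# Count documents in the rule's source indices over the test window.
# Zero hits means the rule has nothing to evaluate, regardless of its query.
count_body = {
    "query": {
        "range": {
            "@timestamp": {"gte": lookback, "lte": "now"}
        }
    }
}

print(f"GET {index_pattern}/_count")
print(json.dumps(count_body, indent=2))
```

If the count is zero, fix ingestion or the index pattern before spending time refining the rule query itself.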
If the rule uses alert suppression, use the rule preview to visualize how suppression affects the alert output. This helps you confirm that suppression is grouping events as expected before the rule goes live.
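For reference, suppression is configured on the rule itself. The fragment below sketches the `alert_suppression` object as it appears in the detection rule API; the field names (`group_by`, `duration`, `missing_fields_strategy`) reflect the rule schema, but verify them against your Kibana version before relying on them.

```python
import json

# Suppression settings to preview: collapse repeated alerts from the same
# host and user into one alert per 5-minute window.
alert_suppression = {
    "group_by": ["host.name", "user.name"],  # up to three fields
    "duration": {"value": 5, "unit": "m"},
    "missing_fields_strategy": "suppress",   # or "doNotSuppress"
}

print(json.dumps({"alert_suppression": alert_suppression}, indent=2))
```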
For rules that are already enabled, you can manually run them over a specific time range to test behavior against real data. Unlike preview, manual runs create actual alerts and trigger rule actions.
Manual runs are useful when:
- You want to test a rule against a specific incident window where you know what happened.
- You need to fill a gap in rule coverage after a rule was temporarily disabled or otherwise not running.
- You want to verify that a rule change produces the expected results in production.
Manual runs activate all configured rule actions, except "Summary of alerts" actions that run at a custom frequency. If you want to test without sending notifications, snooze the rule's actions first.
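Snoozing is easiest from the UI, but if you script your test setup, Kibana's alerting API also exposes mute endpoints with a similar effect (no actions fire while the rule is muted). The sketch below only assembles the requests; the Kibana URL and rule ID are placeholders, and the endpoints should be checked against your Kibana version.

```python
# Mute a rule's actions before a manual run, then unmute afterwards.
# KIBANA_URL and RULE_ID are placeholders for your environment.
KIBANA_URL = "https://kibana.example.com"
RULE_ID = "00000000-0000-0000-0000-000000000000"

# Kibana write requests require the kbn-xsrf header.
headers = {"kbn-xsrf": "true", "Content-Type": "application/json"}

mute_url = f"{KIBANA_URL}/api/alerting/rule/{RULE_ID}/_mute_all"
unmute_url = f"{KIBANA_URL}/api/alerting/rule/{RULE_ID}/_unmute_all"

print(f"POST {mute_url}")    # before the manual run
print(f"POST {unmute_url}")  # after reviewing the results
```

Remember to unmute: a rule left muted detects but never notifies.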
For more isolated testing, create a dedicated Kibana space for rule development. This lets you test rules with manual runs without affecting production alerts or triggering notifications to your team.
- Create a new space for testing.
- Copy or recreate the rule in the test space.
- Run manual tests and review alerts without impacting production workflows.
- Once validated, recreate or import the rule into your production space.
This approach is especially useful when testing rule changes that might generate high alert volumes or when multiple team members are developing rules simultaneously.
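The space workflow can also be scripted. The sketch below outlines the three API calls involved: creating a space (Spaces API), exporting validated rules from that space, and importing the exported `.ndjson` file into production. The space ID is a placeholder; note that requests scoped to a non-default space are prefixed with `/s/<space-id>`.

```python
import json

# 1. Create a dedicated test space (Spaces API).
space = {"id": "rule-dev", "name": "Rule development"}  # placeholder id/name
print("POST /api/spaces/space")
print(json.dumps(space, indent=2))

# 2. Once validated, export rules from the test space.
#    Requests in a space use the /s/<space-id> URL prefix.
export_path = f"/s/{space['id']}/api/detection_engine/rules/_export"
print(f"POST {export_path}")

# 3. Import the exported .ndjson file into the default (production) space.
print("POST /api/detection_engine/rules/_import")
```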
If you manage rules outside of the Kibana UI, you can use Detection-as-Code (DaC) workflows to test rules before deploying them. The Elastic Security Labs team maintains the detection-rules repo, which provides tooling for developing, testing, and releasing detection rules programmatically.
DaC workflows let you:
- Validate rule syntax and schema before deployment.
- Run unit tests against rule logic in a CI/CD pipeline.
- Track rule changes in version control for auditability.
To get started, refer to the DaC documentation. For managing rules through the API, refer to Using the API.
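As a minimal illustration of schema validation in a pipeline (the detection-rules repo ships far more thorough tooling), the sketch below checks that a rule document carries a handful of fields before you attempt to deploy it. The `REQUIRED` set is an assumption for illustration only; the real schema is richer and varies by rule type, so consult the API docs for the authoritative list.

```python
# Minimal pre-deployment sanity check for a rule document.
# REQUIRED is a hypothetical subset of the schema, for illustration.
REQUIRED = {"name", "description", "type", "risk_score", "severity"}

def missing_fields(rule: dict) -> set:
    """Return required fields absent from the rule document."""
    return REQUIRED - rule.keys()

draft = {
    "name": "Suspicious sudo usage",
    "description": "Draft rule under validation",
    "type": "query",
    "risk_score": 47,
    "severity": "medium",
    "query": 'process.name:"sudo"',
}

problems = missing_fields(draft)
print("OK" if not problems else f"missing: {sorted(problems)}")
```

A check like this runs in seconds in CI and fails the build before a malformed rule ever reaches a deployment step.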