At Elastic, we operate a large and diverse set of behavior detection rules across multiple datasets, environments, and severity levels. Most of these rules are atomic, each designed to detect a specific behavior, signal, or attack pattern. In addition, we ingest and promote external alerts from security integrations such as firewalls, EDR, WAF, and other security controls.
The result is powerful visibility but also significant alert volume. From our telemetry, even when considering only non-Building Block Rules, 65 unique detection rules generate nearly 8,000 alerts per day per production cluster. Analyzing each alert in isolation is neither scalable nor cost-effective.
This is where Higher-Order Rules come into play.
Higher-order rules do not detect a single behavior. Instead, they correlate related alerts over time, across data sources, or within a shared context (such as host, user, IP, or process). By grouping signals into meaningful patterns, we can prioritize what truly matters and reduce the need for deep, expensive analysis on every individual alert whether performed manually, automated, or augmented by AI.
In this blog, we’ll walk through our approach to building Higher-Order Rules in Elastic, share practical examples, and highlight key lessons learned along the way.
What Are Higher-Order Rules?
Higher-Order Rules (HOR) are detections that use alerts as input, either correlating alerts with other alerts (alert-on-alert) or combining alerts with additional data such as raw events, metrics, or contextual telemetry.
Unlike atomic rules that detect a single behavior, Higher-Order Rules identify patterns across signals. Their purpose is not to replace base detections, but to elevate combinations of findings that are more likely to represent real attack activity. In practice, they surface higher-confidence findings and improve triage prioritization. Higher-Order Rules are designed to work alongside Building Block Rules, which generate alerts that do not appear in the default alerts view, reducing noise while still feeding correlated detections. Many of the base rules referenced in this article can also be configured as building block rules, so that only Higher-Order correlations surface for analyst review.
The core insight is that independent detections converging on the same entity compound confidence: each additional signal multiplies the likelihood that the activity is real rather than benign. Three design principles operationalize that insight:
1. Entity-Based Correlation
Rules correlate activity by shared entities such as host, user, source IP, destination IP, or process, allowing analysts to quickly see when multiple findings converge on the same asset or identity.
2. Cross–Data Source Visibility
Some rules operate within a single integration (for example, endpoint-only detections from Elastic Defend or third-party EDR). Others intentionally combine signals across domains: endpoint with network (PANW, FortiGate, Suricata), endpoint with email, or endpoint with system metrics, to capture multi-stage or cross-surface activity.
3. Time and Prevalence Awareness
Temporal logic plays a key role.
Newly observed rules highlight the first occurrence of a given alert within a defined lookback window (for example, five days), ensuring that even a single rare alert is surfaced for review.
Prevalence-based logic (such as using INLINESTATS) filters for alerts that occur on only a small number of hosts globally, helping reduce noise and emphasize anomalous behavior.
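As a toy illustration of the principles above (plain Python over made-up alert data, not the actual ES|QL rules), entity convergence and global prevalence both reduce to small set aggregations:

```python
from collections import defaultdict

# Hypothetical alert stream: (host, rule_name) pairs. Names are illustrative.
alerts = [
    ("host-a", "Suspicious PowerShell"),
    ("host-a", "Credential Dumping"),
    ("host-a", "Unusual Network Beacon"),
    ("host-b", "Suspicious PowerShell"),
    ("host-c", "Suspicious PowerShell"),
]

def correlated_hosts(alerts, min_distinct_rules=3):
    """Principle 1: hosts where several distinct detections converge."""
    rules_by_host = defaultdict(set)
    for host, rule in alerts:
        rules_by_host[host].add(rule)
    return {h for h, rules in rules_by_host.items() if len(rules) >= min_distinct_rules}

def rare_rules(alerts, max_hosts=1):
    """Principle 3: prevalence — rules seen on only a few hosts globally."""
    hosts_by_rule = defaultdict(set)
    for host, rule in alerts:
        hosts_by_rule[rule].add(host)
    return {r for r, hosts in hosts_by_rule.items() if len(hosts) <= max_hosts}

# host-a accumulates three distinct rules; two rules fire on one host only.
print(correlated_hosts(alerts))
print(rare_rules(alerts))
```

In production these aggregations run as ES|QL STATS over alert indices; the sketch only shows why grouping by entity and counting distinct signals is cheap and effective.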
The full set of Higher-Order Rules spans endpoint-only correlations, cross-domain detections (endpoint + network, endpoint + email), lateral movement patterns (for example, alert_1 host.ip = alert_2 source.ip), ATT&CK-aligned groupings (single or multi-tactic activity), newly observed alerts, and alert-to-event correlation (such as alerts combined with abnormal CPU metrics). The following sections walk through representative examples from these categories.
Correlation and Newly Observed Higher-Order Rules
In practice, high-risk activity does not always look the same.
Sometimes compromise reveals itself through multiple converging signals. Other times, it appears as a single alert that has never been seen before.
To handle both realities, we organize our Higher-Order Rules into three complementary patterns:
- Correlation rules: multiple alerts or events linked to a shared entity (host, user, IP, or process).
- Newly observed rules: a single alert that is rare or first-seen within a defined time window.
- Hybrid patterns: combining correlation with first-seen logic, which can further elevate suspicion and surface particularly interesting activity.
Correlation rules raise confidence through signal density and diversity: when several independent detections point to the same entity, the likelihood of real malicious activity increases.
Newly observed rules address the opposite case, low volume but high novelty. They prioritize alerts based on rarity over time, ensuring that first-time or highly unusual detections are not overlooked simply because they occur once.
Together, these approaches form the foundation of an efficient and scalable triage strategy.
Let’s dive into examples and explore the differences, strengths, and trade-offs of each pattern.
Endpoint Alerts Correlation
A significant portion of real-world attack discovery comes from endpoint telemetry. It provides rich context: process activity, command lines, file behavior, and user actions, making it one of the most powerful detection sources.
At the same time, endpoint environments are dynamic. Legitimate software, admin tools, and third-party applications (and recently GenAI endpoint utilities 🥲) can generate high alert volume and false positives, requiring continuous tuning.
Higher-Order correlation helps address this by shifting the focus from individual alerts to multiple distinct signals on the same host or process, increasing confidence while reducing unnecessary investigation effort.
The following ES|QL query triggers when, within a 24-hour window on the same host, there are three or more unique Elastic Defend behavior rules, alerts from two or more different features (e.g. one shellcode_thread alert together with a behavior alert), or two or more distinct malicious file hashes:
from logs-endpoint.alerts-* metadata _id
| eval day = DATE_TRUNC(24 hours, @timestamp)
| where event.code in ("malicious_file", "memory_signature", "shellcode_thread", "behavior") and
agent.id is not null and not rule.name in ("Multi.EICAR.Not-a-virus")
| stats Esql.alerts_count = COUNT(*),
Esql.event_code_distinct_count = COUNT_DISTINCT(event.code),
Esql.rule_name_distinct_count = COUNT_DISTINCT(rule.name),
Esql.file_hash_distinct_count = COUNT_DISTINCT(file.hash.sha256),
Esql.process_entity_id_distinct_count = COUNT_DISTINCT(process.entity_id) by host.id, day
| where (Esql.event_code_distinct_count >= 2 or Esql.rule_name_distinct_count >= 3 or Esql.file_hash_distinct_count >= 2)
To further raise suspicion, we can also correlate Elastic Defend alerts that belong to the same process tree:
from logs-endpoint.alerts-*
| where event.code in ("malicious_file", "memory_signature", "shellcode_thread", "behavior") and
agent.id is not null and not rule.name in ("Multi.EICAR.Not-a-virus") and process.Ext.ancestry is not null
// aggregate alerts by process.Ext.ancestry and agent.id
| stats Esql.alerts_count = COUNT(*),
Esql.rule_name_distinct_count = COUNT_DISTINCT(rule.name),
Esql.event_code_distinct_count = COUNT_DISTINCT(event.code),
Esql.process_id_distinct_count = COUNT_DISTINCT(process.entity_id),
Esql.message_values = VALUES(message),
... by process.Ext.ancestry, agent.id
// filter for at least 3 unique process IDs and 2 or more alert types or rule names.
| where Esql.process_id_distinct_count >= 3 and (Esql.rule_name_distinct_count >= 2 or Esql.event_code_distinct_count >= 2)
// keep unique values
| stats Esql.alert_names = values(Esql.message_values),
Esql.alerts_process_cmdline_values = VALUES(Esql.process_command_line_values),
... by agent.id
| keep Esql.*, agent.id
Example of matches:
To complement our coverage, we also need to look for rare atomic alerts. The following ES|QL query is designed to run on a 10-minute schedule with a 5 or 7 day lookback window. The lookback aggregates all alerts by rule name over the full window to compute the first-seen time. The final filter (Esql.recent <= 10) ensures that only rules whose first-seen time falls within the current 10-minute execution window are surfaced, effectively detecting the moment a rule fires for the first time in the lookback period. This surfaces both rare false positives and stealthy behaviors that might otherwise be lost in volume:
from logs-endpoint.alerts-*
| WHERE event.code == "behavior" and rule.name is not null
| STATS Esql.alerts_count = count(*),
Esql.first_time_seen = MIN(@timestamp),
Esql.last_time_seen = MAX(@timestamp),
Esql.agents_distinct_count = COUNT_DISTINCT(agent.id),
Esql.process_executable = VALUES(process.executable),
Esql.process_parent_executable = VALUES(process.parent.executable),
Esql.process_command_line = VALUES(process.command_line),
Esql.process_hash_sha256 = VALUES(process.hash.sha256),
Esql.host_id_values = VALUES(host.id),
Esql.user_name = VALUES(user.name) by rule.name
// first time seen in the last 5 days - defined in the rule schedule's Additional look-back time
| eval Esql.recent = DATE_DIFF("minute", Esql.first_time_seen, now())
// first time seen is within 10m of the rule execution time
| where Esql.recent <= 10 and Esql.agents_distinct_count == 1 and Esql.alerts_count <= 10 and (Esql.last_time_seen == Esql.first_time_seen)
// Move single values to their corresponding ECS fields for alerts exclusion
| eval host.id = mv_min(Esql.host_id_values)
| keep host.id, rule.name, Esql.*
The same logic can be applied to external alerts from other third-party EDRs:
Endpoint with Network Alerts Correlation
A powerful detection approach is correlating endpoint alerts with network alerts. This helps answer the key question:
Which process triggered this network alert?
Network alerts alone often lack process context, such as which user or executable initiated the activity. By combining network alerts with endpoint telemetry (EDR data), you can enrich alerts with:
- Process name and hash
- Command line and parent process
- User and device information
The following query correlates any Elastic Defend alert with suspicious events from network security devices such as Palo Alto Networks (PANW) and Fortinet FortiGate. The join key is the IP address: for network alerts, this is source.ip; for endpoint alerts, it is host.ip. The query normalizes these into a single field using COALESCE, enabling correlation across data sources that use different field names for the same entity. A match may indicate that the host is compromised and triggering alerts across multiple data sources.
FROM logs-* metadata _id
| WHERE
(event.module == "endpoint" and event.dataset == "endpoint.alerts") or
(event.dataset == "panw.panos" and event.action in ("virus_detected", "wildfire_virus_detected", "c2_communication", ...)) or
(event.dataset == "fortinet_fortigate.log" and (...)) or
(event.dataset == "suricata.eve" and message in ("Command and Control Traffic", "Potentially Bad Traffic", ...))
| eval
fw_alert_source_ip = CASE(event.dataset in ("panw.panos", "fortinet_fortigate.log"), source.ip, null),
elastic_defend_alert_host_ip = CASE(event.module == "endpoint" and event.dataset == "endpoint.alerts", host.ip, null)
| eval Esql.source_ip = COALESCE(fw_alert_source_ip, elastic_defend_alert_host_ip)
| where Esql.source_ip is not null
| stats Esql.alerts_count = COUNT(*),
Esql.event_module_distinct_count = COUNT_DISTINCT(event.module),
Esql.message_values_distinct_count = COUNT_DISTINCT(message),
... by Esql.source_ip
| where Esql.event_module_distinct_count >= 2 AND Esql.message_values_distinct_count >= 2
| eval concat_module_values = MV_CONCAT(Esql.event_module_values, ",")
| where concat_module_values like "*endpoint*"
Example of matches correlating Elastic Defend and FortiGate alerts where the source.ip of the FortiGate alert is equal to the host.ip of the Elastic Defend endpoint alert:
The following EQL query correlates Suricata alerts with Elastic Defend network events to provide context about the source process and host:
sequence by source.port, source.ip, destination.ip with maxspan=5s
// Suricata severity 3 corresponds to informational alerts, which are excluded to reduce noise
[network where event.dataset == "suricata.eve" and event.kind == "alert" and event.severity != 3 and source.ip != null and destination.ip != null]
[network where event.module == "endpoint" and event.action in ("disconnect_received", "connection_attempted")]
Example of matches: the Suricata alert is linked to the target web server process (nginx) via Elastic Defend network events, confirming the web-exploitation attempt:
Endpoint Security with Observability
Correlating observability telemetry with security alerts is a powerful detection strategy.
The XZ Utils backdoor incident demonstrated that security-relevant anomalies may first surface as performance regressions rather than traditional security alerts. In that case, unusual behavior in the SSH daemon led to deeper investigation and eventual discovery of malicious code.
This highlights an important principle: operational anomalies can be early indicators of compromise.
With the Elastic Agent, system metrics such as CPU and memory utilization can be collected alongside security telemetry. By correlating abnormal resource spikes with SIEM alerts, either by process or by host, we can increase detection confidence and surface high-risk activity earlier.
For example, an ES|QL correlation rule can identify a process exhibiting sustained 70% CPU utilization that is also the source of a memory signature alert for a cryptominer from Elastic Defend. Individually, each signal may be low or medium severity. Correlated together, they represent high-confidence malicious activity.
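A back-of-the-envelope Python sketch of that correlation (illustrative data, field names, and thresholds; the production rule is expressed in ES|QL):

```python
# Toy join of security alerts with CPU telemetry, keyed by process entity.
alerts = [
    {"process": "p-123", "rule": "Memory Signature: Cryptominer"},
    {"process": "p-789", "rule": "Suspicious Behavior"},
]
cpu_samples = [
    {"process": "p-123", "cpu_pct": 85.0},
    {"process": "p-123", "cpu_pct": 78.0},
    {"process": "p-456", "cpu_pct": 95.0},  # busy, but no alert fired
    {"process": "p-789", "cpu_pct": 12.0},  # alerted, but idle
]

def high_confidence(alerts, cpu_samples, threshold=70.0):
    """Keep alerts whose process also averages CPU above the threshold."""
    totals, counts = {}, {}
    for s in cpu_samples:
        totals[s["process"]] = totals.get(s["process"], 0.0) + s["cpu_pct"]
        counts[s["process"]] = counts.get(s["process"], 0) + 1
    hot = {p for p in totals if totals[p] / counts[p] >= threshold}
    return [a for a in alerts if a["process"] in hot]

# Only the cryptominer alert survives: its process sustains high CPU,
# while the high-CPU process without an alert and the idle alerted
# process are both filtered out.
print(high_confidence(alerts, cpu_samples))
```

Neither signal alone is conclusive; the intersection of the two sets is what elevates severity.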
We developed over 30 Higher-Order detections covering various types of relationships. While we can’t cover all of them here, the links below provide enough context to adapt these rules to your environment:
Endpoint Alerts:
Multiple Elastic Defend Alerts by Agent
Multiple Elastic Defend Alerts from a Single Process Tree
Multiple Rare Elastic Defend Behavior Rules by Host
Newly Observed Elastic Defend Behavior Alert
Multiple External EDR Alerts by Host
Endpoint and Network:
Newly Observed Palo Alto Network Alert
Newly Observed High Severity Suricata Alert
FortiGate SOCKS Traffic from an Unusual Process
PANW and Elastic Defend - Command and Control Correlation
Elastic Defend and Network Security Alerts Correlation
Suricata and Elastic Defend Network Correlation
Generic by MITRE ATT&CK:
Alerts in Different ATT&CK Tactics by Host
Multiple Alerts in Same ATT&CK Tactic by Host
Generic multi-integrations correlation:
Alerts From Multiple Integrations by Source Address
Alerts From Multiple Integrations by Destination Address
Alerts From Multiple Integrations by User Name
Newly Observed High Severity Detection Alert
Lateral movement correlation:
Suspected Lateral Movement from Compromised Host
Lateral Movement Alerts from a Newly Observed Source Address
Lateral Movement Alerts from a Newly Observed User
Observability and security correlation:
Detection Alert on a Process Exhibiting CPU Spike
Multiple Alerts on a Host Exhibiting CPU Spike
Newly Observed Process Exhibiting High CPU Usage
Machine Learning correlation:
Multiple Machine Learning Alerts by Influencer Field
Other correlation ideas:
Multiple Vulnerabilities by Asset via Wiz
Elastic Defend and Email Alerts Correlation
Suspicious Kerberos Authentication Ticket Request
Multiple Cloud Secrets Accessed by Source Address
These examples illustrate how correlating alerts across endpoints, network, and observability can enrich context, accelerate investigations, and improve detection confidence. We are actively expanding coverage in this area to support additional correlation scenarios.
You can enable them by filtering for the tag value Rule Type: Higher-Order Rule in the rules management page:
Over a 15-day period, alert counts remained within acceptable volume (~30 alerts/day). Targeted tuning of initial outliers is expected to reduce them to ~20 alerts/day and materially improve overall signal quality.
Considerations and Trade-offs
Higher-Order Rules introduce potential scheduling latency. Since they query alert indices, there is an inherent delay between when base alerts fire and when correlations surface. Rule scheduling intervals and lookback windows should be tuned to balance timeliness against performance cost. Additionally, HOR quality depends directly on the quality of the base detections. A noisy atomic rule will cascade false positives into every correlation that references it. We recommend tuning base rules aggressively before enabling dependent Higher-Order Rules. Finally, ES|QL queries over broad index patterns (e.g. logs-*) can be expensive at scale. In high-volume environments, scoping index patterns to specific datasets or using data views can significantly reduce query cost.
Conclusion
Higher-Order Rules are essential for prioritizing alert triage and managing alert volumes for automation and AI-driven analysis. When combined with Entity Risk Scoring, Higher-Order Rules can feed directly into host and user risk profiles, creating a quantitative prioritization layer that further reduces manual triage burden. In our production tests, the majority of these detections produced a medium to low alert volume, making them practical for real-world use. While a small number of noisy rules or false positives may initially surface, excluding these at the atomic rule level quickly leaves a robust set of high-value correlations.
To maximize their effectiveness, two operational practices are critical. First, ensure that input alerts use severity levels that accurately reflect both noise and real-world impact; cleaning and normalizing severity is foundational to meaningful correlation. Second, start small and expand deliberately: avoid trying to correlate every possible alert signal. Exclude inherently noisy tactics (such as discovery), deprioritize low-severity signals, and deprecate rules that disproportionately influence correlation outcomes.
Applied correctly, Higher-Order Rules streamline investigations, improve detection accuracy, and significantly increase the efficiency and trustworthiness of modern security operations.
