
Triage Attack Discovery findings

Learn how to systematically assess open Attack Discovery findings, determine which ones warrant a case, and process them. Following a repeatable triage workflow helps you focus on genuine threats, reduce alert fatigue, and shorten your mean time to respond.

Each Attack Discovery finding groups related alerts into a single attack narrative. Rather than investigating each alert individually, you assess the attack as a unit—evaluating confidence based on alert diversity, detection rule quality, and entity risk context—then decide whether to create a case, investigate further, or acknowledge and move on.

Before you start, make sure you meet the requirements for using Attack Discovery.

Tip

For richer triage context, enable entity analytics. This helps you assess whether the users and hosts in a discovery are already known to be high risk, which can strengthen your assessment. Entity analytics isn't required for triage, but it can improve decision quality.

Start by retrieving all open findings and prioritizing them by risk score. This gives you a ranked list of potential attacks to work through, starting with the most critical.

  1. Go to Attack Discovery from the Elastic Security navigation menu.
  2. Use the Status filter to show only Open findings.
  3. Sort by risk score (highest first) to prioritize the most critical findings.

For each finding, note the following key signals:

  • Risk score: The overall severity assigned to the discovery.
  • Alert count: How many underlying security alerts the discovery groups together.
  • MITRE ATT&CK tactics: Which tactics the discovery maps to—more tactics suggest a broader attack.
  • Entities: Which users and hosts are involved.

You can run ES|QL queries in multiple ways, including from Discover. The following query retrieves open findings from both scheduled and on-demand discovery indices. Replace default with your Kibana space ID if you're using a non-default space:

FROM .alerts-security.attack.discovery.alerts-default, .adhoc.alerts-security.attack.discovery.alerts-default METADATA _id
| WHERE kibana.alert.workflow_status == "open"
  AND @timestamp >= NOW() - 1 day
| KEEP @timestamp, _id,
       kibana.alert.attack_discovery.title_with_replacements,
       kibana.alert.attack_discovery.summary_markdown_with_replacements,
       kibana.alert.attack_discovery.mitre_attack_tactics,
       kibana.alert.attack_discovery.alert_ids,
       kibana.alert.attack_discovery.alerts_context_count,
       kibana.alert.risk_score
| SORT kibana.alert.risk_score DESC, @timestamp DESC
| LIMIT 50

If one index doesn't exist yet (for example, no scheduled discoveries have been generated), ES|QL returns an error. In that case, query each index separately and combine the results.
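If you do query the indices separately, you can merge and re-rank the results yourself. The following sketch applies the same sort and limit as the combined ES|QL query above; the row dictionaries are illustrative stand-ins for real query results, keyed by the same field names used in the KEEP clause.

```python
def merge_findings(scheduled_rows, adhoc_rows, limit=50):
    """Combine two result sets, then sort by risk score (descending)
    and timestamp (descending), mirroring the SORT and LIMIT clauses."""
    combined = scheduled_rows + adhoc_rows
    combined.sort(
        key=lambda row: (row["kibana.alert.risk_score"], row["@timestamp"]),
        reverse=True,
    )
    return combined[:limit]

# Illustrative rows from each index
scheduled = [{"_id": "a", "kibana.alert.risk_score": 73,
              "@timestamp": "2025-01-02T10:00:00Z"}]
adhoc = [{"_id": "b", "kibana.alert.risk_score": 91,
          "@timestamp": "2025-01-02T09:00:00Z"}]
merged = merge_findings(scheduled, adhoc)
```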

Use the Attack Discovery Find API to retrieve open findings. Results are sorted by @timestamp (most recent first) by default:

GET /api/attack_discovery/_find?status=open&start=now-24h&end=now&with_replacements=true&per_page=50

If you're using a non-default Kibana space, prefix the path with /s/{space_id}:

GET /s/my-space/api/attack_discovery/_find?status=open&start=now-24h&end=now&with_replacements=true&per_page=50

Review the returned findings and prioritize by risk_score in the response.

Before moving to Step 2, scan the results for duplicate findings. Overlapping schedule runs or repeated manual generations can produce similar discoveries covering the same alerts. Compare the alert_ids across findings—if two findings share most of their alerts, triage them together as one.
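The overlap check can be automated. This minimal sketch computes the Jaccard similarity of each pair of findings' alert_ids and flags pairs that share most of their alerts; the 0.5 threshold is an illustrative choice, not a product default.

```python
def overlap(ids_a, ids_b):
    """Jaccard similarity of two alert ID lists."""
    a, b = set(ids_a), set(ids_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def duplicate_pairs(findings, threshold=0.5):
    """Return pairs of finding IDs whose alerts overlap at or above
    the threshold, so they can be triaged together as one."""
    pairs = []
    for i, f1 in enumerate(findings):
        for f2 in findings[i + 1:]:
            if overlap(f1["alert_ids"], f2["alert_ids"]) >= threshold:
                pairs.append((f1["_id"], f2["_id"]))
    return pairs
```

For example, two findings sharing two of their four combined alerts reach exactly the 0.5 threshold and are reported as a pair.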

For each finding, evaluate three signals to determine whether it warrants a case, further investigation, or acknowledgment.

Signal 1—Alert diversity: How many alerts does the finding contain, and are they from different detection rules? A single alert from one rule provides minimal corroboration. Multiple alerts from distinct rules across different MITRE ATT&CK tactics provide strong corroboration.

Signal 2—Rule frequency: How often do the associated detection rules fire in your environment? Rules that rarely fire and affect few hosts carry more signal. Rules that fire dozens of times per week across many hosts are likely noisy and might need tuning.
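Rule frequency can be bucketed from the 7-day firing stats gathered later in this page (alert count and distinct host count per rule). The cutoffs below are illustrative and should be tuned to your environment's baseline.

```python
def rule_frequency(alert_count, host_count):
    """Bucket a detection rule's 7-day activity as 'infrequent',
    'moderate', or 'very frequent'. Thresholds are illustrative."""
    if alert_count <= 5 and host_count <= 3:
        return "infrequent"   # rare and narrow: carries more signal
    if alert_count <= 25:
        return "moderate"
    return "very frequent"    # likely noisy; candidate for tuning
```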

Signal 3—Entity risk (if entity analytics is enabled): What are the risk scores and asset criticality levels for the involved users and hosts? A finding involving a critical-risk entity on a high-value asset deserves more attention than one involving an unknown entity with no prior activity.

Use these signals together to assign an overall confidence level, then take appropriate action:

  • High confidence: Multiple alerts from diverse rules, low rule frequency, and high entity risk. Recommended action: create a case and investigate.
  • Moderate confidence: Some corroboration but mixed signals (for example, diverse alerts but noisy rules, or low alert diversity but high entity risk). Recommended action: investigate further before deciding.
  • Low confidence: A single alert or single rule, high rule frequency, and low or unknown entity risk. Recommended action: acknowledge and move on.

Combine your three signal scores to estimate confidence:

  • High alert diversity, infrequent rules, and critical or high entity risk: High confidence.
  • Medium alert diversity, moderate rule frequency, and moderate entity risk: Moderate confidence.
  • Low alert diversity, moderate rule frequency, and low or unknown entity risk: Low confidence.
  • Any alert diversity with very frequent rules, regardless of entity risk: Low confidence.
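The mapping above can be sketched as a small function. It uses the same vocabulary as the signal descriptions; any combination not explicitly listed falls back to "moderate" so that a human still reviews it.

```python
def confidence(alert_diversity, rule_frequency, entity_risk):
    """Combine the three triage signals into a confidence level.
    Inputs use the vocabulary from the signal descriptions above."""
    # Very frequent rules are likely noisy regardless of other signals.
    if rule_frequency == "very frequent":
        return "low"
    if (alert_diversity == "high" and rule_frequency == "infrequent"
            and entity_risk in ("critical", "high")):
        return "high"
    if alert_diversity == "low" and entity_risk in ("low", "unknown"):
        return "low"
    # Mixed signals: investigate further before deciding.
    return "moderate"
```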

The following subsections explain how to gather each signal.

Click an entity's name in the finding to open the entity details flyout. Review the entity's risk score, asset criticality, and recent activity. Repeat for each user and host mentioned in the finding.

Query the risk score index for the entities mentioned in the discovery. Replace the entity names with the actual hostnames or usernames from the finding:

FROM risk-score.risk-score-latest-default
| WHERE host.name IN ("dc-prod-01", "ws-dev-12")
    OR user.name IN ("admin-jsmith", "svc-backup")
| KEEP host.name, user.name, host.risk.calculated_level, user.risk.calculated_level,
       host.risk.calculated_score_norm, user.risk.calculated_score_norm

Each risk score document represents a single entity type, so host columns are null for user rows and vice versa.
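When processing the results programmatically, it helps to collapse the mixed host/user rows into one uniform entity list. This sketch assumes rows are dictionaries keyed by the field names in the KEEP clause above.

```python
def normalize_risk_rows(rows):
    """Flatten mixed host/user risk rows into one list of entities,
    since each risk score document carries only one entity type."""
    entities = []
    for row in rows:
        if row.get("host.name"):
            entities.append({
                "entity": row["host.name"],
                "type": "host",
                "level": row.get("host.risk.calculated_level"),
                "score": row.get("host.risk.calculated_score_norm"),
            })
        elif row.get("user.name"):
            entities.append({
                "entity": row["user.name"],
                "type": "user",
                "level": row.get("user.risk.calculated_level"),
                "score": row.get("user.risk.calculated_score_norm"),
            })
    return entities
```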

Tip

If entity analytics isn't enabled, skip this signal and rely more heavily on alert diversity and rule frequency.

Expand the finding to view its associated alerts. For each alert, note:

  • The detection rule that generated it.
  • The alert severity.
  • Whether the same rule has fired on other hosts or users recently.

Filter the Alerts page by rule name to check how often these rules fire in your environment.

Query the security alerts index using the alert IDs from the discovery. Replace the alert IDs with the actual values from the finding's alert_ids field:

FROM .alerts-security.alerts-default METADATA _id
| WHERE _id IN ("alert-id-1", "alert-id-2", "alert-id-3", "alert-id-4")
| KEEP @timestamp, _id, kibana.alert.rule.name, kibana.alert.severity,
       host.name, user.name, kibana.alert.rule.rule_id
| SORT @timestamp DESC

To assess rule frequency, check how often the associated rules have fired recently:

FROM .alerts-security.alerts-default
| WHERE kibana.alert.rule.name IN ("LSASS Memory Access", "Credential Dumping Detected")
  AND @timestamp >= NOW() - 7 days
| STATS alert_count = COUNT(*), host_count = COUNT_DISTINCT(host.name)
    BY kibana.alert.rule.name

Read the LLM-generated summary and details critically. Consider:

  • Does the narrative make sense given the underlying alerts?
  • Are the MITRE ATT&CK tactics plausible for the described attack chain?
  • Are the entities and their described actions consistent with what you know about your environment?

Important

Attack Discovery uses LLM-generated analysis. Treat each discovery as a hypothesis, not a confirmed incident. The narrative is valuable context, but it requires validation before you act on it.

After assessing confidence for your open findings, take the appropriate action for each one.

For findings you've assessed as high confidence, create a case and attach the relevant context:

  1. Click Take action, then select Add to new case or Add to existing case.
  2. Include the discovery's summary and associated alerts in the case description. The LLM-generated narrative provides valuable context for analysts who pick up the case.
  3. Set an appropriate severity on the case based on the finding's risk score and your confidence assessment.
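One way to pick that severity is to map the finding's risk score onto the severity values the Cases API accepts ("low", "medium", "high", "critical"). The score thresholds below are illustrative, not a product default; adjust them to your own confidence assessment.

```python
def case_severity(risk_score):
    """Map a finding's risk score to a Cases API severity value.
    Thresholds are illustrative starting points."""
    if risk_score >= 90:
        return "critical"
    if risk_score >= 70:
        return "high"
    if risk_score >= 40:
        return "medium"
    return "low"
```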

If you identified findings using ES|QL queries, you can create cases through the Attack Discovery UI or the Cases API. Use the discovery IDs or alert IDs from your query results to locate the findings in the UI, or pass them directly to the API.

Use the Kibana Cases API to create a case, then attach the discovery's alert IDs:

POST /api/cases
{
  "title": "AD: <discovery title>",
  "description": "<discovery summary from the finding>",
  "owner": "securitySolution",
  "tags": ["attack-discovery"],
  "severity": "high",
  "connector": { "id": "none", "name": "none", "type": ".none", "fields": null },
  "settings": { "syncAlerts": true }
}

After creating the case, attach the discovery's alerts to it using the alert IDs from the finding.

For more on case management, refer to Cases.

Note

Before creating a case, check whether an existing case already covers the same alerts. Overlapping discoveries can lead to duplicate cases if you don't verify first.

For findings where you need more context before deciding:

  • Click Investigate in timeline to explore the discovery's alerts in Timeline. This lets you view process trees, network connections, and file events associated with the alerts.
  • Click View in AI Assistant or Add to chat to ask follow-up questions about the finding. For example, ask the assistant to explain the relationship between the alerts or suggest next investigation steps.

After investigating, either create a case (if the finding is confirmed) or acknowledge it (if it turns out to be benign).

For findings that don't warrant further action:

  • Individual findings: Click Take action, then select Mark as acknowledged or Mark as closed.
  • Bulk actions: Select the checkboxes next to multiple findings, click Selected x Attack discoveries, and choose the status change.

When you change a finding's status, you can choose to change the status of only the discovery, or of both the discovery and its associated alerts.

Use the bulk API to update the status of multiple findings at once. Replace the discovery IDs with the actual _id values from Step 1:

POST /api/attack_discovery/_bulk
{
  "update": {
    "ids": ["discovery-id-1", "discovery-id-2", "discovery-id-3"],
    "kibana_alert_workflow_status": "acknowledged"
  }
}
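When acknowledging many findings, it can help to build the bulk request bodies in batches rather than one oversized request. This sketch only constructs payloads matching the shape of the _bulk example above; sending them with an HTTP client is left out. The batch size of 100 is an illustrative choice.

```python
def bulk_payloads(discovery_ids, status="acknowledged", batch_size=100):
    """Split discovery IDs into batched _bulk request bodies."""
    payloads = []
    for i in range(0, len(discovery_ids), batch_size):
        payloads.append({
            "update": {
                "ids": discovery_ids[i:i + batch_size],
                "kibana_alert_workflow_status": status,
            }
        })
    return payloads
```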