Triage a security alert into a case

This guide walks through building a workflow that turns a raw security alert into a triaged case. The workflow fires when a detection rule matches, enriches the alert with threat intel, opens a case with the alert and its indicators attached, isolates the affected host, and notifies the on-call analyst in Slack.

The workflow is adapted from traditional-triage.yaml in the elastic/workflows library.

If you're new to workflows, complete Build your first workflow first for a walkthrough of the YAML editor and how to run a workflow.

  • Permissions. All privileges for Analytics > Workflows, plus All on Security > Cases in the target space. Refer to Kibana privileges.
  • Detection rule. An enabled detection rule that generates the kind of alert you want to triage. For this workflow, the rule should produce alerts with file.hash.sha256, host.name, and elastic.agent.id populated.
  • Attach the workflow to the rule. After you save the workflow, attach it to the detection rule so the rule invokes the workflow when it fires. Refer to Alert triggers.
  • Connectors. A configured VirusTotal connector for the hash lookup, and a Slack connector for the notification. Note the connector IDs. You'll paste them into the workflow.
  • Host isolation capability. The affected host must run Elastic Defend for the isolation step to succeed.

The workflow runs in a single pass when an alert arrives:

  1. An alert trigger fires when the detection rule matches.
  2. A VirusTotal lookup enriches the alert with a reputation score.
  3. An if step branches on the reputation score. If the file is confirmed malicious, the workflow opens a case, attaches the alert and observables, isolates the host, and notifies Slack. Otherwise, it closes the alert as a false positive.
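Before wiring up each step, it helps to see the overall shape of the finished workflow. The sketch below elides the step bodies, which the steps that follow fill in; the top-level name key is an assumption, so check the workflow schema reference for the exact top-level fields:

```yaml
name: alert-triage
triggers:
  - type: alert
steps:
  - name: lookup_reputation
    type: virustotal.scanFileHash
    # Enrichment call, configured in step 2.
  - name: handle_malicious_file
    type: if
    condition: "steps.lookup_reputation.output.stats.malicious > 10"
    steps:
      # create_case, attach_alert, attach_observables,
      # isolate_host, and notify_slack go here (steps 4-7).
    else:
      # close_false_positive (step 3).
```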

  1. Configure the alert trigger

    The workflow fires every time the attached detection rule generates an alert. Inside the workflow, the alert payload is available as event.alerts[0].

    triggers:
      - type: alert

    After you save the workflow, open the detection rule's Actions tab and attach this workflow so the rule invokes it.

  2. Enrich the alert with threat intel

    Call the VirusTotal connector to score the file hash. Wrap the call in an on-failure block with retry and continue so a transient VirusTotal outage doesn't fail the whole workflow.

    - name: lookup_reputation
      type: virustotal.scanFileHash
      connector-id: "my-virustotal"
      on-failure:
        retry:
          max-attempts: 3
          delay: "5s"
          strategy: exponential
          max-delay: "30s"
        continue: true
      with:
        hash: "{{ event.alerts[0].file.hash.sha256 }}"

    The output lives at steps.lookup_reputation.output. Use steps.lookup_reputation.output.stats.malicious to decide what to do next.

  3. Branch on the reputation result

    Most of the workflow only runs when the file is confirmed malicious. Wrap the case, isolation, and notification steps in an if step:

    - name: handle_malicious_file
      type: if
      condition: "steps.lookup_reputation.output.stats.malicious > 10"
      steps:
        # Case creation, host isolation, and Slack notification go here.
      else:
        - name: close_false_positive
          type: kibana.SetAlertsStatus
          with:
            status: closed
            reason: false_positive
            signal_ids:
              - "{{ event.alerts[0]._id }}"

    The else branch closes the alert as a false positive using kibana.SetAlertsStatus.

  4. Open a case with the alert context

    Inside the if branch, create the case with cases.createCase. Fill the title and description from the alert payload:

    - name: create_case
      type: cases.createCase
      with:
        title: "Malware detected: {{ event.alerts[0].file.hash.sha256 }}"
        description: |
          Auto-created from detection rule `{{ event.rule.name }}`.
    
          VirusTotal malicious engines: {{ steps.lookup_reputation.output.stats.malicious | default: "n/a" }}
        owner: "securitySolution"
        severity: "high"
        tags: ["auto-triage", "malware"]

    The title, description, and owner fields are required, and owner must be one of securitySolution, observability, or cases.

  5. Attach the alert and observables to the case

    Link the alert that triggered the workflow with cases.addAlerts, then attach the file hash and source IP as observables with cases.addObservables:

    - name: attach_alert
      type: cases.addAlerts
      with:
        case_id: "{{ steps.create_case.output.id }}"
        alerts:
          - alertId: "{{ event.alerts[0]._id }}"
            index: "{{ event.alerts[0]._index }}"
            rule:
              id: "{{ event.rule.id }}"
              name: "{{ event.rule.name }}"
    
    - name: attach_observables
      type: cases.addObservables
      with:
        case_id: "{{ steps.create_case.output.id }}"
        observables:
          - typeKey: "observable-type-hash-sha256"
            value: "{{ event.alerts[0].file.hash.sha256 }}"
          - typeKey: "observable-type-ipv4"
            value: "{{ event.alerts[0].source.ip }}"
            description: "Source of the malicious activity"

    Observable typeKey values must match the built-in observable types. Refer to cases.addObservables for the full list.

  6. Isolate the affected host

    Call the endpoint isolation API with kibana.request. Link the isolation action to the case and alert so the audit trail is complete:

    - name: isolate_host
      type: kibana.request
      with:
        method: POST
        path: /api/endpoint/action/isolate
        body:
          endpoint_ids:
            - "{{ event.alerts[0].elastic.agent.id }}"
          comment: "Automated isolation: case {{ steps.create_case.output.id }}"
          case_ids:
            - "{{ steps.create_case.output.id }}"
          alert_ids:
            - "{{ event.alerts[0]._id }}"

  7. Notify the on-call analyst

    Post a rich message to the SOC Slack channel with links to the case and the VirusTotal report. Use the {{kibanaUrl}} context variable for the case deep link:

    - name: notify_slack
      type: http
      with:
        url: https://slack.com/api/chat.postMessage
        method: POST
        headers:
          Content-Type: application/json; charset=utf-8
          Authorization: "Bearer {{ consts.slack_token }}"
        body:
          channel: "#soc-oncall"
          text: "Malware detected on {{ event.alerts[0].host.name }}"
          blocks: >-
            [{"type":"section","text":{"type":"mrkdwn","text":"*Malicious file on {{ event.alerts[0].host.name }}*\nHash: `{{ event.alerts[0].file.hash.sha256 }}`\nMalicious engines: {{ steps.lookup_reputation.output.stats.malicious }}"}},
             {"type":"actions","elements":[{"type":"button","text":{"type":"plain_text","text":"View case"},"url":"{{ kibanaUrl }}/app/security/cases/{{ steps.create_case.output.id }}"}]}]
        timeout: 30s

    Store the Slack bot token in a consts block so you can swap environments without editing step bodies.
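    For example, a consts block at the top of the workflow definition might look like this (the token value is a placeholder, and consts as a top-level key is an assumption; check the workflow schema reference):

```yaml
consts:
  # Placeholder: substitute the bot token for the target environment.
  slack_token: "xoxb-REPLACE-ME"
```

    The notify_slack step then reads it as {{ consts.slack_token }}, so rotating the token per environment never touches the step body.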

  • Add historical context. Before opening the case, run an elasticsearch.esql.query to count how many times the hash appears across your logs. Attach the count to the case with cases.addComment.
  • Route by severity. Replace the single if branch with a switch step that opens cases of different severities based on the malicious-engine count.
  • Enrich with an AI summary. Add an ai.summarize step after attach_observables to produce a triage summary, then append it to the case with cases.addComment.
  • Assign the case. Query your on-call schedule and use cases.assignCase to assign the case to the current on-call analyst.
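As a sketch of the historical-context idea, a prevalence query could run before create_case. The with.query parameter shape is an assumption, and the logs-* index pattern is a placeholder; adjust both to your setup:

```yaml
- name: count_hash_sightings
  type: elasticsearch.esql.query
  with:
    # Count prior sightings of the hash across log indices.
    query: |
      FROM logs-*
      | WHERE file.hash.sha256 == "{{ event.alerts[0].file.hash.sha256 }}"
      | STATS sightings = COUNT(*)
```

Appending steps.count_hash_sightings.output to the case with cases.addComment gives the analyst immediate prevalence context.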