Classify and route mixed items with AI

This guide walks through building a workflow that takes a stream of mixed items (alerts, tickets, log entries) and routes each one down a different branch based on an AI classification. The workflow pairs the ai.classify step with foreach and if or switch, so each item gets exactly the handling it needs.

The workflow is adapted from ai-steps-demo.yaml in the elastic/workflows library.

If you're new to workflows, complete Build your first workflow first.

To follow along, you need:

  • Permissions. All privileges for the Analytics > Workflows feature. Refer to Kibana privileges.
  • AI connector. A configured LLM connector (Azure OpenAI, OpenAI, Anthropic, or Bedrock). Refer to Connectors. Note the connector ID.
  • A set of items to classify. For this walkthrough, the workflow generates sample items with ai.prompt. In production, you'd read items from an alert trigger (event.alerts), an Elasticsearch search, or an upstream workflow.

The workflow runs manually during development and can be switched to an alert trigger once you're happy with the routing:

  1. Gather items. For the demo, two ai.prompt steps produce a mix of sample observability and security alerts. In production, replace this with your real data source.
  2. Iterate with foreach. Each item is processed independently.
  3. Classify with ai.classify. The step returns the category (for example, observability alert or security alert) and an optional rationale.
  4. Route with if (or switch). Each branch runs the right follow-up: severity classification for observability, malicious-or-not classification for security.
  5. Summarize with ai.summarize. The summary is attached to the routed item.
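
Put together, the workflow has this overall shape. The skeleton below uses only the steps built in the rest of this guide, with step bodies elided:

```yaml
# High-level skeleton of the routing workflow.
# Each "..." is filled in by a later section of this guide.
consts:
  llm_connector: "your-connector-id"

triggers:
  - type: manual

steps:
  - name: gather_observability_items
    type: ai.prompt
    # ...
  - name: gather_security_items
    type: ai.prompt
    # ...
  - name: route_each_item
    type: foreach
    foreach: "${{ steps.gather_observability_items.output.content | concat: steps.gather_security_items.output.content }}"
    steps:
      - name: identify_type
        type: ai.classify
        # ...
      - name: handle_observability
        type: if
        # ...
      - name: handle_security
        type: if
        # ...
      - name: summarize_item
        type: ai.summarize
        # ...
```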
Build the workflow step by step:

  1. Declare the AI connector as a constant

    Hold the connector ID in a constant so you can swap environments without touching step bodies:

    consts:
      llm_connector: "your-connector-id"
    
    triggers:
      - type: manual
    		
  2. Gather items to classify

    For development, generate a mix of sample items with two ai.prompt calls. Each call uses a JSON schema so the output is strongly typed and iterable:

    steps:
      - name: gather_observability_items
        type: ai.prompt
        connector-id: "{{ consts.llm_connector }}"
        with:
          prompt: "Generate two sample observability alerts."
          schema:
            items:
              type: object
              required: [id, severity, message]
              properties:
                id: { type: string }
                severity: { type: string, enum: [critical, high, medium, low] }
                message: { type: string }
    
      - name: gather_security_items
        type: ai.prompt
        connector-id: "{{ consts.llm_connector }}"
        with:
          prompt: "Generate three sample security alerts."
          schema:
            items:
              type: object
              required: [id, severity, category]
              properties:
                id: { type: string }
                severity: { type: string, enum: [critical, high, medium, low] }
                category: { type: string }
    		

    In a production workflow, replace these two steps with a real data source. For example, read event.alerts from an alert trigger or run an elasticsearch.search step.
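
    As a sketch of what the production replacement might look like, the following reads recent documents from an index with an elasticsearch.search step. The index pattern and query here are hypothetical, and the exact parameter names the step accepts may differ — check the elasticsearch.search step reference:

    ```yaml
    # Hypothetical production data source: read recent items from an index.
    # Adjust the index pattern and query to your data; verify the
    # parameter names against the elasticsearch.search step reference.
    - name: gather_items
      type: elasticsearch.search
      with:
        index: "alerts-*"          # hypothetical index pattern
        query:
          range:
            "@timestamp":
              gte: "now-15m"       # only items from the last 15 minutes
    ```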

  3. Loop through the combined stream

    Concatenate the two sample arrays and loop over the combined stream. Use ${{ ... }} when passing arrays so they aren't stringified:

    - name: route_each_item
      type: foreach
      foreach: "${{ steps.gather_observability_items.output.content | concat: steps.gather_security_items.output.content }}"
      steps:
        # Classification and branching steps go here. Use foreach.item.
    		
  4. Classify each item

    Call ai.classify with the categories you want to route on. Set includeRationale: true during development so you can see why the model picked a category. Turn it off in production for lower token cost:

    - name: identify_type
      type: ai.classify
      connector-id: "{{ consts.llm_connector }}"
      with:
        input: "${{ foreach.item }}"
        includeRationale: true
        categories:
          - "security alert"
          - "observability alert"
        fallbackCategory: "other"
    		

    The category ends up at steps.identify_type.output.category.
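
    With includeRationale: true, the output for one item looks roughly like the following. The shape is inferred from the fields discussed above; the rationale text is an illustrative sample:

    ```yaml
    # Approximate shape of steps.identify_type.output for one item.
    category: "observability alert"
    rationale: "The item describes a latency threshold breach, which is an observability concern."
    ```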

  5. Branch on the classification

    Use if steps for two branches, or switch for three or more. The following pattern uses if for clarity:

    - name: handle_observability
      type: if
      condition: "steps.identify_type.output.category : 'observability alert'"
      steps:
        - name: classify_severity
          type: ai.classify
          connector-id: "{{ consts.llm_connector }}"
          with:
            input: "${{ foreach.item }}"
            categories: ["critical", "high", "medium", "low"]
    
        - name: store_observability_result
          type: data.set
          with:
            type: "observability"
            item: "${{ foreach.item }}"
            severity: "${{ steps.classify_severity.output.category }}"
    
    - name: handle_security
      type: if
      condition: "steps.identify_type.output.category : 'security alert'"
      steps:
        - name: classify_safety
          type: ai.classify
          connector-id: "{{ consts.llm_connector }}"
          with:
            input: "${{ foreach.item }}"
            categories: ["malicious", "not malicious"]
            fallbackCategory: "unknown"
    
        - name: store_security_result
          type: data.set
          with:
            type: "security"
            item: "${{ foreach.item }}"
            status: "${{ steps.classify_safety.output.category }}"
    		

    For more than two branches, use a switch step, which reads more cleanly than chained if/else blocks.
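
    A switch version of the routing might look like the sketch below. The case layout shown here (the cases and match keys) is illustrative, not authoritative — check the switch step reference for the exact schema:

    ```yaml
    # Hypothetical three-way switch over the classification result.
    # The "cases"/"match" key names are assumptions; verify them
    # against the switch step reference before using this.
    - name: route_by_type
      type: switch
      condition: "${{ steps.identify_type.output.category }}"
      cases:
        - match: "observability alert"
          steps:
            - name: classify_severity
              type: ai.classify
              # ... as in the if version above
        - match: "security alert"
          steps:
            - name: classify_safety
              type: ai.classify
              # ... as in the if version above
        - match: "other"
          steps:
            - name: store_other
              type: data.set
              # ... handle the fallback category
    ```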

  6. Summarize the item

    Add an ai.summarize call to produce a human-readable summary. Run it after classification so later steps can include both the category and the summary:

    - name: summarize_item
      type: ai.summarize
      connector-id: "{{ consts.llm_connector }}"
      with:
        input: "${{ foreach.item }}"
    		

    steps.summarize_item.output.content is the summary string.
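
    For example, a follow-up data.set step can attach the summary to the routed item alongside its category, using only outputs produced earlier in the loop:

    ```yaml
    # Carry the routed item, its category, and its summary together
    # so downstream steps see the fully enriched record.
    - name: store_enriched_item
      type: data.set
      with:
        item: "${{ foreach.item }}"
        category: "${{ steps.identify_type.output.category }}"
        summary: "${{ steps.summarize_item.output.content }}"
    ```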

  • Trigger from real alerts. Replace the manual trigger and the two gather_* steps with an alert trigger and a foreach over event.alerts.
  • Use switch for many categories. When you have three or more branches, replace the if pair with a switch step for cleaner YAML.
  • Follow each branch with a real action. Replace the data.set calls with cases.createCase, http (Slack, PagerDuty), or composition calls that invoke a dedicated child workflow for each category.
  • Persist the enriched stream. Write the classified items back to Elasticsearch with elasticsearch.request for dashboarding.
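
As a sketch of that last idea, the step below indexes one enriched item per loop iteration. The method, path, and body parameter names are assumptions — verify them against the elasticsearch.request step reference, and the target index name is hypothetical:

```yaml
# Hypothetical persistence step, run inside the foreach loop.
# Parameter names (method, path, body) and the target index are
# assumptions; check the elasticsearch.request step reference.
- name: persist_item
  type: elasticsearch.request
  with:
    method: POST
    path: "/classified-items/_doc"   # hypothetical target index
    body:
      item: "${{ foreach.item }}"
      category: "${{ steps.identify_type.output.category }}"
      summary: "${{ steps.summarize_item.output.content }}"
```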