Classify and route mixed items with AI
This guide walks through building a workflow that takes a stream of mixed items (alerts, tickets, log entries) and routes each one down a different branch based on an AI classification. The workflow pairs the `ai.classify` step with `foreach` and `if` or `switch`, so each item gets exactly the handling it needs.
The workflow is adapted from `ai-steps-demo.yaml` in the `elastic/workflows` library.
If you're new to workflows, complete Build your first workflow first.
- Permissions. **All** on **Analytics > Workflows**. Refer to Kibana privileges.
- AI connector. A configured LLM connector (Azure OpenAI, OpenAI, Anthropic, or Bedrock). Refer to Connectors. Note the connector ID.
- A set of items to classify. For this walkthrough, the workflow generates sample items with `ai.prompt`. In production, you'd read items from an alert trigger (`event.alerts`), an Elasticsearch search, or an upstream workflow.
The workflow runs manually during development and can be switched to an alert trigger once you're happy with the routing:
- Gather items. For the demo, two `ai.prompt` steps produce a mix of sample observability and security alerts. In production, replace this with your real data source.
- Iterate with `foreach`. Each item is processed independently.
- Classify with `ai.classify`. The step returns the category (for example, `observability alert` or `security alert`) and an optional rationale.
- Route with `if` (or `switch`). Each branch runs the right follow-up: severity classification for observability, malicious-or-not classification for security.
- Summarize with `ai.summarize`. The summary is attached to the routed item.
Declare the AI connector as a constant

Hold the connector ID in a constant so you can swap environments without touching step bodies:

```yaml
consts:
  llm_connector: "your-connector-id"
triggers:
  - type: manual
```
Gather items to classify

For development, generate a mix of sample items with two `ai.prompt` calls. Each call uses a JSON schema so the output is strongly typed and iterable:

```yaml
steps:
  - name: gather_observability_items
    type: ai.prompt
    connector-id: "{{ consts.llm_connector }}"
    with:
      prompt: "Generate two sample observability alerts."
      schema:
        items:
          type: object
          required: [id, severity, message]
          properties:
            id: { type: string }
            severity: { type: string, enum: [critical, high, medium, low] }
            message: { type: string }
  - name: gather_security_items
    type: ai.prompt
    connector-id: "{{ consts.llm_connector }}"
    with:
      prompt: "Generate three sample security alerts."
      schema:
        items:
          type: object
          required: [id, severity, category]
          properties:
            id: { type: string }
            severity: { type: string, enum: [critical, high, medium, low] }
            category: { type: string }
```

In a production workflow, replace these two steps with a real data source. For example, read `event.alerts` from an alert trigger or run an `elasticsearch.search` step.
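If you already have documents to classify, the gather steps can be swapped for a search. A minimal sketch, assuming an `elasticsearch.search` step that accepts an index and a query body (the exact parameter names below are assumptions; verify them against the Elasticsearch step reference):

```yaml
# Hypothetical production replacement for the two ai.prompt gather steps.
# The index and query parameter names are assumptions, not verified.
steps:
  - name: gather_items
    type: elasticsearch.search
    with:
      index: "alerts-*"
      query:
        bool:
          filter:
            - range:
                "@timestamp": { gte: "now-15m" }
```

Downstream, the `foreach` would then iterate the search hits instead of the generated sample arrays.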
Loop through the combined stream

Concatenate the two sample arrays and loop over the combined stream. Use `${{ ... }}` when passing arrays so they aren't stringified:

```yaml
- name: route_each_item
  type: foreach
  foreach: "${{ steps.gather_observability_items.output.content | concat: steps.gather_security_items.output.content }}"
  steps:
    # Classification and branching steps go here. Use foreach.item.
```
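Inside the loop body, each element is available as `foreach.item`. Assuming dotted field access works the same way as the other template expressions in this guide, you could inspect individual fields from the gather schema like this (a development-only sketch using the `data.set` step from later in this guide):

```yaml
# Sketch: surface fields of the current item during development.
# Dotted access into foreach.item is assumed to follow the same
# templating rules as the other ${{ ... }} expressions here.
- name: inspect_item
  type: data.set
  with:
    current_id: "${{ foreach.item.id }}"
    current_severity: "${{ foreach.item.severity }}"
```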
Classify each item

Call `ai.classify` with the categories you want to route on. Set `includeRationale: true` during development so you can see why the model picked a category. Turn it off in production for lower token cost:

```yaml
- name: identify_type
  type: ai.classify
  connector-id: "{{ consts.llm_connector }}"
  with:
    input: "${{ foreach.item }}"
    includeRationale: true
    categories:
      - "security alert"
      - "observability alert"
    fallbackCategory: "other"
```

The category ends up at `steps.identify_type.output.category`.
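With `includeRationale: true`, the model's reasoning comes back alongside the category. As a development aid, you could stash both with a `data.set` step; note that the `output.rationale` path below is an assumption based on the option name, so confirm it in the AI steps reference:

```yaml
# Development-only sketch: capture why the model chose the category.
# steps.identify_type.output.rationale is an assumed path, inferred
# from the includeRationale option; check the ai.classify reference.
- name: record_classification
  type: data.set
  with:
    category: "${{ steps.identify_type.output.category }}"
    rationale: "${{ steps.identify_type.output.rationale }}"
```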
Branch on the classification

Use `if` steps for two branches, or `switch` for three or more. The following pattern uses `if` for clarity:

```yaml
- name: handle_observability
  type: if
  condition: "steps.identify_type.output.category : 'observability alert'"
  steps:
    - name: classify_severity
      type: ai.classify
      connector-id: "{{ consts.llm_connector }}"
      with:
        input: "${{ foreach.item }}"
        categories: ["critical", "high", "medium", "low"]
    - name: store_observability_result
      type: data.set
      with:
        type: "observability"
        item: "${{ foreach.item }}"
        severity: "${{ steps.classify_severity.output.category }}"
- name: handle_security
  type: if
  condition: "steps.identify_type.output.category : 'security alert'"
  steps:
    - name: classify_safety
      type: ai.classify
      connector-id: "{{ consts.llm_connector }}"
      with:
        input: "${{ foreach.item }}"
        categories: ["malicious", "not malicious"]
        fallbackCategory: "unknown"
    - name: store_security_result
      type: data.set
      with:
        type: "security"
        item: "${{ foreach.item }}"
        status: "${{ steps.classify_safety.output.category }}"
```

For more than two branches, use a `switch` step, which reads more cleanly than chained `if`/`else`.
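For three or more categories, the same routing could be collapsed into a single `switch` step. This guide doesn't show the `switch` schema, so the field names below (the `switch` list, `condition`, `default`) are illustrative assumptions only; take the exact shape from the flow control steps reference:

```yaml
# Hypothetical switch shape -- field names are assumptions, not verified
# against the flow control steps reference.
- name: route_by_category
  type: switch
  switch:
    - condition: "steps.identify_type.output.category : 'observability alert'"
      steps:
        - name: classify_severity
          type: ai.classify
          connector-id: "{{ consts.llm_connector }}"
          with:
            input: "${{ foreach.item }}"
            categories: ["critical", "high", "medium", "low"]
    - condition: "steps.identify_type.output.category : 'security alert'"
      steps:
        # ... security branch ...
    - default: true
      steps:
        # ... fallback branch for the "other" category ...
```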
Summarize the item

Add an `ai.summarize` call to produce a human-readable summary. Run it after classification so later steps can include both the category and the summary:

```yaml
- name: summarize_item
  type: ai.summarize
  connector-id: "{{ consts.llm_connector }}"
  with:
    input: "${{ foreach.item }}"
```

`steps.summarize_item.output.content` is the summary string.
Full workflow YAML
```yaml
name: ai--classify-and-route
description: Classify a stream of mixed items and route each one down the right branch.
enabled: true
tags: ["ai", "classify", "route"]
consts:
  llm_connector: "your-connector-id"
triggers:
  - type: manual
steps:
  - name: gather_observability_items
    type: ai.prompt
    connector-id: "{{ consts.llm_connector }}"
    with:
      prompt: "Generate two sample observability alerts."
      schema:
        items:
          type: object
          required: [id, severity, message]
          properties:
            id: { type: string }
            severity: { type: string, enum: [critical, high, medium, low] }
            message: { type: string }
  - name: gather_security_items
    type: ai.prompt
    connector-id: "{{ consts.llm_connector }}"
    with:
      prompt: "Generate three sample security alerts."
      schema:
        items:
          type: object
          required: [id, severity, category]
          properties:
            id: { type: string }
            severity: { type: string, enum: [critical, high, medium, low] }
            category: { type: string }
  - name: route_each_item
    type: foreach
    foreach: "${{ steps.gather_observability_items.output.content | concat: steps.gather_security_items.output.content }}"
    steps:
      - name: identify_type
        type: ai.classify
        connector-id: "{{ consts.llm_connector }}"
        with:
          input: "${{ foreach.item }}"
          includeRationale: true
          categories:
            - "security alert"
            - "observability alert"
          fallbackCategory: "other"
      - name: summarize_item
        type: ai.summarize
        connector-id: "{{ consts.llm_connector }}"
        with:
          input: "${{ foreach.item }}"
      - name: handle_observability
        type: if
        condition: "steps.identify_type.output.category : 'observability alert'"
        steps:
          - name: classify_severity
            type: ai.classify
            connector-id: "{{ consts.llm_connector }}"
            with:
              input: "${{ foreach.item }}"
              categories: ["critical", "high", "medium", "low"]
          - name: store_observability_result
            type: data.set
            with:
              type: "observability"
              item: "${{ foreach.item }}"
              severity: "${{ steps.classify_severity.output.category }}"
              summary: "${{ steps.summarize_item.output.content }}"
      - name: handle_security
        type: if
        condition: "steps.identify_type.output.category : 'security alert'"
        steps:
          - name: classify_safety
            type: ai.classify
            connector-id: "{{ consts.llm_connector }}"
            with:
              input: "${{ foreach.item }}"
              categories: ["malicious", "not malicious"]
              fallbackCategory: "unknown"
          - name: store_security_result
            type: data.set
            with:
              type: "security"
              item: "${{ foreach.item }}"
              status: "${{ steps.classify_safety.output.category }}"
              summary: "${{ steps.summarize_item.output.content }}"
```
Next steps

- Trigger from real alerts. Replace the `manual` trigger and the two `gather_*` steps with an alert trigger and a `foreach` over `event.alerts`.
- Use `switch` for many categories. When you have three or more branches, replace the `if` pair with a `switch` step for cleaner YAML.
- Follow each branch with a real action. Replace the `data.set` calls with `cases.createCase`, `http` (Slack, PagerDuty), or composition calls that invoke a dedicated child workflow for each category.
- Persist the enriched stream. Write the classified items back to Elasticsearch with `elasticsearch.request` for dashboarding.
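As one concrete direction, the persistence step could index each enriched item so the classified stream is queryable from a dashboard. A sketch using `elasticsearch.request`; the `method`, `path`, and `body` parameter names are assumptions, so check the Elasticsearch step reference for the real schema:

```yaml
# Inside the foreach body, after classification and summarization.
# Parameter names (method, path, body) are assumptions for illustration;
# the target index name classified-items is also hypothetical.
- name: persist_classified_item
  type: elasticsearch.request
  with:
    method: POST
    path: "/classified-items/_doc"
    body:
      item: "${{ foreach.item }}"
      category: "${{ steps.identify_type.output.category }}"
      summary: "${{ steps.summarize_item.output.content }}"
```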
Related resources

- AI-augmented workflows: The outcome this workflow supports.
- AI steps reference: Parameters for `ai.prompt`, `ai.classify`, `ai.summarize`, and `ai.agent`.
- Flow control steps: `foreach`, `if`, `switch`, and others.
- `elastic/workflows` observability folder: More observability workflow examples.