
Workflows cheat sheet

One-page reference. Bookmark this.

```yaml
name: my-workflow
description: ...
enabled: true
tags: [team, domain]
version: "1"

triggers: [ ... ]   # required
inputs: [ ... ]     # optional
consts: { ... }     # optional
outputs: [ ... ]    # required only for composed workflows
settings: { ... }   # optional
steps: [ ... ]      # required
```
```yaml
triggers:
  - type: manual

  - type: scheduled
    with:
      every: "5m"              # or: rrule with freq DAILY/WEEKLY/MONTHLY

  - type: alert                # requires rule Action attachment

  - type: workflows.failed     # tech preview
    on:
      condition: "event.workflow.name : 'critical-ingest-pipeline'"
```

Minimum schedule interval: 1 minute. Refer to Triggers.

```yaml
- name: my_step                 # unique within the workflow
  type: some.step_type
  connector-id: "my-connector"  # top-level, kebab-case (for connector and AI steps)
  if: "inputs.run_me : true"    # step-level KQL guard
  foreach: "{{ some.array }}"   # step-level iteration
  timeout: "30s"
  on-failure:
    retry:
      max-attempts: 3
      delay: "5s"
      strategy: exponential
    continue: true
  with:
    # step-specific parameters.
    # Note: workflow.execute / workflow.executeAsync use `workflow-id` INSIDE `with`.
```
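A filled-in example of that shape: search Elasticsearch, then log each hit with a `foreach` on the second step. The index pattern and query are placeholders, and the `console` step's `message` parameter is an assumption.

```yaml
steps:
  - name: find_critical
    type: elasticsearch.search
    with:
      index: "logs-*"            # placeholder index pattern
      query:
        match:
          severity: "critical"

  - name: log_each
    type: console
    foreach: "{{ steps.find_critical.output.hits.hits }}"
    with:
      # `message` parameter name assumed for the console step
      message: "Hit {{ foreach.item._id }}"
```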
| Want to… | Use |
| --- | --- |
| Query Elasticsearch | `elasticsearch.search`, `elasticsearch.esql.query` |
| Write to Elasticsearch | `elasticsearch.index`, `elasticsearch.bulk`, `elasticsearch.update` |
| Manage cases | `cases.createCase`, `cases.updateCase`, `cases.addComment`, `cases.addAlerts` |
| Manage alerts | `kibana.SetAlertsStatus`, `kibana.SetAlertTags` (PascalCase) |
| Call an API | `http` (with optional `connector-id`) |
| Call a service | `<connector>.<action>` (for example, `slack.postMessage`, `jira.createIssue`) |
| Branch | `if`, `switch` |
| Loop | `foreach`, `while`, `loop.break`, `loop.continue` |
| Fan out to independent executions | `workflow.executeAsync` (tech preview) |
| Pause | `wait`, `waitForInput` |
| Transform data | `data.filter`, `data.map`, `data.aggregate`, `data.parseJson`, `data.regexExtract` |
| Call AI | `ai.prompt`, `ai.classify`, `ai.summarize`, `ai.agent` |
| Call another workflow | `workflow.execute` (synchronous), `workflow.executeAsync` (asynchronous, tech preview) |
| Log | `console` |

Full list: Step type index.
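For example, composing workflows: `workflow.execute` runs another workflow synchronously, with `workflow-id` inside `with` (the one exception to the top-level-kebab-case pattern). The workflow ID and `inputs` key shape below are placeholders/assumptions.

```yaml
- name: enrich
  type: workflow.execute
  with:
    workflow-id: "enrich-host-data"   # placeholder workflow ID; lives INSIDE `with`
    inputs:                           # `inputs` key shape assumed
      host: "{{ event.alerts[0].host.name }}"
```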

"{{ expr }}"
"${{ expr }}"
		
  1. string interpolation
  2. raw-value (arrays, objects, booleans, numbers)

Top-10 patterns:

"{{ inputs.name }}"
"{{ steps.search.output.hits.total.value }}"
"{{ event.alerts[0].host.name }}"
"{{ foreach.item._id }}"
"{{ variables.threshold }}"
"{{ now | date: '%Y-%m-%d' }}"
"{{ steps.x.output.body | json_parse }}"
"{{ event | json }}"
"{{ event.alerts[0].host.name | default: 'unknown' }}"
"${{ event.alerts | where: 'severity', 'critical' }}"
		
  1. read an input
  2. read a step output
  3. read trigger payload
  4. inside foreach
  5. read a data.set variable
  6. formatted timestamp
  7. parse a JSON string
  8. serialize to JSON string
  9. fallback
  10. filter inline

Full reference: Liquid filters.
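The two syntaxes in context: a `data.filter` step takes its source array at the top level via `${{ ... }}` (raw value), while its condition is KQL. Whether the condition key sits under `with` as `condition:` is an assumption; the top-level `items:` placement follows gotcha 8 below.

```yaml
- name: only_critical
  type: data.filter
  items: "${{ steps.search.output.hits.hits }}"   # raw array, so ${{ ... }}
  with:
    # key name `condition` assumed; the expression is KQL, not Liquid
    condition: "item._source.severity : 'critical'"
```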

| Variable | Contains |
| --- | --- |
| `inputs.*` | Workflow inputs at runtime. |
| `consts.*` | Constants from workflow top. |
| `steps.<name>.output` | Output of a previous step. |
| `steps.<name>.error` | Error if that step failed (with `on-failure: continue`). |
| `event.*` | Trigger payload. |
| `execution.*` | Current execution metadata. |
| `workflow.*` | Workflow metadata. |
| `foreach.*` | Loop context: `item`, `index`, `total`, `items`. |
| `while.iteration` | Zero-based iteration counter inside a `while` loop. |
| `variables.*` | Variables set by `data.set`. |
| `now`, `kibanaUrl` | Standard helpers. |

Full reference: Context variables.
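Setting and reading a variable, as a sketch: the assumption here is that keys under `with` in a `data.set` step become entries in `variables.*`.

```yaml
- name: set_threshold
  type: data.set
  with:
    threshold: 100            # assumed: keys under `with` become variables

- name: report
  type: console
  with:
    message: "Threshold is {{ variables.threshold }}"
```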

```yaml
on-failure:
  retry:
    max-attempts: 3
    delay: "5s"
    strategy: exponential    # or "fixed"
    jitter: true
    condition: "steps.self.error.status : 429"   # KQL
  continue: true             # log and move on
  fallback: [ ... ]          # graceful degradation
  # abort is the default when no on-failure is set
```

Precedence: per-step on-failure > workflow-level settings.on-failure > abort.

For cross-workflow error handling (page on-call when another workflow fails), use the workflows.failed event-driven trigger.

Full reference: Pass data and handle errors.
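Graceful degradation, sketched: retry a flaky HTTP call, then fall back to a log step if the retries are exhausted. The URL and step names are placeholders, and the `console` step's `message` parameter is an assumption.

```yaml
- name: fetch_enrichment
  type: http
  with:
    url: "https://example.com/api/enrich"   # placeholder URL
    method: GET
  on-failure:
    retry:
      max-attempts: 3
      delay: "5s"
      strategy: exponential
    fallback:
      - name: note_degraded
        type: console
        with:
          message: "Enrichment unavailable, continuing without it."
```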

  1. Alert trigger needs rule Action attachment. type: alert alone isn't enough. Attach the workflow to the rule's Actions.
  2. while defaults to max-iterations: 2000 with on-limit: continue. When the loop hits the cap, the step succeeds quietly. Set on-limit: fail if you want the workflow to fail at the cap.
  3. switch.cases is an array, not a map. Each case is a { case: <value>, steps: [...] } object. Refer to switch.
  4. cases.* parameters use snake_case: case_id, not caseId.
  5. kibana.SetAlertsStatus / kibana.SetAlertTags are PascalCase. Not kibana.set_alerts_status.
  6. AI step identifiers are top-level kebab-case: connector-id, agent-id, inference-id.
  7. Composition's workflow-id is kebab-case but lives inside with. It's the one exception to the top-level-kebab-case pattern.
  8. data.* steps (except data.set) put source data at the top level: items:, arrays:, or source:. The transformation configuration goes in with.
  9. Use ${{ ... }} for arrays and objects, {{ ... }} for strings.
  10. to_json doesn't exist. Use json to serialize or json_parse to parse.
  11. data.filter and if conditions are KQL, not Liquid. Use item.severity : 'critical', not item.severity == 'critical'.