Omer Kushmaro

Manage your Elastic security stack as code with the Elastic Stack Terraform provider

From detection rules to AI connectors - the latest Terraform provider releases bring security, observability, and ML capabilities to your infrastructure-as-code workflows.

The Elastic Stack Terraform provider has reached a significant milestone. Starting with release v0.13.1, you can manage your Elastic security posture - detection rules, exception lists, and prebuilt rules - alongside ML anomaly detection jobs, synthetics monitors, and AI connectors, all as code.

This brings your detection logic and ML jobs into the same versioned, peer-reviewed workflow as your core clusters. It ensures your security posture and AI connectors are no longer manual outliers in an otherwise automated environment.

The challenge: Security and observability configuration at scale

As Elastic deployments grow, so does the complexity of managing them. Security teams maintain hundreds of detection rules. SREs configure monitoring across dozens of clusters. ML engineers tune anomaly detection jobs across multiple environments. All of these configurations must be consistent, auditable, and reproducible.

Without infrastructure as code, teams face two problems:

  1. Configuration drift. Rules, policies, and monitors are created manually through the Kibana UI. Over time, production and staging diverge. No one is sure which version of a detection rule is running where.

  2. Buried audit trail. When a detection rule changes or an exception is added, there's no pull request to review, no commit history to trace, and no rollback path if something breaks. Reconstructing who changed what, and when, means digging through audit logs - if they exist at all.

The Elastic Stack Terraform provider solves this by bringing these configurations into the same version-controlled, peer-reviewed workflow that teams already use for infrastructure.

Security artifacts as code: Detection rules, exceptions, and prebuilt rules

You can now manage the full lifecycle of Elastic Security detection rules through Terraform.

Detection rules

The elasticstack_kibana_security_detection_rule resource lets you define, version, and deploy detection rules in the HashiCorp Configuration Language (HCL) format:

resource "elasticstack_kibana_security_detection_rule" "suspicious_admin_logon" {
  name        = "Suspicious Admin Logon Activity"
  type        = "query"
  query       = "event.action:logon AND user.name:admin"
  language    = "kuery"
  enabled     = true
  description = "Detects suspicious admin logon activities"
  severity    = "high"
  risk_score  = 75
  from        = "now-6m"
  to          = "now"
  interval    = "5m"
  tags        = ["security", "authentication", "admin"]
}

This means your detection rules live in Git, undergo code review, and are deployed consistently across environments. No more clicking through the Kibana UI to replicate rules from staging to production.

See the detection rule resource docs for the full attribute reference.

Exception lists and items

The security-as-code story extends to a full suite of exception management resources:

  • elasticstack_kibana_security_exception_list - Create and manage exception lists
  • elasticstack_kibana_security_exception_item - Define individual exception items within a list
  • elasticstack_kibana_security_list and elasticstack_kibana_security_list_item - Manage value lists for IP allowlists, file hashes, and other indicators
  • elasticstack_kibana_security_list_data_streams - Associate lists with specific data streams

Here's an example that ties them together - an exception list with items that suppress known false positives for a detection rule:

resource "elasticstack_kibana_security_exception_list" "vuln_scanner_exceptions" {
  list_id        = "vuln-scanner-exceptions"
  name           = "Vulnerability Scanner Exceptions"
  description    = "Suppress alerts from authorized vulnerability scanners"
  type           = "detection"
  namespace_type = "single"
  tags           = ["security", "vulnerability-scanning"]
}

resource "elasticstack_kibana_security_exception_item" "nessus_scanner" {
  list_id        = elasticstack_kibana_security_exception_list.vuln_scanner_exceptions.list_id
  item_id        = "nessus-scanner"
  name           = "Nessus Scanner - Authorized"
  description    = "Suppress alerts from authorized Nessus scanner hosts"
  type           = "simple"
  namespace_type = "single"

  entries = [
    {
      type     = "match"
      field    = "source.ip"
      operator = "included"
      value    = "10.0.50.10"
    },
    {
      type     = "match_any"
      field    = "process.name"
      operator = "included"
      values   = ["nessus", "nessusd"]
    }
  ]

  tags = ["nessus", "authorized-scanner"]
}

resource "elasticstack_kibana_security_exception_item" "qualys_scanner" {
  list_id        = elasticstack_kibana_security_exception_list.vuln_scanner_exceptions.list_id
  item_id        = "qualys-scanner"
  name           = "Qualys Scanner - Authorized"
  description    = "Suppress alerts from authorized Qualys scanner subnet"
  type           = "simple"
  namespace_type = "single"

  entries = [
    {
      type     = "match"
      field    = "source.ip"
      operator = "included"
      value    = "10.0.51.0/24"
    }
  ]

  tags = ["qualys", "authorized-scanner"]
}

The exception list and its items are linked by list_id, so Terraform manages the dependency graph automatically. Adding a new authorized scanner is a one-line PR - no clicking through the Kibana UI, no risk of forgetting which environment got the update.
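To actually suppress those false positives, the list must be attached to a rule. The shape of the attachment attribute below (`exceptions_list`) is an assumption modeled on the Kibana detection rule API - check the resource documentation for the exact schema in your provider version:

```hcl
resource "elasticstack_kibana_security_detection_rule" "port_scan" {
  name        = "External Port Scan Detected"
  type        = "query"
  query       = "event.category:network AND event.action:network_flow"
  language    = "kuery"
  enabled     = true
  description = "Detects potential port scanning activity"
  severity    = "medium"
  risk_score  = 47
  from        = "now-6m"
  to          = "now"
  interval    = "5m"

  # Assumed attribute: mirrors the exceptions_list field of the
  # Kibana detection rule API. Verify against the resource docs
  # for your provider version.
  exceptions_list = [{
    id             = elasticstack_kibana_security_exception_list.vuln_scanner_exceptions.id
    list_id        = elasticstack_kibana_security_exception_list.vuln_scanner_exceptions.list_id
    type           = "detection"
    namespace_type = "single"
  }]
}
```

Because the rule references the list's attributes, Terraform creates the list before the rule and updates the rule if the list changes.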

Prebuilt security rules

The elasticstack_kibana_prebuilt_rule resource lets you manage Elastic's prebuilt detection rules via Terraform. This is particularly valuable for organizations that need to track which prebuilt rules are enabled, customize their parameters, and ensure consistent deployment across environments.
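A minimal sketch might look like the following. The attribute names (`rule_id`, `enabled`) are assumptions based on how prebuilt rules are addressed in the Kibana API - consult the resource documentation for the actual schema:

```hcl
# Enable a prebuilt rule by its rule_id (placeholder value shown).
# Attribute names are assumptions; verify against the
# elasticstack_kibana_prebuilt_rule resource docs.
resource "elasticstack_kibana_prebuilt_rule" "example" {
  rule_id = "example-prebuilt-rule-id"
  enabled = true
}
```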

ML anomaly detection as code

Machine learning anomaly detection is one of Elasticsearch's most powerful capabilities - but managing ML jobs across environments has traditionally been a manual process. You create a job in the Kibana UI, tune the detectors, configure the datafeed, and hope someone documents the settings so they can be replicated in the next environment.

The elasticstack_elasticsearch_ml_anomaly_detection_job resource changes that. You can now define the full configuration of an anomaly detection job in HCL - detectors, bucket spans, influencers, data feeds, and analysis limits - and deploy it consistently across dev, staging, and production.

resource "elasticstack_elasticsearch_ml_anomaly_detection_job" "cpu_anomalies" {
  job_id      = "high-cpu-by-host"
  description = "Detect unusual CPU usage patterns"

  analysis_config = {
    bucket_span = "15m"
    detectors   = [{
      function   = "high_mean"
      field_name = "system.cpu.user_pct"
    }]
    influencers = ["host.name"]
  }

  data_description = {
    time_field = "@timestamp"
  }
}

This matters for teams that rely on ML to catch infrastructure anomalies, unusual user behavior, or security threats. Instead of manually recreating jobs when spinning up new clusters or recovering from failures, the entire ML configuration lives in version control - reviewable, repeatable, and recoverable.

Cross-cluster automation with API keys

For organizations running multiple Elasticsearch clusters, the provider now supports cluster API keys for cross-cluster search (CCS) and cross-cluster replication (CCR). You can create API keys specifically designed for secure cross-cluster communication, enabling end-to-end automation of multi-cluster architectures.

This means you can provision two clusters, configure CCS/CCR between them, and set up the necessary security credentials - all in a single Terraform configuration.

resource "elasticstack_elasticsearch_security_api_key" "ccs_key" {
  name = "cross-cluster-search-key"
  type = "cross_cluster"

  access = {
    search = [{
      names = ["logs-*", "metrics-*"]
    }]
    replication = [{
      names = ["archive-*"]
    }]
  }

  expiration = "90d"

  metadata = jsonencode({
    environment = "production"
    purpose     = "ccs-ccr-between-prod-clusters"
    team        = "platform"
  })
}

When the type is set to cross_cluster, the API key is scoped to CCS/CCR operations. You define which index patterns are accessible for search and replication, set an expiration policy, and tag the key with metadata - all reviewable in a pull request.

Learn more about API key resources in the documentation.

AI connectors as code

The provider now supports .bedrock and .gen-ai connectors, bringing AI infrastructure into your Terraform workflows. As teams increasingly integrate large language models into their Elastic workflows - for AI assistants, attack discovery, and automated investigations - managing these connector configurations as code becomes essential.

resource "elasticstack_kibana_action_connector" "bedrock" {
  name              = "aws-bedrock"
  connector_type_id = ".bedrock"
  config = jsonencode({
    apiUrl       = "https://bedrock-runtime.us-east-1.amazonaws.com"
    defaultModel = "anthropic.claude-v2"
  })
  secrets = jsonencode({
    accessKey = var.aws_access_key
    secret    = var.aws_secret_key
  })
}

resource "elasticstack_kibana_action_connector" "openai" {
  name              = "openai"
  connector_type_id = ".gen-ai"
  config = jsonencode({
    apiProvider  = "OpenAI"
    apiUrl       = "https://api.openai.com/v1/chat/completions"
    defaultModel = "gpt-4"
  })
  secrets = jsonencode({
    apiKey = var.openai_api_key
  })
}

With these connectors defined in Terraform, you can version your AI integration configuration alongside the rest of your Elastic infrastructure - and swap models or providers through a simple PR.

Observability enhancements

Synthetics monitors

The elasticstack_kibana_synthetics_monitor resource now includes a labels field, enabling better organization and filtering of synthetic checks. Labels let you tag monitors by team, environment, or service, making it easier to manage synthetic monitoring at scale.
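For instance (a sketch - monitor attributes other than `labels` are simplified and may differ from your setup, and `labels` is assumed here to be a map of key/value tags):

```hcl
resource "elasticstack_kibana_synthetics_monitor" "checkout_health" {
  name      = "checkout-service-health"
  space_id  = "default"
  schedule  = 5            # run every 5 minutes
  locations = ["us_east"]

  http = {
    url = "https://checkout.example.com/health" # placeholder URL
  }

  # Assumed shape: key/value tags for filtering monitors in Kibana
  labels = {
    team        = "payments"
    environment = "production"
  }
}
```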

Additional platform improvements

Recent releases also included several resources and attributes that round out the provider's coverage:

  • elasticstack_elasticsearch_alias - Manage Elasticsearch aliases as a dedicated resource
  • elasticstack_kibana_default_data_view - Set the default data view for a Kibana space
  • solution attribute on elasticstack_kibana_space - Configure the solution type for Kibana spaces (available from 8.16)
  • Fleet agent policy enhancements - host_name_format for configuring hostname vs. FQDN, and required_versions for version pinning

Getting started

If you're already using the Elastic Stack Terraform provider, upgrade to the latest provider version to get all of these capabilities:

terraform {
  required_providers {
    elasticstack = {
      source  = "elastic/elasticstack"
      version = "~> 0.14"
    }
  }
}

If you're new to managing your Elastic Stack with Terraform, start with the provider documentation on the Terraform registry.
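A typical provider block points the Elasticsearch and Kibana clients at your deployment (the URLs below are placeholders, and API key authentication is one of several supported options - see the provider docs for alternatives):

```hcl
provider "elasticstack" {
  elasticsearch {
    endpoints = ["https://example.es.us-east-1.aws.found.io:9243"]
    api_key   = var.elasticsearch_api_key # placeholder variable
  }
  kibana {
    endpoints = ["https://example.kb.us-east-1.aws.found.io:9243"]
  }
}
```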

To start using Elastic Cloud today, log in to the Elastic Cloud console or sign up for a free trial.
For the full set of changes, check out the release notes on GitHub.
