ECK 3.4 makes the Elastic Stack on Kubernetes simpler to operate. Zone-aware HA, safe rolling restarts, and Kibana↔Elasticsearch mTLS each become a one-line answer in your manifest.
If you operate Elastic Cloud on Kubernetes (ECK), this release is about reducing the friction in the things you do every day.
Easier to operate, easier to understand
ECK 3.4 is a release focused on reducing what you have to think about when you run the Elastic Stack on Kubernetes. Each headline change picks a multi-step task and turns it into a single declarative answer:
- Simplified zone awareness. Telling ECK that a cluster should be spread across availability zones is now a single field on the NodeSet. The operator handles the topology, the scheduling, and the Elasticsearch-side awareness configuration on your behalf. Your manifests reflect what you mean, not how it's wired.
- Restart a cluster the same way you do everything else. Triggering a rolling restart is now an annotation on the Elasticsearch resource. It's declarative, fits GitOps, and leaves an audit trail. No force-edit on an unrelated field to get a rollout.
- mTLS is automatically configured by the operator. Wiring mutual TLS between Kibana and Elasticsearch by hand requires managing CAs, per-component client certificates, mounts, rotation, and configurations on both ends. ECK 3.4 takes care of all of that: flip a flag on Elasticsearch, point Kibana at it, and the operator manages the rest.
This release aims to make day-to-day ECK operations boring, in the best sense: fewer fields to remember, fewer side trips to keep in sync, and simpler-to-understand manifests.
Simplified zone awareness
Make an Elasticsearch cluster highly available across availability zones by setting one field on the NodeSet. ECK 3.4 handles the topology spread, the pod scheduling, and the Elasticsearch-side awareness configuration for you.
Before, you had to wire all of this by hand across four separate objects: an annotation on the Elasticsearch resource for downward node labels, awareness attributes in the NodeSet config, a fieldRef env var in the pod template to surface the zone, and a matching topologySpreadConstraints block plus a nodeAffinity rule pinning the cluster to specific zones. Roughly forty lines of YAML, easy to misconfigure.
In ECK 3.4, the same zone-aware cluster is four lines:
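A minimal sketch of what that looks like in a full manifest (cluster name, version, and node count are placeholders; the zone-awareness part is just the zoneAwareness field):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.0
  nodeSets:
    - name: default
      count: 3
      zoneAwareness: {}   # ECK derives the spread constraints, the zone env var, and node.attr.zone
```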
To pin to a specific set of zones, name them, and ECK adds the matching required node affinity rules:
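For example, replacing the empty zoneAwareness block above (the zones list follows the zones: [...] shape noted in the FAQ below; zone names are placeholders for your provider's zones):

```yaml
      zoneAwareness:
        zones:
          - us-east-1a
          - us-east-1b
          - us-east-1c
```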
If you do need to customize maxSkew or whenUnsatisfiable, a matching topology spread constraint with the same topologyKey that you provide in podTemplate still wins. Your override stays an override.
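A hedged sketch of such an override, still inside the same NodeSet — the topologyKey and the cluster-name label selector shown here are assumptions about what ECK generates by default, not something spelled out in this post:

```yaml
      podTemplate:
        spec:
          topologySpreadConstraints:
            - topologyKey: topology.kubernetes.io/zone   # same key the operator would use
              maxSkew: 2                                  # your value takes precedence over the default of 1
              whenUnsatisfiable: ScheduleAnyway
              labelSelector:
                matchLabels:
                  elasticsearch.k8s.elastic.co/cluster-name: quickstart
```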
One note for upgrades: enabling zoneAwareness on an existing NodeSet changes the StatefulSet pod template (new topology spread constraints, ZONE env var, node affinity, node.attr.zone), which triggers a one-time rolling restart of the affected NodeSet. Plan accordingly.
To learn more about simplified zone awareness, read this page in the Elastic documentation.
Declarative rolling restarts
Restarting an Elasticsearch cluster without changing its spec is now a first-class workflow in 3.4. Two new annotations on the Elasticsearch resource do the work:
- eck.k8s.elastic.co/restart-trigger: set or change this value (a timestamp is the conventional choice) to start a rolling restart. Changing the value triggers another restart later; removing the annotation does not.
- eck.k8s.elastic.co/restart-allocation-delay: an optional duration string (e.g. "20m") passed to the Elasticsearch node shutdown API as the allocation delay during the restart, so you can hold off on rebalancing while a pod recycles.
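In practice that's a single kubectl command against the cluster (the cluster name and the delay value below are placeholders):

```sh
# Kick off a rolling restart of the Elasticsearch resource named "quickstart"
kubectl annotate elasticsearch quickstart \
  eck.k8s.elastic.co/restart-trigger="$(date -u +%Y-%m-%dT%H:%M:%SZ)" --overwrite

# Optionally ask Elasticsearch to delay shard reallocation while each pod recycles
kubectl annotate elasticsearch quickstart \
  eck.k8s.elastic.co/restart-allocation-delay="20m" --overwrite
```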
Under the hood, ECK propagates the trigger value to pod annotations, which changes the StatefulSet template hash and feeds every pod through the existing rolling-upgrade path (node shutdown API, predicates, one-pod-at-a-time deletion). There's no new restart mechanism to learn, and the status messages and observability you already have on rolling upgrades carry over.
For GitOps users, this means a Flux/ArgoCD pipeline can request a restart by patching one annotation: no spec drift, no diff churn, no force-edit on an unrelated field.
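In a Git-managed manifest, the change is just the annotation value (the timestamp below is illustrative; the rest of the spec stays untouched):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
  annotations:
    # Bump this value to request another rolling restart; removing it does nothing.
    eck.k8s.elastic.co/restart-trigger: "2025-11-18T09:00:00Z"
```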
Managed mTLS for Kibana ↔ Elasticsearch
Mutual TLS orchestration between Kibana and Elasticsearch arrives with this release. The Elasticsearch CRD accepts a single new field, spec.http.tls.client.authentication: true, that tells the cluster to require client certificates on its HTTPS interface. ECK does the rest: it builds a trust bundle from any secret labeled eck.k8s.elastic.co/client-certificate: true, mounts it into the Elasticsearch pods, sets xpack.security.http.ssl.client_authentication: required, and issues an operator-side client certificate so it can keep talking to the cluster throughout the rollout.
This makes enabling and configuring mTLS for the stack (Elasticsearch and Kibana only, in this release) a much simpler task.
Enabling mTLS on Elasticsearch:
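(A minimal sketch; the cluster name, version, and node count are placeholders.)

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.0
  http:
    tls:
      client:
        authentication: true   # require client certificates on the HTTPS interface
  nodeSets:
    - name: default
      count: 3
```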
On the client side, Kibana's association controller now detects the client-authentication-required annotation on the referenced Elasticsearch and automatically generates a client certificate for Kibana — no extra config needed. If you want to bring your own cert (cert-manager, an internal PKI), point at the secret you've already provisioned:
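As a sketch, here is what that provisioned secret can look like. The confirmed piece from this post is the eck.k8s.elastic.co/client-certificate label that gets the certificate into the Elasticsearch trust bundle; the exact Kibana-side field that references the secret isn't spelled out here, so check the ECK reference for it:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kibana-client-cert        # placeholder name, e.g. a secret issued by cert-manager
  labels:
    # The label ECK looks for when building the Elasticsearch-side trust bundle.
    eck.k8s.elastic.co/client-certificate: "true"
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded client certificate>
  tls.key: <base64-encoded private key>
```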
ECK rotates the certificate, mounts the secret into the Kibana pod, and wires elasticsearch.ssl.certificate and elasticsearch.ssl.key. Cleanup of mTLS resources is deferred until all pods have rolled, so connectivity holds throughout the transition.
Kibana is the first Stack component to get this first-class treatment in 3.4. Support for APM Server, Beats, Fleet Server, Elastic Agent, Logstash, Maps, and Enterprise Search ships in the near future. In the meantime, a new recipe walks through manual mTLS for those components using cert-manager.
Other notable improvements
This release includes other improvements worth highlighting. Here is a list with their related pull requests.
- Native Go FIPS 140-3 in the FIPS-enabled operator (separate image). The FIPS-flavored ECK image (docker.elastic.co/eck/eck-operator-fips:3.4.0, plus a UBI variant eck-operator-ubi-fips:3.4.0) now ships with native Go FIPS 140-3 support, pinned at the certified GOFIPS140=v1.0.0 module and enforced at runtime. The standard eck-operator image is unchanged. For Elasticsearch 9.4.0 or later, the operator also generates and mounts a FIPS-compliant keystore password automatically when xpack.security.fips_mode.enabled: true is set (#9263, #9287); see the sketch after this list.
- Reliability fixes worth calling out:
  - Stale CAs in the certificate chain are now detected and trigger reissuance (#9197).
  - Remote-CA secret generation failures are non-blocking (#9271).
  - The NetworkPolicy namespace selector label is fixed for soft multi-tenancy setups (#9153).
  - The Elasticsearch controller skips its default PVC if a volume of the same name already exists (#9199).
  - The DaemonSet reconciler handles stale cache the same way the Deployment reconciler does (#9256).
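The FIPS keystore behavior called out above is driven by the standard Elasticsearch setting in the cluster config; a minimal sketch (the cluster name is a placeholder, and the keystore handling applies to 9.4.0 or later):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: fips-cluster
spec:
  version: 9.4.0
  nodeSets:
    - name: default
      count: 3
      config:
        # With the FIPS-enabled operator image, ECK generates and mounts a
        # FIPS-compliant keystore password when this is set.
        xpack.security.fips_mode.enabled: true
```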
Getting started
If you're already running ECK, upgrade to 3.4.0 with Helm:
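The release name and namespace below follow the ECK docs' defaults; adjust them to match your installation:

```sh
helm repo add elastic https://helm.elastic.co && helm repo update
helm upgrade --install elastic-operator elastic/eck-operator \
  -n elastic-system --create-namespace --version 3.4.0
```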
Or apply the latest operator manifest directly:
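This follows the usual ECK pattern of updating the CRDs first and then the operator manifest; double-check the exact URLs against the 3.4.0 release notes:

```sh
kubectl replace -f https://download.elastic.co/downloads/eck/3.4.0/crds.yaml   # use kubectl create on a first install
kubectl apply -f https://download.elastic.co/downloads/eck/3.4.0/operator.yaml
```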
If you're new to ECK, start with the quickstart guide to get an Elasticsearch cluster running on Kubernetes in minutes.
For the full list of changes, see the ECK 3.4.0 release notes on GitHub.
To start using Elastic Cloud today, log in to the Elastic Cloud console or sign up for a free trial.
Frequently asked questions
How do I make an Elasticsearch cluster zone-aware in ECK without writing topology spread constraints?
Set spec.nodeSets[].zoneAwareness: {} on the Elasticsearch resource. ECK derives the topology, attaches node.attr.zone, sets maxSkew=1 topology spread constraints, and injects the downward labels for you. Provide zones: [...] if you want to pin to a specific set of availability zones. Enabling this on an existing NodeSet causes a one-time rolling restart.
Can I trigger a rolling restart of an Elasticsearch cluster on Kubernetes without editing the spec?
Yes. ECK 3.4 introduces two annotations on the Elasticsearch resource: eck.k8s.elastic.co/restart-trigger (set or change the value, e.g. a timestamp, to start a rolling restart) and eck.k8s.elastic.co/restart-allocation-delay (optional duration string passed to the Elasticsearch node shutdown API). Removing the trigger annotation does not start a new restart.
How do I enable mutual TLS between Kibana and Elasticsearch on Kubernetes?
With ECK 3.4, set spec.http.tls.client.authentication: true on the Elasticsearch CRD and reference it from Kibana via elasticsearchRef. ECK auto-generates a client certificate for Kibana, builds a trust bundle from any secret labeled eck.k8s.elastic.co/client-certificate: true, and configures xpack.security.http.ssl.client_authentication: required for you. mTLS for Kibana ↔ Elasticsearch is a technical preview in 3.4.
Does ECK 3.4 mTLS support cover all Stack components like Beats and Fleet?
Not yet. Kibana is the first Stack component to get first-class mTLS support in 3.4 — the operator auto-generates its client certificate. Support for APM Server, Beats, Fleet Server, Elastic Agent, Logstash, Maps, and Enterprise Search ships in the next release. A new recipe walks through manual mTLS for those components using cert-manager in the meantime.
Does ECK support FIPS 140-3?
Yes, in a separate operator image. ECK 3.4 publishes a FIPS-flavored build (docker.elastic.co/eck/eck-operator-fips:3.4.0, plus a UBI variant) with native Go FIPS 140-3 support. The standard eck-operator image is unchanged. For Elasticsearch 9.4.0 or later, ECK also generates and mounts a FIPS-compliant keystore password automatically when xpack.security.fips_mode.enabled: true is set.




