Configuration
Upgrade the Elastic Agent specification
You can upgrade the Elastic Agent version or change settings by editing the YAML specification. ECK applies the changes by performing a rolling restart of the Agent's Pods. Depending on the settings that you used, ECK will also set the outputs section of the configuration, or restart Elastic Agent on certificate rollover.
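For example, assuming an Agent named quickstart created from the quickstart example, bumping the version field in its manifest and re-applying it is enough to trigger the rolling restart. This is only an abbreviated sketch; the target version shown is illustrative:

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: quickstart
spec:
  version: 8.17.0  # raise this value to upgrade; ECK rolls the Agent Pods to the new version
  ...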
Customize the Elastic Agent configuration
The Elastic Agent configuration is defined in the config element:
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: quickstart
spec:
  version: 8.17.0
  elasticsearchRefs:
    - name: quickstart
  daemonSet:
    podTemplate:
      spec:
        securityContext:
          runAsUser: 0
  config:
    inputs:
      - name: system-1
        revision: 1
        type: system/metrics
        use_output: default
        meta:
          package:
            name: system
            version: 0.9.1
        data_stream:
          namespace: default
        streams:
          - id: system/metrics-system.cpu
            data_stream:
              dataset: system.cpu
              type: metrics
            metricsets:
              - cpu
            cpu.metrics:
              - percentages
              - normalized_percentages
            period: 10s
The root user is required to persist state in a hostPath volume. Refer to Running as a non-root user for alternatives.
Alternatively, it can be provided through a Secret specified in the configRef element. The Secret must have an agent.yml entry with this configuration:
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: quickstart
spec:
  version: 8.17.0
  elasticsearchRefs:
    - name: quickstart
  daemonSet:
    podTemplate:
      spec:
        securityContext:
          runAsUser: 0
  configRef:
    secretName: system-cpu-config
---
apiVersion: v1
kind: Secret
metadata:
  name: system-cpu-config
stringData:
  agent.yml: |-
    inputs:
      - name: system-1
        revision: 1
        type: system/metrics
        use_output: default
        meta:
          package:
            name: system
            version: 0.9.1
        data_stream:
          namespace: default
        streams:
          - id: system/metrics-system.cpu
            data_stream:
              dataset: system.cpu
              type: metrics
            metricsets:
              - cpu
            cpu.metrics:
              - percentages
              - normalized_percentages
            period: 10s
You can use the Fleet application in Kibana to generate the configuration for Elastic Agent, even when running in standalone mode. Check the Elastic Agent standalone documentation. Adding the corresponding integration package to Kibana also adds the related dashboards and visualizations.
Use multiple Elastic Agent outputs
Elastic Agent supports the use of multiple outputs. Therefore, the elasticsearchRefs element accepts multiple references to Elasticsearch clusters. ECK populates the outputs section of the Elastic Agent configuration based on those references. If you configure more than one output, you also have to specify a unique outputName attribute.
To send Elastic Agent's internal monitoring and log data to a different Elasticsearch cluster called agent-monitoring in the elastic-monitoring namespace, and the harvested metrics to our quickstart cluster, you have to define two elasticsearchRefs as shown in the following example:
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: quickstart
spec:
  version: 8.17.0
  daemonSet:
    podTemplate:
      spec:
        securityContext:
          runAsUser: 0
  elasticsearchRefs:
    - name: quickstart
      outputName: default
    - name: agent-monitoring
      namespace: elastic-monitoring
      outputName: monitoring
  config:
    agent:
      monitoring:
        enabled: true
        use_output: monitoring
        logs: true
        metrics: true
    inputs:
      - name: system-1
        revision: 1
        type: system/metrics
        use_output: default
...
Customize the connection to an Elasticsearch cluster
The elasticsearchRefs element allows ECK to automatically configure Elastic Agent to establish a secured connection to one or more managed Elasticsearch clusters. By default, it targets all nodes in your cluster. If you want to direct traffic to specific nodes of your Elasticsearch cluster, refer to Traffic Splitting for more information and examples.
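As a minimal sketch, assuming you have already created a custom Kubernetes Service (here named quickstart-es-coordinating, a hypothetical name) that selects only the Elasticsearch nodes that should receive Agent traffic, you can point the reference at that Service through the serviceName field:

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: quickstart
spec:
  version: 8.17.0
  elasticsearchRefs:
    - name: quickstart
      # Hypothetical Service created as described in Traffic Splitting.
      serviceName: quickstart-es-coordinating
  ...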
Set Elastic Agent outputs manually
If the elasticsearchRefs element is specified, ECK populates the outputs section of the Elastic Agent configuration. ECK creates a user with appropriate roles and permissions and uses its credentials. If required, it also mounts the CA certificate in all Agent Pods, and recreates Pods when this certificate changes. Moreover, an elasticsearchRef element can refer to an ECK-managed Elasticsearch cluster by filling the name, namespace, and serviceName fields accordingly, or to a Kubernetes secret that contains the connection information for an Elasticsearch cluster not managed by ECK. In the latter case, the secret must contain a url field and either username with password or api-key for authenticating against the Elasticsearch cluster. Refer to Connect to external Elastic resources for additional details.
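For example, the following sketch references an Elasticsearch cluster that is not managed by ECK. The Secret name external-es-ref and all of its values are placeholders, not part of the official example:

apiVersion: v1
kind: Secret
metadata:
  name: external-es-ref
stringData:
  url: https://external-es.example.com:9200
  username: agent-user
  password: CHANGEME
---
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: quickstart
spec:
  version: 8.17.0
  elasticsearchRefs:
    # Secret-based reference to a cluster not managed by ECK.
    - secretName: external-es-ref
  ...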
The outputs can also be set manually. To do that, remove the elasticsearchRefs element from the specification and include an appropriate output configuration in the config element, or indirectly through the configRef mechanism.
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: quickstart
spec:
  version: 8.17.0
  daemonSet:
    podTemplate:
      spec:
        securityContext:
          runAsUser: 0
  config:
    outputs:
      default:
        type: elasticsearch
        hosts:
          - "https://my-custom-elasticsearch-cluster.cloud.elastic.co:9243"
        password: ES_PASSWORD
        username: ES_USER
...
Choose the deployment model
Depending on the use case, Elastic Agent may need to be deployed as a Deployment, a DaemonSet, or a StatefulSet. Provide a podTemplate element under either the deployment or the daemonSet element in the specification to choose how your Elastic Agents should be deployed. When choosing the deployment option you can additionally specify the strategy used to replace old Pods with new ones.
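For example, a minimal sketch of a Deployment-based Agent with an explicit rollout strategy; the replica count and strategy values shown here are illustrative, not recommendations:

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: quickstart
spec:
  version: 8.17.0
  deployment:
    replicas: 2
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1
        maxSurge: 1
  ...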
Similarly, you can set the update strategy when deploying as a DaemonSet. This allows you to control the rollout speed for new configuration by modifying the maxUnavailable setting:
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: quickstart
spec:
  version: 8.17.0
  daemonSet:
    podTemplate:
      spec:
        securityContext:
          runAsUser: 0
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 3
...
Check Set compute resources for Beats and Elastic Agent for more information on how to use the Pod template to adjust the resources given to Elastic Agent.
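As a short sketch of what that looks like, resource requests and limits go on the Agent container in the Pod template; the values below are illustrative only, not sizing recommendations:

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: quickstart
spec:
  version: 8.17.0
  daemonSet:
    podTemplate:
      spec:
        containers:
          # Override resources for the Elastic Agent container.
          - name: agent
            resources:
              requests:
                memory: 300Mi
                cpu: 200m
              limits:
                memory: 500Mi
  ...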
Role Based Access Control for Elastic Agent
Some Elastic Agent features, such as the Kubernetes integration, require that Agent Pods interact with Kubernetes APIs. This functionality requires specific permissions. The standard Kubernetes RBAC rules apply. For example, to allow API interactions:
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent
spec:
  version: 8.17.0
  elasticsearchRefs:
    - name: elasticsearch
  daemonSet:
    podTemplate:
      spec:
        automountServiceAccountToken: true
        serviceAccountName: elastic-agent
        securityContext:
          runAsUser: 0
...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
      - nodes
      - nodes/metrics
      - nodes/proxy
      - nodes/stats
      - events
    verbs:
      - get
      - watch
      - list
  - nonResourceURLs:
      - /metrics
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent
subjects:
  - kind: ServiceAccount
    name: elastic-agent
    namespace: default
roleRef:
  kind: ClusterRole
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io
Deploying Elastic Agent in secured clusters
To deploy Elastic Agent in clusters with the Pod Security Policy admission controller enabled, or in OpenShift clusters, you might need to grant additional permissions to the Service Account used by the Elastic Agent Pods. Those Service Accounts must be bound to a Role or ClusterRole that has use permission for the required Pod Security Policy or Security Context Constraints. Different Elastic Agent integrations might require different settings in their PSP/SCC.
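As a minimal sketch, granting use of a Pod Security Policy to the elastic-agent Service Account could look like the following. The ClusterRole name and the referenced PSP name elastic-agent-psp are hypothetical; adapt them to the policy your cluster actually requires:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent-psp
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    # Hypothetical PSP that allows what the Agent Pods need (for example runAsUser: 0 and hostPath volumes).
    resourceNames: ["elastic-agent-psp"]
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent-psp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: elastic-agent-psp
subjects:
  - kind: ServiceAccount
    name: elastic-agent
    namespace: default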
Running as a non-root user
In order to run Elastic Agent as a non-root user, you must choose how you want to persist data to the Agent's volume.
- Run Elastic Agent with an emptyDir volume. This has the downside of not persisting data between restarts of the Elastic Agent, which can duplicate work done by the previously running Agent.
- Run Elastic Agent with a hostPath volume in addition to a DaemonSet running as root that sets up permissions for the agent user.
In addition to these decisions, if you are running Elastic Agent in Fleet mode as a non-root user, you must configure ssl.certificate_authorities in each xpack.fleet.outputs entry to trust the CA of the Elasticsearch cluster.
To run Elastic Agent with an emptyDir volume:
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server
spec:
  deployment:
    podTemplate:
      spec:
        securityContext:
          fsGroup: 1000
        volumes:
          - name: agent-data
            emptyDir: {}
...
Gid 1000 is the default group at which the Agent container runs. Adjust as necessary if runAsGroup has been modified.
To run Elastic Agent with a hostPath volume and a DaemonSet to maintain permissions:
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server-sample
  namespace: elastic-apps
spec:
  mode: fleet
  fleetServerEnabled: true
  deployment: {}
...
---
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent-sample
  namespace: elastic-apps
spec:
  daemonSet: {}
...
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: manage-agent-hostpath-permissions
  namespace: elastic-apps
spec:
  selector:
    matchLabels:
      name: manage-agent-hostpath-permissions
  template:
    metadata:
      labels:
        name: manage-agent-hostpath-permissions
    spec:
      # serviceAccountName: elastic-agent
      volumes:
        - hostPath:
            path: /var/lib/elastic-agent
            type: DirectoryOrCreate
          name: "agent-data"
      initContainers:
        - name: manage-agent-hostpath-permissions
          # image: registry.access.redhat.com/ubi9/ubi-minimal:latest
          image: docker.io/bash:5.2.15
          resources:
            limits:
              cpu: 100m
              memory: 32Mi
          securityContext:
            # privileged: true
            runAsUser: 0
          volumeMounts:
            - mountPath: /var/lib/elastic-agent
              name: agent-data
          command:
            - 'bash'
            - '-e'
            - '-c'
            - |-
              # Adjust this with /var/lib/elastic-agent/YOUR-NAMESPACE/YOUR-AGENT-NAME/state
              # Multiple directories are supported for the fleet-server + agent use case.
              dirs=(
                "/var/lib/elastic-agent/default/elastic-agent/state"
                "/var/lib/elastic-agent/default/fleet-server/state"
              )
              for dir in ${dirs[@]}; do
                mkdir -p "${dir}"
                # chcon is only required when running in an SELinux-enabled/OpenShift environment.
                # chcon -Rt svirt_sandbox_file_t "${dir}"
                chmod g+rw "${dir}"
                # Gid 1000 is the default group at which the Agent container runs. Adjust as necessary if `runAsGroup` has been modified.
                chgrp 1000 "${dir}"
                if [ -n "$(ls -A ${dir} 2>/dev/null)" ]
                then
                  # Gid 1000 is the default group at which the Agent container runs. Adjust as necessary if `runAsGroup` has been modified.
                  chgrp 1000 "${dir}"/*
                  chmod g+rw "${dir}"/*
                fi
              done
      containers:
        - name: sleep
          image: gcr.io/google-containers/pause-amd64:3.2
The commented serviceAccountName is only required when running in an SELinux-enabled/OpenShift environment. Ensure this user has been added to the privileged security context constraints (SCC) in the correct namespace.
The UBI image is only required when the chcon binary is needed, that is, in an SELinux-enabled/OpenShift environment; otherwise the smaller bash image is sufficient.
Privileged is only required when running in an SELinux-enabled/OpenShift environment.
When running Agent in Fleet mode as a non-root user, Kibana must be configured to properly accept the CA of the Elasticsearch cluster:
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-sample
spec:
  config:
    # xpack.fleet.agents.elasticsearch.hosts:
    xpack.fleet.agents.fleet_server.hosts: ["https://fleet-server-sample-agent-http.default.svc:8220"]
    xpack.fleet.outputs:
      - id: eck-fleet-agent-output-elasticsearch
        is_default: true
        name: eck-elasticsearch
        type: elasticsearch
        hosts:
          - "https://elasticsearch-sample-es-http.default.svc:9200"
        ssl:
          certificate_authorities: ["/mnt/elastic-internal/elasticsearch-association/default/elasticsearch-sample/certs/ca.crt"]
The commented xpack.fleet.agents.elasticsearch.hosts entry must not exist when running Agent in Fleet mode as a non-root user.
Note that the correct URL for Elasticsearch is the HTTPS service of the Elasticsearch cluster, in this example https://elasticsearch-sample-es-http.default.svc:9200.
Note that the correct path for the Elasticsearch certificate_authorities is the CA mounted from the Elasticsearch association, in this example /mnt/elastic-internal/elasticsearch-association/default/elasticsearch-sample/certs/ca.crt.