Configuration

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

Upgrade the Logstash specification

You can upgrade the Logstash version or change settings by editing the YAML specification. ECK applies the changes by performing a rolling restart of Logstash Pods.
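
For example, assuming the quickstart resource used throughout this page, a minimal sketch of an upgrade is to edit the resource in place and change spec.version (the target version below is illustrative):

# Open the resource for editing, change spec.version
# (for example from 8.13.4 to 8.14.0 -- illustrative only), then save and exit.
kubectl edit logstash quickstart

# Watch ECK perform the rolling restart of the Logstash Pods.
kubectl get pods -w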

Logstash configuration

Define the Logstash configuration (the ECK equivalent to logstash.yml) in the spec.config section:

apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
  name: quickstart
spec:
  version: 8.13.4
  count: 1
  elasticsearchRefs:
  - name: quickstart
    clusterName: qs
  config: 
    pipeline.workers: 4
    log.level: debug

Customize Logstash configuration using logstash.yml settings in the config section.

Alternatively, you can provide the configuration through a Secret specified in the spec.configRef section. The Secret must have a logstash.yml entry with these settings:

apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
  name: quickstart
spec:
  version: 8.13.4
  count: 1
  elasticsearchRefs:
  - name: quickstart
    clusterName: qs
  configRef:
    secretName: quickstart-config
---
apiVersion: v1
kind: Secret
metadata:
  name: quickstart-config
stringData:
  logstash.yml: |-
    pipeline.workers: 4
    log.level: debug

Configuring Logstash pipelines

Define Logstash pipelines in the spec.pipelines section (the ECK equivalent to pipelines.yml):

apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
  name: quickstart
spec:
  version: 8.13.4
  count: 1
  elasticsearchRefs:
    - clusterName: qs
      name: quickstart
  pipelines:
    - pipeline.id: main
      config.string: |
        input {
          beats {
            port => 5044
          }
        }
        output {
          elasticsearch {
            hosts => [ "${QS_ES_HOSTS}" ]
            user => "${QS_ES_USER}"
            password => "${QS_ES_PASSWORD}"
            cacert => "${QS_ES_SSL_CERTIFICATE_AUTHORITY}"
          }
        }

Alternatively, you can provide the pipeline configuration through a Secret specified in the spec.pipelinesRef element. The Secret must have a pipelines.yml entry with this configuration:

apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
  name: quickstart
spec:
  version: 8.13.4
  count: 1
  elasticsearchRefs:
    - clusterName: qs
      name: quickstart
  pipelinesRef:
    secretName: quickstart-pipeline
---
apiVersion: v1
kind: Secret
metadata:
  name: quickstart-pipeline
stringData:
  pipelines.yml: |-
    - pipeline.id: main
      config.string: |
        input {
          beats {
            port => 5044
          }
        }
        output {
          elasticsearch {
            hosts => [ "${QS_ES_HOSTS}" ]
            user => "${QS_ES_USER}"
            password => "${QS_ES_PASSWORD}"
            cacert => "${QS_ES_SSL_CERTIFICATE_AUTHORITY}"
          }
        }

Logstash on ECK supports all options present in pipelines.yml, including settings to update the number of workers and the size of the batch that the pipeline processes. This also includes using path.config to point to volumes mounted on the Logstash container.
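
For example, here is a minimal sketch (the volume, ConfigMap, and mount path names are illustrative, not part of the quickstart) that mounts pipeline files from a ConfigMap and points path.config at the mounted directory:

apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
  name: quickstart
spec:
  version: 8.13.4
  count: 1
  pipelines:
    - pipeline.id: main
      path.config: "/usr/share/logstash/pipeline/main"   # reads config files from the directory mounted below
  podTemplate:
    spec:
      containers:
      - name: logstash
        volumeMounts:
        - name: pipeline-config                  # illustrative name
          mountPath: /usr/share/logstash/pipeline/main
          readOnly: true
      volumes:
      - name: pipeline-config
        configMap:
          name: quickstart-pipeline-files        # a ConfigMap you create containing your .conf files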

Logstash persistent queues (PQs) and dead letter queues (DLQs) are not currently managed by the Logstash operator; using them requires you to create and manage your own Volumes and VolumeMounts.
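
For example, a minimal sketch of a persistent queue backed by a PersistentVolumeClaim that you create and manage yourself (the claim, volume, and path names are illustrative):

apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
  name: quickstart
spec:
  version: 8.13.4
  count: 1
  config:
    queue.type: persisted                  # enable the persistent queue
    path.queue: /usr/share/logstash/pq     # must match the volume mount below
  podTemplate:
    spec:
      containers:
      - name: logstash
        volumeMounts:
        - name: pq                         # illustrative name
          mountPath: /usr/share/logstash/pq
      volumes:
      - name: pq
        persistentVolumeClaim:
          claimName: logstash-pq           # a PVC you create and manage outside the operator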

Using Elasticsearch in Logstash pipelines

The spec.elasticsearchRefs section provides a mechanism to help configure Logstash to establish a secured connection to one or more managed Elasticsearch clusters. By default, each elasticsearchRef targets all nodes in its referenced Elasticsearch cluster. If you want to direct traffic to specific nodes of your Elasticsearch cluster, refer to Traffic Splitting for more information and examples.

When you use elasticsearchRefs in a Logstash pipeline, the Logstash operator creates the necessary resources from the associated Elasticsearch cluster, and provides environment variables to allow these resources to be accessed from the pipeline configuration. Environment variables are replaced at runtime with the appropriate values. The environment variables have a fixed naming convention:

  • NORMALIZED_CLUSTERNAME_ES_HOSTS
  • NORMALIZED_CLUSTERNAME_ES_USER
  • NORMALIZED_CLUSTERNAME_ES_PASSWORD
  • NORMALIZED_CLUSTERNAME_ES_SSL_CERTIFICATE_AUTHORITY

where NORMALIZED_CLUSTERNAME is the value taken from the clusterName field of the elasticsearchRef property, capitalized, with - transformed to _. For example, prod-es would become PROD_ES.

The clusterName value should be unique across all referenced Elasticsearch clusters in the same Logstash spec.

The Logstash ECK operator creates a user called eck_logstash_user_role when an elasticsearchRef is specified. This user has the following permissions:

  "cluster": ["monitor", "manage_ilm", "read_ilm", "manage_logstash_pipelines", "manage_index_templates", "cluster:admin/ingest/pipeline/get",],
  "indices": [
    {
      "names": [ "logstash", "logstash-*", "ecs-logstash", "ecs-logstash-*", "logs-*", "metrics-*", "synthetics-*", "traces-*" ],
      "privileges": ["manage", "write", "create_index", "read", "view_index_metadata"]
    }

You can update user permissions to include more indices if the Elasticsearch output plugin is expected to use indices other than the defaults. See the Logstash configuration with a custom index sample configuration, which creates a user that writes to a custom index.
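
As a rough sketch of that approach (the role name, index pattern, and Secret name below are illustrative and not taken from the sample), you can define an additional role in a Secret and reference it from the Elasticsearch resource under spec.auth.roles:

apiVersion: v1
kind: Secret
metadata:
  name: custom-logstash-roles              # illustrative name
stringData:
  roles.yml: |-
    custom_logstash_writer:                # illustrative role granting access to a custom index pattern
      cluster: ["monitor", "manage_index_templates", "manage_ilm"]
      indices:
        - names: [ "my-custom-index-*" ]
          privileges: ["manage", "write", "create_index", "read", "view_index_metadata"]

The Elasticsearch resource can then pick the role up through spec.auth.roles (secretName: custom-logstash-roles) and assign it to a user whose credentials the elasticsearch output uses in place of the generated ones; refer to the linked sample for the complete, authoritative configuration.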

This example demonstrates how to create a Logstash deployment that connects to different Elasticsearch instances, one of which is in a separate namespace:

apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
  name: quickstart
spec:
  version: 8.13.4
  count: 1
  elasticsearchRefs:        
    - clusterName: prod-es  
      name: prod
    - clusterName: qa-es    
      name: qa
      namespace: qa
  pipelines:
    - pipeline.id: main
      config.string: |
        input {
          beats {
            port => 5044
          }
        }
        output {
          elasticsearch {   
            hosts => [ "${PROD_ES_ES_HOSTS}" ]
            user => "${PROD_ES_ES_USER}"
            password => "${PROD_ES_ES_PASSWORD}"
            cacert => "${PROD_ES_ES_SSL_CERTIFICATE_AUTHORITY}"
          }
          elasticsearch {   
            hosts => [ "${QA_ES_ES_HOSTS}" ]
            user => "${QA_ES_ES_USER}"
            password => "${QA_ES_ES_PASSWORD}"
            cacert => "${QA_ES_ES_SSL_CERTIFICATE_AUTHORITY}"
          }
        }

Define Elasticsearch references in the CRD. This creates the appropriate Secrets to store certificate details and the rest of the connection information, and creates environment variables that can be referenced in Logstash pipeline configurations.

The first reference (prod) points to an Elasticsearch cluster residing in the same namespace as the Logstash instances.

The second reference (qa) points to an Elasticsearch cluster residing in a different namespace from the Logstash instances.

The elasticsearch output definitions use the environment variables created by the Logstash operator when an elasticsearchRef is specified. Note the use of "normalized" versions of the clusterName in the environment variables used to populate the relevant fields.

Expose services

By default, the Logstash operator creates a headless Service for the metrics endpoint to enable metric collection by the Metricbeat sidecar for Stack Monitoring:

kubectl get service quickstart-ls-api
NAME                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
quickstart-ls-api   ClusterIP   None         <none>        9600/TCP   48s
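
If you want to verify the endpoint manually, you can port-forward through this Service and query the Logstash node stats API (this assumes you have not enabled TLS on the API Service, so the endpoint answers over plain HTTP):

# Forward local port 9600 to the metrics Service, then query the node stats API.
kubectl port-forward service/quickstart-ls-api 9600
curl http://localhost:9600/_node/stats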

Additional services can be added in the spec.services section of the resource:

services:
  - name: beats
    service:
      spec:
        ports:
        - port: 5044
          name: "winlogbeat"
          protocol: TCP
        - port: 5045
          name: "filebeat"
          protocol: TCP
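
Assuming the operator follows the same <logstash-name>-ls-<service-name> naming pattern as the API Service shown above (an assumption based on that example), the beats Service from this snippet would be created for the quickstart resource as quickstart-ls-beats:

# The Service name is an assumption based on the <name>-ls-<service> pattern above.
kubectl get service quickstart-ls-beats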

Pod configuration

You can customize the Logstash Pod using a Pod template, defined in the spec.podTemplate section of the configuration.

This example demonstrates how to create a Logstash deployment with increased heap size and resource limits:

apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
  name: logstash-sample
spec:
  version: 8.13.4
  count: 1
  elasticsearchRefs:
    - name: "elasticsearch-sample"
      clusterName: "sample"
  podTemplate:
    spec:
      containers:
      - name: logstash
        env:
        - name: LS_JAVA_OPTS
          value: "-Xmx2g -Xms2g"
        resources:
          requests:
            memory: 1Gi
            cpu: 0.5
          limits:
            memory: 4Gi
            cpu: 2

The name of the container in the Pod template must be logstash.