Autodiscover
When you run applications on containers, they become moving targets to the monitoring system. Autodiscover allows you to track them and adapt settings as changes happen. By defining configuration templates, the autodiscover subsystem can monitor services as they start running.
You define autodiscover settings in the filebeat.autodiscover section of the filebeat.yml config file. To enable autodiscover, you specify a list of providers.
Providers
Autodiscover providers work by watching for events on the system and translating those events into internal autodiscover events with a common format. When you configure the provider, you can optionally use fields from the autodiscover event to set conditions that, when met, launch specific configurations.
On start, Filebeat will scan existing containers and launch the proper configs for them. Then it will watch for new start/stop events. This ensures you don’t need to worry about state, but only define your desired configs.
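As a sketch of the overall structure (the provider type, condition, and input shown here are placeholders; the provider-specific sections below cover the real options):

filebeat.autodiscover:
  providers:
    - type: docker              # one provider entry per provider you want to enable
      templates:
        - condition:            # optional; matched against fields of the autodiscover event
            contains:
              docker.container.image: redis
          config:               # inputs or modules launched when the condition matches
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log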
Docker
The Docker autodiscover provider watches for Docker containers to start and stop.
It has the following settings:
- host: (Optional) Docker socket (UNIX or TCP socket). It uses unix:///var/run/docker.sock by default.
- ssl: (Optional) SSL configuration to use when connecting to the Docker socket.
- cleanup_timeout: (Optional) Specify the time of inactivity before stopping the running configuration for a container, 60s by default.
- labels.dedot: (Optional) Defaults to false. If set to true, replaces dots in labels with _.
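For instance, a provider that connects to a remote Docker daemon over TCP and waits longer before stopping stale configurations might be sketched like this (the host address, timeout, and template are illustrative):

filebeat.autodiscover:
  providers:
    - type: docker
      host: "tcp://localhost:2376"   # illustrative; defaults to unix:///var/run/docker.sock
      cleanup_timeout: 120s          # illustrative; defaults to 60s
      labels.dedot: true
      templates:
        - condition:
            contains:
              docker.container.image: redis
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log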
These are the fields available within config templating. The docker.* fields will be available on each emitted event:
- host
- port
- docker.container.id
- docker.container.image
- docker.container.name
- docker.container.labels
For example:
{ "host": "10.4.15.9", "port": 6379, "docker": { "container": { "id": "382184ecdb385cfd5d1f1a65f78911054c8511ae009635300ac28b4fc357ce51" "name": "redis", "image": "redis:3.2.11", "labels": { "io.kubernetes.pod.namespace": "default" ... } } } }
You can define a set of configuration templates to be applied when the condition matches an event. Templates define a condition to match on autodiscover events, together with the list of configurations to launch when this condition happens.
Conditions match events from the provider. Providers use the same format for conditions that processors use.
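Because the processor condition syntax is used, conditions can also be combined; for instance (the label key and value are hypothetical):

condition:
  or:
    - contains:
        docker.container.image: redis
    - equals:
        docker.container.labels.env: "production"   # hypothetical label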
Configuration templates can contain variables from the autodiscover event. They can be accessed under the data namespace. For example, with the example event, "${data.port}" resolves to 6379.
Filebeat supports templates for inputs and modules.
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: redis
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
              exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines
This configuration launches a container input for all containers running an image with redis in the name.
labels.dedot defaults to true for docker autodiscover, which means dots in docker labels are replaced with _ by default.
If you are using modules, you can override the default input and use the container input instead.
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: redis
          config:
            - module: redis
              log:
                input:
                  type: container
                  paths:
                    - /var/lib/docker/containers/${data.docker.container.id}/*.log
When using autodiscover, you have to be careful when defining config templates, especially if they are reading from places holding information for several containers. For instance, under this file structure:
/mnt/logs/<container_id>/*.log
You can define a config template like this:
Wrong settings:
autodiscover.providers:
  - type: docker
    templates:
      - condition.contains:
          docker.container.image: nginx
        config:
          - type: log
            paths:
              - "/mnt/logs/*/*.log"
That would read all the files under the given path several times (one per nginx container). What you really want is to scope your template to the container that matched the autodiscover condition. Good settings:
autodiscover.providers:
  - type: docker
    templates:
      - condition.contains:
          docker.container.image: nginx
        config:
          - type: log
            paths:
              - "/mnt/logs/${data.docker.container.id}/*.log"
Kubernetes
The Kubernetes autodiscover provider watches for Kubernetes nodes, pods, and services to start, update, and stop.
The kubernetes autodiscover provider has the following configuration settings:
- node: (Optional) Specify the node to scope filebeat to in case it cannot be accurately detected, as when running filebeat in host network mode.
- namespace: (Optional) Select the namespace from which to collect the metadata. If it is not set, the processor collects metadata from all namespaces. It is unset by default. The namespace configuration only applies to Kubernetes resources that are namespace scoped.
- cleanup_timeout: (Optional) Specify the time of inactivity before stopping the running configuration for a container, 60s by default.
- kube_config: (Optional) Use the given config file as configuration for the Kubernetes client. If kube_config is not set, the KUBECONFIG environment variable will be checked, and if it is not present the client will fall back to InCluster.
- kube_client_options: (Optional) Additional options can be configured for the Kubernetes client. Currently client QPS and burst are supported; if not set, the Kubernetes client's default QPS and burst will be used. Example:

  kube_client_options:
    qps: 5
    burst: 10

- resource: (Optional) Select the resource to do discovery on. Currently supported Kubernetes resources are pod, service and node. If not configured, resource defaults to pod.
- scope: (Optional) Specify at what level autodiscover needs to be done. scope can take either node or cluster as values. node scope allows discovery of resources in the specified node. cluster scope allows cluster-wide discovery. Only pod and node resources can be discovered at node scope.
- add_resource_metadata: (Optional) Specify filters and configuration for the extra metadata that will be added to the event. Configuration parameters:
  - node or namespace: Specify labels and annotations filters for the extra metadata coming from node and namespace. By default all labels are included while annotations are not. To change the default behaviour, include_labels, exclude_labels and include_annotations can be defined. Those settings are useful when storing labels and annotations that require special handling to avoid overloading the storage output. Note: wildcards are not supported for those settings. The enrichment of node or namespace metadata can be individually disabled by setting enabled: false.
  - deployment: If the resource is pod and it is created from a deployment, by default the deployment name is added; this can be disabled by setting deployment: false.
  - cronjob: If the resource is pod and it is created from a cronjob, by default the cronjob name is added; this can be disabled by setting cronjob: false.
  Example:

  add_resource_metadata:
    namespace:
      include_labels: ["namespacelabel1"]
    node:
      include_labels: ["nodelabel2"]
      include_annotations: ["nodeannotation1"]
    deployment: false
    cronjob: false

- unique: (Optional) Defaults to false. Marking an autodiscover provider as unique results in the provider enabling the provided templates only when it gains the leader lease. This setting can only be combined with cluster scope. When unique is enabled, the resource and add_resource_metadata settings are not taken into account.
- leader_lease: (Optional) Defaults to filebeat-cluster-leader. This will be the name of the lock lease. You can monitor the status of the lease with kubectl describe lease filebeat-cluster-leader. Different Beats that refer to the same leader lease will be competitors in holding the lease, and only one will be elected as leader each time.
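For instance, a cluster-scoped provider that only activates its templates on the Filebeat instance holding the leader lease could be sketched like this (the input type and path are illustrative):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      scope: cluster
      unique: true
      leader_lease: filebeat-cluster-leader
      templates:
        - config:
            - type: log
              paths:
                - /var/log/cluster-wide/*.log   # hypothetical path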
The configuration of templates and conditions is similar to that of the Docker provider. Configuration templates can contain variables from the autodiscover event. They can be accessed under the data namespace.
These are the fields available within config templating. The kubernetes.* fields will be available on each emitted event.
Generic fields:
- host
- port (if exposed)
- kubernetes.labels
- kubernetes.annotations
Pod specific:
- kubernetes.container.id
- kubernetes.container.image
- kubernetes.container.name
- kubernetes.namespace
- kubernetes.node.name
- kubernetes.pod.name
- kubernetes.pod.uid
Node specific:
- kubernetes.node.name
- kubernetes.node.uid
Service specific:
- kubernetes.namespace
- kubernetes.service.name
- kubernetes.service.uid
- kubernetes.annotations
If the include_annotations config is added to the provider config, then the list of annotations present in the config are added to the event.
If the include_labels config is added to the provider config, then the list of labels present in the config will be added to the event.
If the exclude_labels config is added to the provider config, then the list of labels present in the config will be excluded from the event.
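For example, a provider that keeps only a couple of labels and annotations might be configured like this (the label and annotation names are hypothetical):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      include_labels: ["app"]                        # hypothetical label key
      include_annotations: ["prometheus.io/scrape"]  # hypothetical annotation key
      templates:
        - condition:
            equals:
              kubernetes.labels.app: "nginx"         # hypothetical label value
          config:
            - type: container
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log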
If the labels.dedot config is set to true in the provider config, then dots (.) in labels will be replaced with _. It defaults to true.
If the annotations.dedot config is set to true in the provider config, then dots (.) in annotations will be replaced with _. It defaults to true.
Starting from the 8.6 release, kubernetes.labels.* used in config templating are not dedoted regardless of the labels.dedot value. This config parameter only affects the fields added in the final Elasticsearch document. For example, for a pod with the label app.kubernetes.io/name=ingress-nginx, the matching condition should be condition.equals: kubernetes.labels.app.kubernetes.io/name: "ingress-nginx". If labels.dedot is set to true (the default value), the label will be stored in Elasticsearch as kubernetes.labels.app_kubernetes_io/name. The same applies to Kubernetes annotations.
For example:
{ "host": "172.17.0.21", "port": 9090, "kubernetes": { "container": { "id": "bb3a50625c01b16a88aa224779c39262a9ad14264c3034669a50cd9a90af1527", "image": "prom/prometheus", "name": "prometheus" }, "labels": { "project": "prometheus", ... }, "namespace": "default", "node": { "name": "minikube" }, "pod": { "name": "prometheus-2657348378-k1pnh" } }, }
Filebeat supports templates for inputs and modules.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.namespace: kube-system
          config:
            - type: container
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log
              exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines
This configuration launches a container input for all containers of pods running in the Kubernetes namespace kube-system.
If you are using modules, you can override the default input and use the container input instead.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.container.image: "redis"
          config:
            - module: redis
              log:
                input:
                  type: container
                  paths:
                    - /var/log/containers/*-${data.kubernetes.container.id}.log
Jolokia
The Jolokia autodiscover provider uses Jolokia Discovery to find agents running in your host or your network.
The configuration of this provider consists of a set of network interfaces, as well as a set of templates as in other providers. The network interfaces are the ones used for discovery probes. Each item of interfaces has these settings:
- name: the name of the interface (e.g. br0). It can contain a wildcard as a suffix to apply the same settings to multiple network interfaces of the same type (e.g. br*).
- interval: time between probes (defaults to 10s)
- grace_period: time since the last reply to consider an instance stopped (defaults to 30s)
- probe_timeout: max time to wait for responses after a probe is sent (defaults to 1s)
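For example, probing all bridge interfaces with tighter timings could be sketched like this (the interface pattern, timings, and template are illustrative):

filebeat.autodiscover:
  providers:
    - type: jolokia
      interfaces:
        - name: br*              # wildcard suffix matches br0, br1, ...
          interval: 5s
          grace_period: 15s
          probe_timeout: 1s
      templates:
        - condition:
            contains:
              jolokia.server.product: "tomcat"   # hypothetical product name
          config:
            - type: log
              paths:
                - /var/log/tomcat/*.log          # hypothetical path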
The Jolokia Discovery mechanism is supported by any Jolokia agent since version 1.2.0. It is enabled by default when Jolokia is included in the application as a JVM agent, but disabled in other cases such as the OSGi or WAR (Java EE) agents. In any case, this feature is controlled with two properties:
- discoveryEnabled, to enable the feature
- discoveryAgentUrl, if set, this is the URL announced by the agent when being discovered; setting this parameter implicitly enables the feature
There are multiple ways of setting these properties, and they can vary from application to application; please refer to the documentation of your application to find the most suitable way to set them in your case.
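As one common case (a sketch; the jar name, URL, and application are illustrative and your deployment may differ), the properties can be passed as options to the Jolokia JVM agent:

java -javaagent:jolokia-jvm-agent.jar=discoveryEnabled=true,discoveryAgentUrl=http://10.4.15.9:8778/jolokia -jar myapp.jar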
Jolokia Discovery is based on UDP multicast requests. Agents join the multicast group 239.192.48.84, port 24884, and discovery is done by sending queries to this group. You have to take into account that UDP traffic between Filebeat and the Jolokia agents has to be allowed. Also notice that this multicast address is in the 239.0.0.0/8 range, which is reserved for private use within an organization, so it can only be used in private networks.
These are the fields available within config templating. The jolokia.* fields will be available on each emitted event.
- jolokia.agent.id
- jolokia.agent.version
- jolokia.secured
- jolokia.server.product
- jolokia.server.vendor
- jolokia.server.version
- jolokia.url
Filebeat supports templates for inputs and modules:
filebeat.autodiscover:
  providers:
    - type: jolokia
      interfaces:
        - name: lo
      templates:
        - condition:
            contains:
              jolokia.server.product: "kafka"
          config:
            - module: kafka
              log:
                enabled: true
                var.paths:
                  - /var/log/kafka/*.log
This configuration starts the kafka module, which collects Kafka logs if Kafka is running. Discovery probes are sent using the local interface.
Nomad
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
The Nomad autodiscover provider watches for Nomad jobs to start, update, and stop.
The nomad autodiscover provider has the following configuration settings:
- address: (Optional) Specify the address of the Nomad agent. By default it will try to talk to a Nomad agent running locally (http://127.0.0.1:4646).
- region: (Optional) Region to use. If not provided, the default agent region is used.
- namespace: (Optional) Namespace to use. If not provided, the default namespace is used.
- secret_id: (Optional) SecretID to use if ACL is enabled in Nomad. This is an example ACL policy to apply to the token:

  namespace "*" {
    policy = "read"
  }
  node {
    policy = "read"
  }
  agent {
    policy = "read"
  }

- node: (Optional) Specify the node to scope filebeat to in case it cannot be accurately detected when node scope is used.
- scope: (Optional) Specify at what level autodiscover needs to be done. scope can take either node or cluster as values. node scope allows discovery of resources in the specified node. cluster scope allows cluster-wide discovery. Defaults to node.
- wait_time: (Optional) Limits how long a Watch will block. If not specified (or set to 0), the default configuration from the agent will be used.
- allow_stale: (Optional) Allows any Nomad server (non-leader) to service a read. This normally means that the local node where filebeat is allocated will service filebeat's requests. Defaults to true.
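For instance, pointing Filebeat at a remote Nomad agent with ACLs enabled could be sketched like this (the address, token reference, and template are illustrative):

filebeat.autodiscover:
  providers:
    - type: nomad
      address: http://nomad.example.com:4646   # illustrative address
      namespace: default
      secret_id: ${NOMAD_TOKEN}                # illustrative; a token with the read policy above
      allow_stale: true
      scope: node
      templates:
        - condition:
            equals:
              nomad.job.type: service           # illustrative condition
          config:
            - type: log
              paths:
                - /var/lib/nomad/alloc/${data.nomad.allocation.id}/alloc/logs/${data.nomad.task.name}.*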
The configuration of templates and conditions is similar to that of the Docker provider. Configuration templates can contain variables from the autodiscover event. They can be accessed under the data namespace.
These are the fields available within config templating. The nomad.* fields will be available on each emitted event.
- nomad.allocation.id
- nomad.allocation.name
- nomad.allocation.status
- nomad.datacenter
- nomad.job.name
- nomad.job.type
- nomad.namespace
- nomad.region
- nomad.task.name
- nomad.task.service.canary_tags
- nomad.task.service.name
- nomad.task.service.tags
If the include_labels config is added to the provider config, then the list of labels present in the config will be added to the event.
If the exclude_labels config is added to the provider config, then the list of labels present in the config will be excluded from the event.
If the labels.dedot config is set to true in the provider config, then dots (.) in labels will be replaced with _.
For example:
{ ... "region": "europe", "allocation": { "name": "coffeshop.api[0]", "id": "35eba07f-e5e4-20ac-6def-85117bee6efb", "status": "running" }, "datacenters": [ "europe-west4" ], "namespace": "default", "job": { "type": "service", "name": "coffeshop" }, "task": { "service": { "name": [ "coffeshop" ], "tags": [ "coffeshop", "nginx" ], "canary_tags": [ "coffeshop" ] }, "name": "api" }, ... }
Filebeat supports templates for inputs and modules.
filebeat.autodiscover:
  providers:
    - type: nomad
      node: nomad1
      scope: node
      hints.enabled: true
      allow_stale: true
      templates:
        - condition:
            equals:
              nomad.namespace: web
          config:
            - type: log
              paths:
                - /var/lib/nomad/alloc/${data.nomad.allocation.id}/alloc/logs/${data.nomad.task.name}.stderr.[0-9]*
              exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines
This configuration launches a log input for all jobs under the web Nomad namespace.
If you are using modules, you can override the default input and customize it to read from the ${data.nomad.task.name}.stdout and/or ${data.nomad.task.name}.stderr files.
filebeat.autodiscover:
  providers:
    - type: nomad
      templates:
        - condition:
            equals:
              nomad.task.service.tags: "redis"
          config:
            - module: redis
              log:
                input:
                  type: log
                  paths:
                    - /var/lib/nomad/alloc/${data.nomad.allocation.id}/alloc/logs/${data.nomad.task.name}.*
The docker input is currently not supported. Nomad doesn't expose the container ID associated with the allocation. Without the container ID, there is no way of generating the proper path for reading the container's logs.