Metricbeat Reports cgroup Metrics on Linux
Cgroup, short for control group, is a Linux kernel mechanism for allocating resources - such as CPU time, memory, or block I/O time - to a group of processes. Cgroup metrics are especially useful for collecting metrics from containerized processes because each container is normally assigned its own cgroup. This allows Metricbeat to collect detailed CPU, memory, and disk metrics from processes and even attribute those processes to a specific container ID.
When processes are assigned to a specific cgroup, Metricbeat will report the stats and configured limits of the cgroup. The data is reported in the system process metricset. Metricbeat is capable of collecting data from the cpu, cpuacct, memory, and blkio subsystems. Here’s the PR.
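As a sketch, the cgroup data arrives through the regular system module; a minimal Metricbeat configuration enabling the process metricset could look like the following (the period and module layout are illustrative, not prescribed by this change):

```yaml
metricbeat.modules:
  - module: system
    metricsets: ["process"]
    period: 10s
```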
Metricbeat: New PostgreSQL module
Metricbeat: Add file descriptor usage
Information about the number of file descriptors for each process is now exported in Metricbeat under
system.process.fd. This information is available on Linux and FreeBSD.
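For illustration, an exported event could carry a fragment like the following under system.process.fd (the field layout and values here are hypothetical, shown only to suggest the shape of the data):

```json
{
  "system": {
    "process": {
      "fd": {
        "open": 18,
        "limit": {
          "soft": 1024,
          "hard": 4096
        }
      }
    }
  }
}
```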
Packetbeat: Refactor HTTP exported fields
Previously, Content-Type and Content-Length were exported only for the HTTP response, under fields such as
http.content_length. With this PR, Content-Type and Content-Length are also exported for the HTTP request.
To make the fields easier to understand, the request and response details are now grouped under
http.request and http.response, which renames the existing fields in a breaking way.
Packetbeat: Export HTTP body
With this change, the bodies of the HTTP request and HTTP response are exported. You can configure which types of HTTP attachments to export through the include_body_for option. For example, to include the JSON attachments of HTTP transactions, you need to configure the following:
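A sketch of such a configuration, assuming the 5.x-style Packetbeat protocol layout (the ports shown are placeholders):

```yaml
packetbeat.protocols.http:
  ports: [80, 8080]
  include_body_for: ["application/json"]
```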
The JSON attachment for the HTTP request is then exported in
http.request.body, and the attachment for the response in http.response.body.
Filebeat: Avoid exporting fields as pointers
The events exported by the Beats shouldn’t contain pointer fields, only basic types. This PR fixes the type of the message field exported by Filebeat, exporting it as a string instead of a pointer to a string.
Packetbeat: Fix mappings for Packetbeat flows
Fix the mappings of the source and destination statistics for Packetbeat flows, as they were marked as not_analyzed strings instead of longs.
drop_event action from MetricSet filters
The drop_event filter was causing the MetricSet data to be nil, but the event was still being sent. This PR causes the event to actually be dropped.
Libbeat: Accept array of strings in processor’s condition
Enhance the contains condition used in processors to accept an array of strings, so you can check whether an exported field contains a certain string:
processors:
 - drop_event:
     when:
       contains:
         tags: "service-1"
Docs: Restructure the FAQ page
In the current version, the questions on the FAQ page of each Beat were split into one page per question, which made it difficult to search for your problem, especially with the growing number of questions for a single Beat. This PR organizes the FAQ in a single page, where you can use the browser shortcuts to easily search for keywords. Here is what the FAQ page for Packetbeat looks like.
system.process.cpu.start_time type to date
The start time of the process was exported as a string (e.g. “12:03”), which made it difficult to filter by time. The type of
system.process.cpu.start_time is changed to date in this PR.
Metricbeat: Replace nanos with ns
The fields in nanoseconds that Metricbeat exported with a nanos suffix are renamed to use ns. For example,
system.process.cgroup.cpu.stats.throttled.nanos becomes system.process.cgroup.cpu.stats.throttled.ns. This breaks compatibility with Metricbeat 5.0.0-alpha4.
Libbeat: Fix Elasticsearch error parsing
Fix regression in the Elasticsearch bulk-request error parsing. This resolves problems when ingest node processors do not accept the document provided.
Libbeat: Update kafka client
This PR updates the Kafka client library, adding Kafka 0.10 support. It also exposes some new Kafka configuration settings. Setting the protocol version to 0.10 in the Beats config makes the event timestamp be reported to Kafka (requires Kafka 0.10 running).
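A sketch of the corresponding output configuration, assuming a version setting on the Kafka output (the hosts and topic values are placeholders):

```yaml
output.kafka:
  hosts: ["kafka1:9092"]
  topic: "beats"
  version: "0.10"
```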
Look for config files relative to path.config
This changes how the CLI flags are handled. If the file specified by
-c is not an absolute path, it is resolved relative to the path given by
-path.config, which defaults to the home path given by -path.home. This also solves a known issue we have in 5.0.0-alpha5.
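For example, with a hypothetical invocation like the one below, metricbeat.yml would now be looked up under /etc/metricbeat rather than the current working directory (the paths are illustrative):

```shell
./metricbeat -c metricbeat.yml -path.config /etc/metricbeat
```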