Get started

Step 1: Configure application logging

If you are using the Elastic APM Java agent, the easiest way to transform your logs into ECS-compatible JSON format is through the log_ecs_reformatting configuration option. By setting this single option, the Java agent automatically imports the correct ECS-logging library and configures your logging framework to use it either instead of (OVERRIDE/REPLACE) or in addition to (SHADE) your current configuration. No other changes are required! Make sure to check out the other Logging configuration options to unlock the full potential of this option.
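For example, if you configure the agent through an elasticapm.properties file (one of several configuration sources the agent supports), the option can be set like this; OVERRIDE is just one of the supported values (OFF, SHADE, REPLACE, OVERRIDE):

# elasticapm.properties (a minimal sketch)
log_ecs_reformatting=OVERRIDE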

Otherwise, follow the steps below to manually apply ECS-formatting through your logging framework configuration. The following logging frameworks are supported:

  • Logback (default for Spring Boot)
  • Log4j2
  • Log4j
  • java.util.logging (JUL)
  • JBoss Log Manager

Add the dependency

The minimum required logback version is 1.1.

Download the latest version of the Elastic logging library from Maven Central.

Add a dependency to your application:

<dependency>
    <groupId>co.elastic.logging</groupId>
    <artifactId>logback-ecs-encoder</artifactId>
    <version>${ecs-logging-java.version}</version>
</dependency>
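If you build with Gradle instead of Maven, the equivalent declaration would look roughly like this (a sketch; replace the version placeholder with a concrete release from Maven Central):

dependencies {
    // hypothetical version placeholder; pick the latest release
    implementation 'co.elastic.logging:logback-ecs-encoder:1.x.x'
}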

If you are not using a dependency management tool like Maven, you have to manually add both the logback-ecs-encoder and ecs-logging-core jars to the classpath, for example to the $CATALINA_HOME/lib directory. Other than that, there are no required dependencies.
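For a Tomcat deployment, that manual step could look like this (a sketch; adjust the jar names to the version you downloaded):

# copy both jars onto the server's shared classpath
cp logback-ecs-encoder-1.x.x.jar ecs-logging-core-1.x.x.jar $CATALINA_HOME/lib/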

Use the ECS encoder/formatter/layout

Spring Boot applications

In src/main/resources/logback-spring.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="LOG_FILE" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/spring.log}"/>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml" />
    <include resource="org/springframework/boot/logging/logback/file-appender.xml" />
    <include resource="co/elastic/logging/logback/boot/ecs-console-appender.xml" />
    <include resource="co/elastic/logging/logback/boot/ecs-file-appender.xml" />
    <root level="INFO">
        <appender-ref ref="ECS_JSON_CONSOLE"/>
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="ECS_JSON_FILE"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>

You also need to add the following properties to your application.properties:

spring.application.name=my-application
# for Spring Boot 2.2.x+
logging.file.name=/path/to/my-application.log
# for older Spring Boot versions
logging.file=/path/to/my-application.log

Other applications

All you have to do is use the co.elastic.logging.logback.EcsEncoder instead of the default pattern encoder in logback.xml:

<encoder class="co.elastic.logging.logback.EcsEncoder">
    <serviceName>my-application</serviceName>
    <serviceVersion>my-application-version</serviceVersion>
    <serviceEnvironment>my-application-environment</serviceEnvironment>
    <serviceNodeName>my-application-cluster-node</serviceNodeName>
</encoder>
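For context, a complete minimal logback.xml using this encoder with a console appender might look like this (a sketch; the appender name and log level are illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <!-- EcsEncoder replaces the default PatternLayoutEncoder -->
        <encoder class="co.elastic.logging.logback.EcsEncoder">
            <serviceName>my-application</serviceName>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>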

Encoder Parameters

| Parameter name     | Type    | Default        | Description |
|--------------------|---------|----------------|-------------|
| serviceName        | String  |                | Sets the service.name field so you can filter your logs by a particular service name |
| serviceVersion     | String  |                | Sets the service.version field so you can filter your logs by a particular service version |
| serviceEnvironment | String  |                | Sets the service.environment field so you can filter your logs by a particular service environment |
| serviceNodeName    | String  |                | Sets the service.node.name field so you can filter your logs by a particular node of your clustered service |
| eventDataset       | String  | ${serviceName} | Sets the event.dataset field, used by the machine learning job of the Logs app to look for anomalies in the log rate |
| includeMarkers     | boolean | false          | Log Markers as tags |
| stackTraceAsArray  | boolean | false          | Serializes the error.stack_trace as a JSON array where each element is on a new line to improve readability; note that this requires a slightly more complex Filebeat configuration |
| includeOrigin      | boolean | false          | If true, adds the log.origin.file.name, log.origin.file.line and log.origin.function fields; note that you also have to set <includeCallerData>true</includeCallerData> on your appenders if you are using the async ones |
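The boolean parameters are set the same way as the string ones, as nested elements of the encoder. For example (illustrative values):

<encoder class="co.elastic.logging.logback.EcsEncoder">
    <serviceName>my-application</serviceName>
    <stackTraceAsArray>true</stackTraceAsArray>
    <includeOrigin>true</includeOrigin>
</encoder>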

To include any custom field in the output, use the following syntax:

<additionalField>
    <key>key1</key>
    <value>value1</value>
</additionalField>
<additionalField>
    <key>key2</key>
    <value>value2</value>
</additionalField>
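With the two additional fields above, each log event carries them as extra top-level keys, roughly like this (an abbreviated sketch of one output line; other ECS fields omitted):

{"@timestamp":"2024-01-01T12:00:00.000Z","log.level":"INFO","message":"hello","service.name":"my-application","key1":"value1","key2":"value2"}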

If you’re using the Elastic APM Java agent, log correlation is enabled by default starting in version 1.30.0. In previous versions, log correlation is off by default, but can be enabled by setting the enable_log_correlation config to true.
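For those older agent versions, the flag can be set through the same configuration sources as other agent options, for example (a minimal sketch):

# elasticapm.properties (only needed for agent versions before 1.30.0)
enable_log_correlation=true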

Step 2: Configure Filebeat

  1. Follow the Filebeat quick start
  2. Add the following configuration to your filebeat.yml file.

For Filebeat 7.16+

filebeat.yml.

filebeat.inputs:
- type: filestream
  paths: /path/to/logs.json
  parsers:
    - ndjson:
        overwrite_keys: true
        add_error_key: true
        expand_keys: true

processors: 
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

  • filestream: use the filestream input to read lines from active log files.
  • overwrite_keys: values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
  • add_error_key: Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
  • expand_keys: Filebeat recursively de-dots keys in the decoded JSON and expands them into a hierarchical object structure.
  • processors: these enhance your data. See processors to learn more.

For Filebeat < 7.16

filebeat.yml.

filebeat.inputs:
- type: log
  paths: /path/to/logs.json
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  json.expand_keys: true

processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
- add_docker_metadata: ~
- add_kubernetes_metadata: ~

For more information, see the Filebeat reference.

When stackTraceAsArray is enabled

Filebeat can normally decode JSON only if there is one JSON object per line. When stackTraceAsArray is enabled, each stack trace element is written on its own line, which improves readability but produces multi-line JSON documents. By combining Filebeat's multiline settings with the decode_json_fields processor, you can handle this multi-line JSON as well.
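For illustration, a log event with stackTraceAsArray enabled might look roughly like this (an abbreviated sketch, not verbatim encoder output):

{"@timestamp":"2024-01-01T12:00:00.000Z","log.level":"ERROR","message":"boom","error.stack_trace":[
    "java.lang.RuntimeException: boom",
    "\tat com.example.App.main(App.java:42)"
]}

The following configuration stitches the lines of each event back together and then re-flattens the array into a single string: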

filebeat.inputs:
  - type: log
    paths: /path/to/logs.json
    multiline.pattern: '^{'
    multiline.negate: true
    multiline.match: after
processors:
  - decode_json_fields:
      fields: message
      target: ""
      overwrite_keys: true
  # flattens the array to a single string
  - script:
      when:
        has_fields: ['error.stack_trace']
      lang: javascript
      id: my_filter
      source: >
        function process(event) {
            event.Put("error.stack_trace", event.Get("error.stack_trace").join("\n"));
        }