Get started

Step 1: Install

Add the package to your go.mod file:

require go.elastic.co/ecszap master
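Alternatively, you can let the Go toolchain resolve and record a released version for you; the exact version written to go.mod depends on the latest tag at the time:

go get go.elastic.co/ecszap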

Step 2: Configure

Set up a default logger. For example:

encoderConfig := ecszap.NewDefaultEncoderConfig()
core := ecszap.NewCore(encoderConfig, os.Stdout, zap.DebugLevel)
logger := zap.New(core, zap.AddCaller())
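For reference, a self-contained version of this default setup might look like the following sketch; it assumes nothing beyond the ecszap and zap packages and writes JSON logs to stdout:

package main

import (
    "os"

    "go.elastic.co/ecszap"
    "go.uber.org/zap"
)

func main() {
    // Build an ECS-compatible JSON core that writes to stdout at debug level.
    encoderConfig := ecszap.NewDefaultEncoderConfig()
    core := ecszap.NewCore(encoderConfig, os.Stdout, zap.DebugLevel)

    // zap.AddCaller annotates each entry with the caller's file and line.
    logger := zap.New(core, zap.AddCaller())
    defer logger.Sync()

    logger.Info("hello ecs world")
}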

You can customize your ECS logger. For example:

encoderConfig := ecszap.EncoderConfig{
  EncodeName: customNameEncoder,
  EncodeLevel: zapcore.CapitalLevelEncoder,
  EncodeDuration: zapcore.MillisDurationEncoder,
  EncodeCaller: ecszap.FullCallerEncoder,
}
core := ecszap.NewCore(encoderConfig, os.Stdout, zap.DebugLevel)
logger := zap.New(core, zap.AddCaller())
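Note that customNameEncoder above is not part of ecszap or zap; it stands for any function you provide that satisfies zapcore.NameEncoder. A minimal, hypothetical sketch (requires the strings and go.uber.org/zap/zapcore imports):

// customNameEncoder is a hypothetical zapcore.NameEncoder that writes
// the logger name in upper case.
func customNameEncoder(loggerName string, enc zapcore.PrimitiveArrayEncoder) {
    enc.AppendString(strings.ToUpper(loggerName))
}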

Examples

Use structured logging

// Add fields and a logger name
logger = logger.With(zap.String("custom", "foo"))
logger = logger.Named("mylogger")

// Use strongly typed Field values
logger.Info("some logging info",
    zap.Int("count", 17),
    zap.Error(errors.New("boom")))

The example above produces the following log output:

{
  "log.level": "info",
  "@timestamp": "2020-09-13T10:48:03.000Z",
  "log.logger": "mylogger",
  "log.origin": {
    "file.name": "main/main.go",
    "file.line": 265
  },
  "message": "some logging info",
  "ecs.version": "1.6.0",
  "custom": "foo",
  "count": 17,
  "error": {
    "message":"boom"
  }
}

Log errors

err := errors.New("boom")
logger.Error("some error", zap.Error(pkgerrors.Wrap(err, "crash")))

The example above produces the following log output:

{
  "log.level": "error",
  "@timestamp": "2020-09-13T10:48:03.000Z",
  "log.logger": "mylogger",
  "log.origin": {
    "file.name": "main/main.go",
    "file.line": 290
  },
  "message": "some error",
  "ecs.version": "1.6.0",
  "custom": "foo",
  "error": {
    "message": "crash: boom",
    "stack_trace": "\nexample.example\n\t/Users/xyz/example/example.go:50\nruntime.example\n\t/Users/xyz/.gvm/versions/go1.13.8.darwin.amd64/src/runtime/proc.go:203\nruntime.goexit\n\t/Users/xyz/.gvm/versions/go1.13.8.darwin.amd64/src/runtime/asm_amd64.s:1357"
  }
}
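The pkgerrors alias in the snippet refers to the github.com/pkg/errors package; wrapping the error with it is what attaches the stack trace rendered into error.stack_trace above. The assumed imports, for reference:

import (
    "errors"

    pkgerrors "github.com/pkg/errors"
)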

Use sugar logger

sugar := logger.Sugar()
sugar.Infow("some logging info",
    "foo", "bar",
    "count", 17,
)

The example above produces the following log output:

{
  "log.level": "info",
  "@timestamp": "2020-09-13T10:48:03.000Z",
  "log.logger": "mylogger",
  "log.origin": {
    "file.name": "main/main.go",
    "file.line": 311
  },
  "message": "some logging info",
  "ecs.version": "1.6.0",
  "custom": "foo",
  "foo": "bar",
  "count": 17
}

Wrap a custom underlying zapcore.Core

encoderConfig := ecszap.NewDefaultEncoderConfig()
encoder := zapcore.NewJSONEncoder(encoderConfig.ToZapCoreEncoderConfig())
syslogCore := newSyslogCore(encoder, level) // construct your own underlying core
core := ecszap.WrapCore(syslogCore)
logger := zap.New(core, zap.AddCaller())
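Here newSyslogCore is a placeholder for whatever constructor builds your underlying core. A hypothetical sketch, assuming a Unix syslog target via the standard library's log/syslog package (the "myapp" tag is made up):

// newSyslogCore is a hypothetical constructor that wraps a syslog writer
// in a standard zapcore.Core using the given encoder and level.
func newSyslogCore(encoder zapcore.Encoder, level zapcore.LevelEnabler) zapcore.Core {
    writer, err := syslog.New(syslog.LOG_INFO, "myapp")
    if err != nil {
        panic(err) // handle the error appropriately in real code
    }
    return zapcore.NewCore(encoder, zapcore.AddSync(writer), level)
}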

Transition from existing configurations

Depending on your needs, there are different ways to create the logger. For example, you can wrap an existing core so its output becomes ECS-compatible:

encoderConfig := ecszap.ECSCompatibleEncoderConfig(zap.NewDevelopmentEncoderConfig())
encoder := zapcore.NewJSONEncoder(encoderConfig)
core := zapcore.NewCore(encoder, os.Stdout, zap.DebugLevel)
logger := zap.New(ecszap.WrapCore(core), zap.AddCaller())

Or start from an existing zap.Config and build the logger with the ECS wrapping option:

config := zap.NewProductionConfig()
config.EncoderConfig = ecszap.ECSCompatibleEncoderConfig(config.EncoderConfig)
logger, err := config.Build(ecszap.WrapCoreOption(), zap.AddCaller())
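As a self-contained sketch, the config-based variant with imports and error handling shown could look like this (assumes only the zap and ecszap packages):

package main

import (
    "go.elastic.co/ecszap"
    "go.uber.org/zap"
)

func main() {
    config := zap.NewProductionConfig()
    // Swap in an ECS-compatible encoder configuration.
    config.EncoderConfig = ecszap.ECSCompatibleEncoderConfig(config.EncoderConfig)

    // WrapCoreOption wraps the core built from the config so entries
    // are rendered as ECS-conformant JSON.
    logger, err := config.Build(ecszap.WrapCoreOption(), zap.AddCaller())
    if err != nil {
        panic(err)
    }
    defer logger.Sync()

    logger.Info("transitioned to ECS logging")
}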

Step 3: Configure Filebeat

  1. Follow the Filebeat quick start.
  2. Add the following configuration to your filebeat.yaml file.

For Filebeat 7.16+

filebeat.yaml.

filebeat.inputs:
- type: filestream
  paths: /path/to/logs.json
  parsers:
    - ndjson:
        overwrite_keys: true
        add_error_key: true
        expand_keys: true

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

Use the filestream input to read lines from active log files.

overwrite_keys: values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.

add_error_key: Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.

expand_keys: Filebeat recursively de-dots keys in the decoded JSON and expands them into a hierarchical object structure.

Processors enhance your data. See the Filebeat processors documentation to learn more.

For Filebeat < 7.16

filebeat.yaml.

filebeat.inputs:
- type: log
  paths: /path/to/logs.json
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  json.expand_keys: true

processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
- add_docker_metadata: ~
- add_kubernetes_metadata: ~

For more information, see the Filebeat reference.