ECS Logging with Winston
This Node.js package provides a formatter for the winston logger, compatible with Elastic Common Schema (ECS) logging. In combination with the Filebeat shipper, you can monitor all your logs in one place in the Elastic Stack.
Setup
Step 1: Install
$ npm install @elastic/ecs-winston-format
Step 2: Configure
const winston = require('winston')
const ecsFormat = require('@elastic/ecs-winston-format')

const logger = winston.createLogger({
  format: ecsFormat(),
  transports: [
    new winston.transports.Console()
  ]
})

logger.info('hi')
logger.error('oops there is a problem', { err: new Error('boom') })
Step 3: Configure Filebeat
The best way to collect the logs once they are ECS-formatted is with Filebeat:

Log file

- Follow the Filebeat quick start.
- Add the following configuration to your filebeat.yaml file (a winston file-transport sketch follows this section):

filebeat.inputs:
- type: log
  paths: /path/to/logs.json
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  json.expand_keys: true
Kubernetes

- Make sure your application logs to stdout/stderr.
- Follow the Run Filebeat on Kubernetes guide.
- Enable hints-based autodiscover (uncomment the corresponding section in filebeat-kubernetes.yaml).
- Add these annotations to your pods that log using ECS loggers. This will make sure the logs are parsed appropriately:

annotations:
  co.elastic.logs/json.keys_under_root: true
  co.elastic.logs/json.overwrite_keys: true
  co.elastic.logs/json.add_error_key: true
  co.elastic.logs/json.expand_keys: true
Docker

- Make sure your application logs to stdout/stderr.
- Follow the Run Filebeat on Docker guide.
- Enable hints-based autodiscover.
- Add these labels to your containers that log using ECS loggers (for example, in your docker-compose.yml). This will make sure the logs are parsed appropriately:

labels:
  co.elastic.logs/json.keys_under_root: true
  co.elastic.logs/json.overwrite_keys: true
  co.elastic.logs/json.add_error_key: true
  co.elastic.logs/json.expand_keys: true
For more information, see the Filebeat reference.
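For the log-file scenario above, your application has to write its ECS-formatted logs to the file that Filebeat tails. A minimal sketch using winston's built-in file transport (the path mirrors the Filebeat input above and is a placeholder):

const winston = require('winston')
const ecsFormat = require('@elastic/ecs-winston-format')

const logger = winston.createLogger({
  format: ecsFormat(),
  transports: [
    // Write ECS JSON lines to the file configured as the Filebeat log input.
    // '/path/to/logs.json' is a placeholder; use a real, writable path.
    new winston.transports.File({ filename: '/path/to/logs.json' })
  ]
})

logger.info('written to a file for Filebeat to ship')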
Usage
const winston = require('winston')
const ecsFormat = require('@elastic/ecs-winston-format')

const logger = winston.createLogger({
  level: 'info',
  format: ecsFormat(),
  transports: [
    new winston.transports.Console()
  ]
})

logger.info('hi')
logger.error('oops there is a problem', { foo: 'bar' })
Running this script (see examples/basic.js) will produce log output similar to the following:
% node examples/basic.js
{"@timestamp":"2021-01-13T21:32:38.095Z","log.level":"info","message":"hi","ecs":{"version":"1.6.0"}}
{"@timestamp":"2021-01-13T21:32:38.096Z","log.level":"error","message":"oops there is a problem","ecs":{"version":"1.6.0"},"foo":"bar"}
The formatter handles serialization to JSON, so you don’t need to add the json formatter. It also generates a timestamp automatically, so you don’t need to add the timestamp formatter.
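For instance, a configuration as minimal as the following already emits complete ECS JSON lines (a sketch restating the point above):

const winston = require('winston')
const ecsFormat = require('@elastic/ecs-winston-format')

const logger = winston.createLogger({
  // ecsFormat() alone is enough: no winston.format.json() or
  // winston.format.timestamp() needs to be combined with it.
  format: ecsFormat(),
  transports: [new winston.transports.Console()]
})

logger.info('complete ECS JSON output from ecsFormat() alone')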
Error logging
By default, the formatter will convert an err meta field that is an Error instance to ECS Error fields.
For example:
const winston = require('winston')
const ecsFormat = require('@elastic/ecs-winston-format')

const logger = winston.createLogger({
  format: ecsFormat(),
  transports: [
    new winston.transports.Console()
  ]
})

const myErr = new Error('boom')
logger.info('oops', { err: myErr })
will yield (pretty-printed for readability):
% node examples/error.js | jq .
{
  "@timestamp": "2021-01-26T17:25:07.983Z",
  "log.level": "info",
  "message": "oops",
  "ecs": {
    "version": "1.6.0"
  },
  "error": {
    "type": "Error",
    "message": "boom",
    "stack_trace": "Error: boom\n    at Object.<anonymous> (..."
  }
}
Special handling of the err meta field can be disabled via the convertErr: false option:
...
const logger = winston.createLogger({
  format: ecsFormat({ convertErr: false }),
...
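Filled out, a complete version of that configuration might look like the following sketch (the boilerplate matches the earlier examples; with convertErr: false the err meta field is simply no longer mapped to the ECS error.* fields):

const winston = require('winston')
const ecsFormat = require('@elastic/ecs-winston-format')

const logger = winston.createLogger({
  // Disable the special conversion of the `err` meta field.
  format: ecsFormat({ convertErr: false }),
  transports: [
    new winston.transports.Console()
  ]
})

// `err` is serialized like any other meta field rather than as ECS error fields.
logger.info('oops', { err: new Error('boom') })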
HTTP Request and Response Logging
With the convertReqRes: true option, the formatter will automatically convert Node.js core request and response objects when passed as the req and res meta fields, respectively.
const http = require('http')
const winston = require('winston')
const ecsFormat = require('@elastic/ecs-winston-format')

const logger = winston.createLogger({
  level: 'info',
  format: ecsFormat({ convertReqRes: true }),
  transports: [
    new winston.transports.Console()
  ]
})

const server = http.createServer(handler)
server.listen(3000, () => {
  logger.info('listening at http://localhost:3000')
})

function handler (req, res) {
  res.setHeader('Foo', 'Bar')
  res.end('ok')
  logger.info('handled request', { req, res })
}
This will produce logs with request and response info using ECS HTTP fields. For example:
% node examples/http.js | jq .    # using jq for pretty printing
...                               # run 'curl http://localhost:3000/'
{
  "@timestamp": "2021-01-13T22:00:07.442Z",
  "log.level": "info",
  "message": "handled request",
  "ecs": {
    "version": "1.6.0"
  },
  "http": {
    "version": "1.1",
    "request": {
      "method": "GET",
      "headers": {
        "host": "localhost:3000",
        "accept": "*/*"
      }
    },
    "response": {
      "status_code": 200,
      "headers": {
        "foo": "Bar"
      }
    }
  },
  "url": {
    "path": "/",
    "full": "http://localhost:3000/"
  },
  "user_agent": {
    "original": "curl/7.64.1"
  }
}
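The same pattern can be used from a web framework whose request and response objects extend the Node.js core ones. A sketch with Express (an assumption here: express is installed separately, and this is not one of the package's own examples):

const express = require('express')
const winston = require('winston')
const ecsFormat = require('@elastic/ecs-winston-format')

const logger = winston.createLogger({
  format: ecsFormat({ convertReqRes: true }),
  transports: [new winston.transports.Console()]
})

const app = express()

app.get('/', (req, res) => {
  res.send('ok')
  // Express's req/res extend http.IncomingMessage/http.ServerResponse,
  // so they can be passed as the `req` and `res` meta fields.
  logger.info('handled request', { req, res })
})

app.listen(3000, () => {
  logger.info('listening at http://localhost:3000')
})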
Integration with APM Tracing
This ECS log formatter integrates with Elastic APM. If your Node app is using the Node.js Elastic APM Agent, then fields are added to log records that identify an active trace and the configured service name ("service.name" and "event.dataset"). These fields allow cross linking between traces and logs in Kibana and support log anomaly detection.
For example, running examples/http-with-elastic-apm.js and curl -i localhost:3000/ results in a log record with the following:
% node examples/http-with-elastic-apm.js | jq .
...
  "event": {
    "dataset": "http-with-elastic-apm.log"
  },
  "trace": {
    "id": "74631535a02bbe6a07c298b28c7443f4"
  },
  "transaction": {
    "id": "505400b77aba4d9a"
  },
  "service": {
    "name": "http-with-elastic-apm"
  }
...
These IDs match trace data reported by the APM agent.
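To get these fields, the APM agent has to be started in the same process before the app's other modules are loaded. A minimal sketch of such a setup (this is not the examples/ script itself; the serviceName and serverUrl values are placeholders):

// Start the Elastic APM agent first so it can instrument http and other modules.
require('elastic-apm-node').start({
  serviceName: 'http-with-elastic-apm',  // placeholder service name
  serverUrl: 'http://localhost:8200'     // placeholder APM Server URL
})

const http = require('http')
const winston = require('winston')
const ecsFormat = require('@elastic/ecs-winston-format')

const logger = winston.createLogger({
  format: ecsFormat(),
  transports: [new winston.transports.Console()]
})

http.createServer((req, res) => {
  res.end('ok')
  // Logged while an APM transaction is active, so the record gets
  // trace.id/transaction.id plus service.name and event.dataset.
  logger.info('handled request')
}).listen(3000, () => {
  logger.info('listening at http://localhost:3000')
})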