Working with Filebeat Modules

Starting with version 5.3, Filebeat comes packaged with pre-built modules that contain the configurations needed to collect, parse, enrich, and visualize data from various log file formats. Each Filebeat module consists of one or more filesets that contain ingest node pipelines, Elasticsearch templates, Filebeat prospector configurations, and Kibana dashboards.

Filebeat modules are a great way to get started, but you might find that ingest pipelines don’t offer the processing power that you require. If that’s the case, you’ll need to use Logstash.

Graduating to Logstash

You may need to graduate to using Logstash instead of ingest pipelines if you want to:

  • Use multiple outputs. Ingest pipelines were designed to support Elasticsearch as their only output, but you may want to use more than one output. For example, you may want to archive your incoming data to S3 as well as indexing it in Elasticsearch (see the sketch after this list).
  • Use the persistent queue feature to handle spikes when ingesting data (from Beats and other sources).
  • Take advantage of the richer transformation capabilities in Logstash, such as external lookups.
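
As a minimal sketch, a Logstash pipeline that indexes incoming Beats events in Elasticsearch and also archives a copy to S3 might look like the following. The bucket name and region are placeholders, and AWS credentials are assumed to be configured separately for the logstash-output-s3 plugin:

    input {
      beats {
        port => 5044
      }
    }
    output {
      # Index the events in Elasticsearch.
      elasticsearch {
        hosts => ["http://localhost:9200"]
      }
      # Archive a copy of each event to S3. The bucket and region are
      # placeholders; credentials must be configured separately.
      s3 {
        bucket => "my-log-archive"
        region => "us-east-1"
      }
    }

To enable the persistent queue, set queue.type: persistent in logstash.yml.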

Currently, we don’t provide an automatic migration path from ingest pipelines to Logstash pipelines (but that’s coming). For now, you can follow the steps in this section to configure Filebeat and build Logstash pipeline configurations that are equivalent to the ingest node pipelines available with the Filebeat modules. Then you’ll be able to use the same dashboards available with Filebeat to visualize your data in Kibana.

Follow the steps in this section to build and run Logstash configurations that provide capabilities similar to Filebeat modules.

  1. Load the Filebeat index pattern and sample Kibana dashboards. To do this, you need to run Filebeat with the Elasticsearch output enabled and specify the -setup flag.

    For example, to load the sample dashboards for Nginx, run:

    ./filebeat -e -modules=nginx -setup -E 'output.elasticsearch.hosts=["http://localhost:9200"]'

    A connection to Elasticsearch is required for this one-time setup step because Filebeat needs to create the index pattern and load the sample dashboards into the Kibana index.

    After the template and dashboards are loaded, you’ll see the message INFO Elasticsearch template with name 'filebeat' loaded. You can shut down Filebeat.

  2. Configure Filebeat to send log lines to Logstash.

    See Configuration Examples for detailed examples.
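
    As a minimal sketch, the Filebeat configuration needs to enable the module and point the Logstash output at your Logstash host. The host and port shown are common defaults, not values mandated by this guide:

    filebeat.modules:
    - module: nginx
    output.logstash:
      hosts: ["localhost:5044"]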

  3. Create a Logstash pipeline configuration that reads from the Beats input and parses the log events.

    See Configuration Examples for detailed examples.
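
    As a rough sketch, a pipeline for the Nginx access logs from the earlier example might combine the beats input with a grok filter and an Elasticsearch output. The grok pattern below is illustrative; it is not the exact parsing performed by the module's ingest pipeline:

    input {
      beats {
        port => 5044
      }
    }
    filter {
      # COMBINEDAPACHELOG also matches the default Nginx access log
      # format; this pattern is illustrative only.
      grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
    }
    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
        # The template was already loaded by the -setup step, so don't
        # manage it from Logstash; index into the filebeat-* pattern.
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      }
    }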

  4. Start Filebeat. For example, to start Filebeat in the foreground, use:

    sudo ./filebeat -e -c filebeat.yml -d "publish"

    Depending on how you’ve installed Filebeat, you might see errors related to file ownership or permissions when you try to run Filebeat modules. If so, see Config File Ownership and Permissions in the Beats Platform Reference.

    See Starting Filebeat in the Filebeat documentation for more info.

  5. Start Logstash, passing in the pipeline configuration file that parses the log. For example:

    bin/logstash -f mypipeline.conf

    You’ll see messages like the following when Logstash is running and listening for input from Beats:

    [2017-03-17T16:31:40,319][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"127.0.0.1:5044"}
    [2017-03-17T16:31:40,350][INFO ][logstash.pipeline        ] Pipeline main started
  6. To visualize the data in Kibana, launch the Kibana web interface by pointing your browser to port 5601. For example, http://127.0.0.1:5601.