Working with Filebeat Modules

Filebeat comes packaged with pre-built modules that contain the configurations needed to collect, parse, enrich, and visualize data from various log file formats. Each Filebeat module consists of one or more filesets that contain ingest node pipelines, Elasticsearch templates, Filebeat prospector configurations, and Kibana dashboards.

Filebeat modules are a great way to get started, but you might find that ingest pipelines don’t offer the processing power that you require. If that’s the case, you’ll need to use Logstash.

Using Logstash instead of Ingest Node

Logstash provides an ingest pipeline conversion tool to help you migrate ingest pipeline definitions to Logstash configs. However, the tool does not currently support all the processors that are available for ingest node.
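For example, if you have exported an ingest pipeline definition to a JSON file, you can run the converter script that ships with Logstash to produce an equivalent Logstash config. A sketch of the invocation (the file paths here are illustrative, not fixed):

```shell
bin/ingest-convert.sh \
  --input file:///tmp/ingest/apache.json \
  --output file:///tmp/ingest/apache.conf
```

Review the generated config afterwards: processors the tool doesn't support will need to be translated into Logstash filters by hand.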

You can follow the steps in this section to build and run Logstash configurations that parse the data collected by Filebeat modules. Then you’ll be able to use the same dashboards available with Filebeat to visualize your data in Kibana.

Create and start the Logstash pipeline

  1. Create a Logstash pipeline configuration that reads from the Beats input and parses the events.

    See Configuration Examples for detailed examples.

  2. Start Logstash, passing in the pipeline configuration file that parses the log. For example:

    bin/logstash -f mypipeline.conf

    You’ll see the following message when Logstash is running and listening for input from Beats:

    [2017-10-13T00:01:15,413][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"127.0.0.1:5044"}
    [2017-10-13T00:01:15,443][INFO ][logstash.pipeline        ] Pipeline started {"pipeline.id"=>"main"}

The Logstash pipeline is now ready to receive events from Filebeat. Next, you set up and run Filebeat.
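As a starting point for step 1, a minimal pipeline configuration might look like the following. This is a sketch: the filter block is a placeholder for module-specific parsing, and the hosts value and index naming pattern are assumptions you should adapt to your environment.

```conf
input {
  beats {
    # Listen for events published by Filebeat (5044 is the default Beats port)
    port => 5044
  }
}

filter {
  # Module-specific parsing goes here, for example grok filters that
  # replicate the processing the module's ingest pipeline would perform.
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Reuse the index template that Filebeat set up rather than
    # letting Logstash manage its own template
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```

Keeping the Filebeat-style index name is what allows the sample Kibana dashboards, which expect Filebeat index patterns, to work with data routed through Logstash.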

Set up and run Filebeat

  1. If you haven’t already set up the Filebeat index template and sample Kibana dashboards, run the Filebeat setup command to do that now:

    ./filebeat -e setup

    The -e flag is optional and sends output to standard error instead of syslog.

    A connection to Elasticsearch and Kibana is required for this one-time setup step because Filebeat needs to create the index template in Elasticsearch and load the sample dashboards into Kibana.

    After the template and dashboards are loaded, you’ll see the message INFO Kibana dashboards successfully loaded. Loaded dashboards.

  2. Configure Filebeat to send log lines to Logstash. To do this, disable the Elasticsearch output and enable the Logstash output in the filebeat.yml config file. For example:

    #output.elasticsearch:
      #hosts: ["localhost:9200"]
    output.logstash:
      hosts: ["localhost:5044"]
  3. Run the modules enable command to enable the modules that you want to run. For example:

    ./filebeat modules enable nginx

    You can further configure the module by editing the config file under the Filebeat modules.d directory. For example, if the log files are not in the location expected by the module, you can set the var.paths option.

  4. Start Filebeat. For example, to start Filebeat in the foreground, use:

    ./filebeat -e

    Depending on how you’ve installed Filebeat, you might see errors related to file ownership or permissions when you try to run Filebeat modules. If you do, see Config File Ownership and Permissions in the Beats Platform Reference.

    See Starting Filebeat for more info.
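For step 3, overriding the log file locations for the nginx module might look like the following in the modules.d/nginx.yml file. The paths shown are illustrative; point var.paths at wherever your logs actually live:

```yaml
- module: nginx
  access:
    enabled: true
    # Override the default access log location for this system
    var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    # Override the default error log location for this system
    var.paths: ["/var/log/nginx/error.log*"]
```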

Visualize the data

To visualize the data in Kibana, launch the Kibana web interface by pointing your browser to port 5601. For example, http://127.0.0.1:5601.