Example: Set up Filebeat modules to work with Kafka and Logstash

This section shows how to set up Filebeat modules to work with Logstash when you are using Kafka in between Filebeat and Logstash in your publishing pipeline. The main goal of this example is to show how to load ingest pipelines from Filebeat and use them with Logstash.

The examples in this section keep the configurations simple and hard-code the topic names. For a full list of configuration options, see the documentation for the Kafka input plugin. Also see Configure the Kafka output in the Filebeat Reference.

Set up and run Filebeat

  1. If you haven’t already set up the Filebeat index template and sample Kibana dashboards, run the Filebeat setup command to do that now:

    filebeat -e setup

    The -e flag is optional and sends output to standard error instead of syslog.

    A connection to Elasticsearch and Kibana is required for this one-time setup step because Filebeat needs to create the index template in Elasticsearch and load the sample dashboards into Kibana. For more information about configuring the connection to Elasticsearch, see the Filebeat modules quick start.

    After the template and dashboards are loaded, you’ll see the message INFO Kibana dashboards successfully loaded. Loaded dashboards.
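
    If Elasticsearch and Kibana are not running on localhost, point Filebeat at them in the filebeat.yml config file before running setup. A minimal sketch, assuming the hostnames myEShost and myKibanahost (placeholders for your own hosts):

    output.elasticsearch:
      # Elasticsearch connection used by the setup command (placeholder host and credentials)
      hosts: ["https://myEShost:9200"]
      username: "elastic"
      password: "secret"
    setup.kibana:
      # Kibana endpoint used to load the sample dashboards (placeholder host)
      host: "myKibanahost:5601"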

  2. Run the modules enable command to enable the modules that you want to run. For example:

    filebeat modules enable system

    You can further configure the module by editing the config file under the Filebeat modules.d directory. For example, if the log files are not in the location expected by the module, you can set the var.paths option.
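
    For example, a modules.d/system.yml that points the syslog and auth filesets at non-default locations might look like this (the paths are placeholders for your environment):

    - module: system
      syslog:
        enabled: true
        # Override the default syslog location (placeholder path)
        var.paths: ["/var/log/syslog*"]
      auth:
        enabled: true
        # Override the default auth log location (placeholder path)
        var.paths: ["/var/log/auth.log*"]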

  3. Run the setup command with the --pipelines and --modules options specified to load the ingest pipelines for the modules you’ve enabled. This step also requires a connection to Elasticsearch. If you want to use a Logstash pipeline instead of ingest node to parse the data, skip this step.

    filebeat setup --pipelines --modules system
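
    To load the ingest pipelines for several modules at once, pass a comma-separated list to --modules. For example, assuming you have also enabled the nginx module:

    filebeat setup --pipelines --modules system,nginx
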
  4. Configure Filebeat to send log lines to Kafka. To do this, in the filebeat.yml config file, disable the Elasticsearch output by commenting it out, and enable the Kafka output. For example:

    #output.elasticsearch:
      #hosts: ["localhost:9200"]
    output.kafka:
      hosts: ["kafka:9092"]
      topic: "filebeat"
      codec.json:
        pretty: false
  5. Start Filebeat. For example:

    filebeat -e

    Filebeat will attempt to send messages to Kafka and continue until Kafka is available to receive them.

    Depending on how you’ve installed Filebeat, you might see errors related to file ownership or permissions when you try to run Filebeat modules. If so, see Config File Ownership and Permissions in the Beats Platform Reference.
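
    Before moving on, you can confirm that events are reaching Kafka by reading from the topic with the console consumer that ships with Kafka. A quick check, assuming a broker is reachable at localhost:9092 and the topic name from the example above:

    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic filebeat --from-beginning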

Create and start the Logstash pipeline

  1. On the system where Logstash is installed, create a Logstash pipeline configuration that reads from a Kafka input and sends events to an Elasticsearch output:

    input {
      kafka {
        bootstrap_servers => "myhost:9092"
        topics => ["filebeat"]
        codec => json
      }
    }
    
    output {
      if [@metadata][pipeline] {
        elasticsearch {
          hosts => "https://myEShost:9200"
          manage_template => false
          index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
          pipeline => "%{[@metadata][pipeline]}" 
          user => "elastic"
          password => "secret"
        }
      } else {
        elasticsearch {
          hosts => "https://myEShost:9200"
          manage_template => false
          index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
          user => "elastic"
          password => "secret"
        }
      }
    }

    Set the pipeline option to %{[@metadata][pipeline]}. This setting configures Logstash to select the correct ingest pipeline based on metadata passed in the event.
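
    For example, after the json codec decodes an event from Kafka, its @metadata fields might look something like this (shown in rubydebug style; the exact version and pipeline name depend on your Filebeat version and the fileset that produced the event, so the values below are illustrative only):

    "@metadata" => {
        "beat" => "filebeat",
        "version" => "8.17.0",
        "pipeline" => "filebeat-8.17.0-system-syslog-pipeline"
    }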

    If you want to use a Logstash pipeline instead of ingest node to parse the data, see the filter and output settings in the examples under Use Logstash pipelines for parsing; a minimal sketch of the approach follows.
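
    A minimal sketch of that approach, using a grok filter to pull the basic syslog fields out of the message (this illustrative pattern is far less complete than the parsing the Filebeat module pipelines provide):

    filter {
      grok {
        # Parse timestamp, hostname, process name, optional PID, and the message body
        match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:process}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:log_message}" }
      }
    }

    Events parsed this way carry no [@metadata][pipeline] field, so they flow through the else branch of the output shown above.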

  2. Start Logstash, passing in the pipeline configuration file you just defined. For example:

    bin/logstash -f mypipeline.conf

    Logstash should start a pipeline and begin receiving events from the Kafka input.
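
    To check the pipeline configuration for syntax errors without starting Logstash, you can add the --config.test_and_exit flag:

    bin/logstash -f mypipeline.conf --config.test_and_exit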

Visualize the data

To visualize the data in Kibana, launch the Kibana web interface by pointing your browser to port 5601. For example, http://127.0.0.1:5601. Click Dashboards, then view the Filebeat dashboards.