Filebeat quick start: installation and configuration

This guide describes how to get started quickly with log collection. You’ll learn how to:

  • install Filebeat on each system you want to monitor
  • specify the location of your log files
  • parse log data into fields and send it to Elasticsearch
  • visualize the log data in Kibana

[Image: Filebeat System dashboard]

Before you begin

You need Elasticsearch for storing and searching your data, and Kibana for visualizing and managing it.

To get started quickly, spin up a deployment of our hosted Elasticsearch Service. The Elasticsearch Service is available on AWS, GCP, and Azure. Try it out for free.

Step 1: Install Filebeat

Install Filebeat on all the servers you want to monitor.

To download and install Filebeat, use the commands that work with your system. For example, on Debian-based systems:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.14.2-amd64.deb
sudo dpkg -i filebeat-7.14.2-amd64.deb
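
On RPM-based systems, the equivalent commands are:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.14.2-x86_64.rpm
sudo rpm -vi filebeat-7.14.2-x86_64.rpm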

Other installation options

For other platforms and package formats (for example, macOS, Windows, or Docker), see the Filebeat installation documentation.

Step 2: Connect to the Elastic Stack

Connections to Elasticsearch and Kibana are required to set up Filebeat.

Set the connection information in filebeat.yml. To locate this configuration file, see Directory layout.

Specify the cloud.id of your Elasticsearch Service, and set cloud.auth to a user who is authorized to set up Filebeat. For example:

cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw=="
cloud.auth: "filebeat_setup:YOUR_PASSWORD" 

This example shows a hard-coded password, but you should store sensitive values in the secrets keystore.
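
For example, a minimal sketch of using the keystore (the key name ES_PWD is an arbitrary choice):

filebeat keystore create
filebeat keystore add ES_PWD

You can then reference the stored value in filebeat.yml instead of the plain-text password:

cloud.auth: "filebeat_setup:${ES_PWD}"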

To learn more about required roles and privileges, see Grant users access to secured resources.

You can send data to other outputs, such as Logstash, but that requires additional configuration and setup.
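
For example, a minimal sketch of a Logstash output in filebeat.yml (this assumes Logstash is listening on the default Beats port, 5044; only one output may be enabled at a time):

output.logstash:
  hosts: ["localhost:5044"]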

Step 3: Collect log data

There are several ways to collect log data with Filebeat:

  • Data collection modules — simplify the collection, parsing, and visualization of common log formats
  • ECS loggers — structure and format application logs into ECS-compatible JSON
  • Manual Filebeat configuration

Enable and configure data collection modules

  1. Identify the modules you need to enable. To see a list of available modules, run:

    filebeat modules list
  2. From the installation directory, enable one or more modules. For example, the following command enables the system, nginx, and mysql module configs:

    filebeat modules enable system nginx mysql
  3. In the module configs under modules.d, change the module settings to match your environment.

    For example, log locations are set based on the OS. If your logs aren’t in default locations, set the paths variable:

    - module: nginx
      access:
        var.paths: ["/var/log/nginx/access.log*"] 

To see the full list of variables for a module, see the documentation under Modules.
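
You can also override module settings on the command line when running Filebeat in the foreground. A sketch, assuming the nginx module from the previous example:

./filebeat -e -M "nginx.access.var.paths=[/var/log/nginx/access.log*]"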

To test your configuration file, change to the directory where the Filebeat binary is installed, and run Filebeat in the foreground with the following command:

./filebeat test config -e

Make sure your config files are in the path expected by Filebeat (see Directory layout), or use the -c flag to specify the path to the config file.
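
Similarly, to verify that Filebeat can connect to the configured output:

./filebeat test output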

For more information about configuring Filebeat, also see Configure Filebeat, Config file format, and the filebeat.reference.yml reference configuration file.

Enable and configure ECS loggers for application log collection

While Filebeat can be used to ingest raw, plain-text application logs, we recommend structuring your logs at ingest time. This lets you extract fields, like log level and exception stack traces.

Elastic simplifies this process by providing application log formatters in a variety of popular programming languages. These plugins format your logs into ECS-compatible JSON, which removes the need to manually parse logs.

See ECS loggers to get started.
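
Once your application writes ECS-compatible JSON, a sketch of a Filebeat input that decodes it (the path is an assumption; point it at wherever your application writes its logs):

filebeat.inputs:
- type: filestream
  paths:
    - /var/log/my-app/*.json
  parsers:
    - ndjson:
        overwrite_keys: true
        add_error_key: true
        expand_keys: true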

Configure Filebeat manually

If you’re unable to find a module for your file type, or can’t change your application’s log output, see configure the input manually.
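
A minimal sketch of a manually configured input in filebeat.yml (the path is an example; point it at your own log files):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log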

Step 4: Set up assets

Filebeat comes with predefined assets for parsing, indexing, and visualizing your data. To load these assets:

  1. Make sure the user specified in filebeat.yml is authorized to set up Filebeat.
  2. From the installation directory, run:

    filebeat setup -e

    -e is optional and sends output to standard error instead of the configured log output.

This step loads the recommended index template for writing to Elasticsearch and deploys the sample dashboards for visualizing the data in Kibana.

This step does not load the ingest pipelines used to parse log lines. By default, ingest pipelines are set up automatically the first time you run the module and connect to Elasticsearch.
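
If you prefer to load the ingest pipelines up front instead, a sketch assuming the system and nginx modules are enabled:

filebeat setup --pipelines --modules system,nginx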

A connection to Elasticsearch (or Elasticsearch Service) is required to set up the initial environment. If you’re using a different output, such as Logstash, see the documentation for loading the index template, Kibana dashboards, and ingest pipelines manually.

Step 5: Start Filebeat

Before starting Filebeat, modify the user credentials in filebeat.yml and specify a user who is authorized to publish events.
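
For example, a sketch that swaps in a publishing user for the setup user (filebeat_writer is a hypothetical user name; the keystore reference assumes the key created earlier):

cloud.auth: "filebeat_writer:${ES_PWD}"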

To start Filebeat, run:

sudo service filebeat start

If you use an init.d script to start Filebeat, you can’t specify command line flags (see Command reference). To specify flags, start Filebeat in the foreground.

Also see Filebeat and systemd.
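
On systems that use systemd, the equivalent is:

sudo systemctl start filebeat
sudo systemctl enable filebeat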

Filebeat should begin streaming events to Elasticsearch.

Step 6: View your data in Kibana

Filebeat comes with pre-built Kibana dashboards and UIs for visualizing log data. You loaded the dashboards earlier when you ran the setup command.

To open the dashboards:

  1. Launch Kibana:

    1. Log in to your Elastic Cloud account.
    2. Navigate to the Kibana endpoint in your deployment.
  2. In the side navigation, click Discover. To see Filebeat data, make sure the predefined filebeat-* index pattern is selected.

    If you don’t see data in Kibana, try changing the time filter to a larger range. By default, Kibana shows the last 15 minutes.

  3. In the side navigation, click Dashboard, then select the dashboard that you want to open.

The dashboards are provided as examples. We recommend that you customize them to meet your needs.

What’s next?

Now that you have your logs streaming into Elasticsearch, learn how to unify your logs, metrics, uptime, and application performance data.

  1. Ingest data from other sources by installing and configuring other Elastic Beats:

    Elastic Beats   To capture
    Metricbeat      Infrastructure metrics
    Winlogbeat      Windows event logs
    Heartbeat       Uptime information
    APM             Application performance metrics
    Auditbeat       Audit events

  2. Use the Observability apps in Kibana to search across all your data:

    Elastic apps   Use to
    Metrics app    Explore metrics about systems and services across your ecosystem
    Logs app       Tail related log data in real time
    Uptime app     Monitor availability issues across your apps and services
    APM app        Monitor application performance
    SIEM app       Analyze security events