Stitching Together Multiple Input and Output Plugins

The information you need to manage often comes from several disparate sources, and use cases can require multiple destinations for your data. Your Logstash pipeline can use multiple input and output plugins to handle these requirements.

In this section, you create a Logstash pipeline that takes input from a Twitter feed and the Filebeat client, then sends the information to an Elasticsearch cluster as well as writing the information directly to a file.
Reading from a Twitter Feed

To add a Twitter feed, you use the twitter input plugin. To configure the plugin, you need several pieces of information:
- A consumer key, which uniquely identifies your Twitter app.
- A consumer secret, which serves as the password for your Twitter app.
- One or more keywords to search in the incoming feed. The example shows using "cloud" as a keyword, but you can use whatever you want.
- An OAuth token, which identifies the Twitter account using this app.
- An OAuth token secret, which serves as the password of the Twitter account.
Visit https://dev.twitter.com/apps to set up a Twitter account and generate your consumer key and secret, as well as your access token and secret. See the docs for the twitter input plugin if you're not sure how to generate these keys.
Like you did earlier when you worked on Parsing Logs with Logstash, create a config file (called second-pipeline.conf) that contains the skeleton of a configuration pipeline. If you want, you can reuse the file you created earlier, but make sure you pass in the correct config file name when you run Logstash.
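If you are creating the file from scratch, the skeleton is simply the empty input, filter, and output sections from the earlier tutorial, roughly as sketched below (the filter section is optional and stays commented out here):

input {
}
# filter {
#
# }
output {
}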
Add the following lines to the input section of the second-pipeline.conf file, substituting your values for the placeholder values shown here:
twitter {
    consumer_key => "enter_your_consumer_key_here"
    consumer_secret => "enter_your_secret_here"
    keywords => ["cloud"]
    oauth_token => "enter_your_access_token_here"
    oauth_token_secret => "enter_your_access_token_secret_here"
}
Configuring Filebeat to Send Log Lines to Logstash

As you learned earlier in Configuring Filebeat to Send Log Lines to Logstash, the Filebeat client is a lightweight, resource-friendly tool that collects logs from files on the server and forwards these logs to your Logstash instance for processing.
After installing Filebeat, you need to configure it. Open the filebeat.yml file located in your Filebeat installation directory, and replace the contents with the following lines. Make sure paths points to your syslog files:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
  fields:
    type: syslog
output.logstash:
  hosts: ["localhost:5043"]
- paths specifies the absolute path to the file or files that Filebeat processes.
- fields adds a field called type with the value syslog to each event. Filebeat nests custom fields under a fields object, which is why the Elasticsearch query later in this section searches on fields.type.
Save your changes.
To keep the configuration simple, you won't specify TLS/SSL settings as you would in a real-world scenario (a sketch of what those settings might look like follows the beats input configuration below).
Configure your Logstash instance to use the Filebeat input plugin by adding the following lines to the input section of the second-pipeline.conf file:
beats {
    port => "5043"
}
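For reference only, here is a hedged sketch of what enabling TLS on the beats input might look like. The certificate and key paths are placeholders, not part of this tutorial, and on the Filebeat side output.logstash would point at the matching CA certificate via ssl.certificate_authorities:

beats {
    port => "5043"
    # Illustrative TLS settings; the paths below are placeholders
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
    ssl_key => "/etc/pki/tls/private/logstash.key"
}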
Writing Logstash Data to a File

You can configure your Logstash pipeline to write data directly to a file with the file output plugin.
Configure your Logstash instance to use the file output plugin by adding the following lines to the output section of the second-pipeline.conf file:
file {
    path => "/path/to/target/file"
}
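If you want the output file to roll by date, the path setting also accepts Logstash sprintf date references. A minimal sketch, still using a placeholder path:

file {
    # Illustrative variant: the event date becomes part of the file name
    path => "/path/to/target/file-%{+YYYY-MM-dd}.log"
}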
Writing to Multiple Elasticsearch Nodes

Writing to multiple Elasticsearch nodes lightens the resource demands on a given Elasticsearch node, as well as providing redundant points of entry into the cluster when a particular node is unavailable.
To configure your Logstash instance to write to multiple Elasticsearch nodes, edit the output section of the second-pipeline.conf file to read:
output {
    elasticsearch {
        hosts => ["IP Address 1:port1", "IP Address 2:port2", "IP Address 3"]
    }
}
Use the IP addresses of three non-master nodes in your Elasticsearch cluster in the hosts line. When the hosts parameter lists multiple IP addresses, Logstash load-balances requests across the list of addresses. Also note that the default port for Elasticsearch is 9200 and can be omitted in the configuration above.
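For example, with purely illustrative addresses, the hosts list might look like the following; the last entry omits the port and therefore uses the default of 9200:

elasticsearch {
    hosts => ["10.0.0.11:9200", "10.0.0.12:9200", "10.0.0.13"]
}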
Testing the Pipeline

At this point, your second-pipeline.conf file looks like this:
input {
    twitter {
        consumer_key => "enter_your_consumer_key_here"
        consumer_secret => "enter_your_secret_here"
        keywords => ["cloud"]
        oauth_token => "enter_your_access_token_here"
        oauth_token_secret => "enter_your_access_token_secret_here"
    }
    beats {
        port => "5043"
    }
}
output {
    elasticsearch {
        hosts => ["IP Address 1:port1", "IP Address 2:port2", "IP Address 3"]
    }
    file {
        path => "/path/to/target/file"
    }
}
Logstash is consuming data from the Twitter feed you configured, receiving data from Filebeat, and indexing this information to three nodes in an Elasticsearch cluster as well as writing to a file.
At the data source machine, run Filebeat with the following command:
sudo ./filebeat -e -c filebeat.yml -d "publish"
Filebeat will attempt to connect on port 5043. Until Logstash starts with an active Beats plugin, there won’t be any answer on that port, so any messages you see regarding failure to connect on that port are normal for now.
To verify your configuration, run the following command:
bin/logstash -f second-pipeline.conf --config.test_and_exit
The --config.test_and_exit option parses your configuration file and reports any errors. When the configuration file passes the configuration test, start Logstash with the following command:
bin/logstash -f second-pipeline.conf
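If you plan to keep editing second-pipeline.conf, you can optionally add the --config.reload.automatic flag so that Logstash reloads the pipeline when the file changes, instead of requiring a restart:

bin/logstash -f second-pipeline.conf --config.reload.automatic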
Use the grep utility to search in the target file to verify that information is present:
grep syslog /path/to/target/file
Run an Elasticsearch query to find the same information in the Elasticsearch cluster:
curl -XGET 'localhost:9200/logstash-$DATE/_search?pretty&q=fields.type:syslog'
Replace $DATE with the current date, in YYYY.MM.DD format.
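For example, if the data was indexed on September 25, 2017 (an arbitrary date used purely for illustration), the query would be:

curl -XGET 'localhost:9200/logstash-2017.09.25/_search?pretty&q=fields.type:syslog'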
To see data from the Twitter feed, try this query:
curl -XGET 'http://localhost:9200/logstash-$DATE/_search?pretty&q=client:iphone'
Again, remember to replace $DATE with the current date, in YYYY.MM.DD format.