Monitoring Search Queries

Ever wonder how your users are using your Elasticsearch cluster? Have you felt the need to investigate the queries they send to it?

Using Packetbeat you can keep an eye on what comes and goes and avoid those nasty surprises your users may throw at your cluster.

You can use Elastic's own plugins to monitor your Elastic Stack: Marvel started the road to great monitoring, and it continues to improve in the next generation as part of the X-Pack bundle. Long term, these components will give you the best insight into cluster performance, even behind SSL. Elastic recommends that you use them first and foremost, and that you run a dedicated monitoring Elasticsearch cluster of at least one node for this purpose. That monitoring cluster is also a great place to store additional query detail; if you don't already have one, this gives you another great reason to set it up, because it is imperative that you send the query data to a separate cluster.

Please note that this blog focuses on monitoring search traffic over HTTP only; current versions of Packetbeat do not support inspecting encrypted payloads.

What’s a monitoring cluster?

A monitoring cluster is a cluster dedicated to storing and analyzing the monitoring data from your production Elasticsearch cluster. Keeping your monitoring data on a separate cluster is highly recommended; if things do go wrong in production, you want insight into that data, and you want it somewhere you can still access it (outside the "fire zone").

This separation becomes essential if you are planning to monitor search queries via Packetbeat.

If you have Marvel but have not yet set up a monitoring cluster, this document is a good starting point. Make sure to give your monitoring cluster a face by installing a dedicated Kibana instance for it.

Picture it

(Diagram: Packetbeat on the production Elasticsearch nodes ships search traffic to Logstash, which forwards it to the monitoring cluster and its dedicated Kibana instance.)

Getting ready for the data

  1. Install a Logstash instance to process your monitored packets.

You’ll need this to filter the traffic down to the portions you are interested in (i.e., only search queries).

https://www.elastic.co/guide/en/logstash/current/installing-logstash.html

  2. Configure Logstash. For this example I chose to monitor only search queries.

I created a config file called sniff_search.conf with the content below; it extracts the query_body and the index that was searched into their own fields. You can go as crazy as you wish here extracting the bits that are useful to you.

input {
  beats {
    port => 5044
  }
}
filter {
  # Only process HTTP requests that hit a _search endpoint
  if "search" in [request] {
    # Extract the JSON body of the search from the raw HTTP request
    grok {
      match => { "request" => ".*\n\{(?<query_body>.*)" }
    }
    # Extract the index name from the request path, e.g. /logstash-*/_search
    grok {
      match => { "path" => "\/(?<index>.*)\/_search" }
    }
    # Requests without an index in the path are labelled "All"
    if ![index] {
      mutate {
        add_field => { "index" => "All" }
      }
    }
    # Put back the opening brace consumed by the grok pattern above
    mutate {
      update => { "query_body" => "{%{query_body}" }
    }
  }
}
output {
  # Ship only search requests, skipping bodies that contain ignore_unmapped
  if "search" in [request] and "ignore_unmapped" not in [query_body] {
    elasticsearch {
      hosts => "10.255.4.165:9200"
    }
  }
}
  3. Start Logstash:

On Linux:

./bin/logstash -f sniff_search.conf

https://www.elastic.co/guide/en/beats/libbeat/current/logstash-installation.html#_starting_logstash
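
Optionally, you can verify the pipeline before pointing Packetbeat at it. A couple of sketches, assuming a Logstash 2.x-era install (newer versions use --config.test_and_exit instead of --configtest): check the config syntax, and temporarily add a stdout output to print each parsed event to the console (the example document later in this post is in that rubydebug format).

./bin/logstash -f sniff_search.conf --configtest

output {
  # temporary, for debugging only: print every parsed event to the console
  stdout { codec => rubydebug }
}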

Start sniffing

  1. Install Packetbeat on each host of the production cluster you would like to monitor.
  2. Configure packetbeat.yml on each node:
  • Set the interface to any, or configure a specific network interface; when using a specific interface it has to match the one Elasticsearch is binding to.
  • Please note you can use "device: any" only on Linux. On OS X, to listen on port 9200 specify "device: lo0"; on Windows, refer to the documentation for the correct device to use.
# Select the network interfaces to sniff the data. You can use the "any"
# keyword to sniff on all connected interfaces.
interfaces:
  device: any
  • Configure the ports on which to listen for HTTP traffic; by default Elasticsearch uses port 9200 for HTTP.
http:
  # Configure the ports where to listen for HTTP traffic. You can disable
  # the HTTP protocol by commenting out the list of ports.
  ports: [9200]
  send_request: true
  include_body_for: ["application/json", "x-www-form-urlencoded"]
  • Comment out the port keys for other protocols to disable them.
  • Configure Packetbeat to send the data to the monitoring Logstash instance. Remember to comment out the default Elasticsearch output.
#elasticsearch:
  # Array of hosts to connect to.
  # Scheme and port can be left out and will be set to the default (http and 9200)
  # In case you specify an additional path, the scheme is required: http://localhost:9200/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
  # hosts: ["localhost:9200"]

### Logstash as output
logstash:
  # The Logstash hosts
  hosts: ["10.255.4.166:5044"]
  3. Start Packetbeat. To sniff the packets it must be started as root.

On Linux:

sudo ./packetbeat -e -c packetbeat.yml -d "publish"

https://www.elastic.co/guide/en/beats/packetbeat/current/_step_4_starting_packetbeat.html

  4. After starting, Packetbeat will listen for packets on port 9200 and send them to Logstash, and from there to the monitoring Elasticsearch cluster, where they will be indexed in indexes like logstash-2016.05.24.
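
To confirm that events are arriving, you can query the monitoring cluster directly; this assumes the monitoring Elasticsearch host used in the Logstash output above:

curl 'http://10.255.4.165:9200/logstash-*/_search?size=1&pretty'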

This is an example of what a document will look like:

{
     "bytes_in" => 537,
    "client_ip" => "10.255.5.101",
  "client_port" => 52213,
  "client_proc" => "",
"client_server" => "",
           "ip" => "10.255.4.167",
         "port" => 9200,
         "path" => "/logstash-*/_search",
         "beat" => {
     "hostname" => "ip-10-255-4-167.eu-west-1.compute.internal",
         "name" => "ip-10-255-4-167.eu-west-1.compute.internal"
},
         "proc" => "",
       "server" => "",
       "method" => "POST",
         "type" => "http",
       "status" => "OK",
       "params" => "%7B+%22query%22%3A+%7B%0A%22match%22%3A+%7B%0A+++%22clientip%22%3A+%22105.235.130.196%22%0A%7D%0A%7D%7D%0A=",
         "http" => {
              "code" => 200,
    "content_length" => 7587,
            "phrase" => "OK"
},
    "bytes_out" => 7675,
      "request" => "POST /logstash-*/_search HTTP/1.1\r\nHost: 10.255.4.167:9200\r\nConnection: keep-alive\r\nContent-Length: 62\r\nAccept: application/json, text/javascript, */*; q=0.01\r\nOrigin: chrome-extension://lhjgkmllcaadmopgmanpapmpjgmfcfig\r\nUser-Agent: Mozilla/5.0 (Windows NT 6.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36\r\nContent-Type: application/x-www-form-urlencoded; charset=UTF-8\r\nAccept-Encoding: gzip, deflate\r\nAccept-Language: en-US,en;q=0.8\r\n\r\n{ \"query\": {\n\"match\": {\n   \"clientip\": \"105.235.130.196\"\n}\n}}\n",
   "@timestamp" => "2016-08-05T14:35:36.740Z",
        "query" => "POST /logstash-*/_search",
        "count" => 1,
    "direction" => "in",
 "responsetime" => 28,
     "@version" => "1",
         "host" => "ip-10-255-4-167.eu-west-1.compute.internal",
         "tags" => [
             [0] "beats_input_raw_event"
         ],
   "query_body" => "{ \"query\": {\n\"match\": {\n   \"clientip\": \"105.235.130.196\"\n}\n}}\n",
        "index" => "logstash-*"
}

From the data you can see the IP and port of the client connected to Elasticsearch ("client_ip": "10.255.5.101", "client_port": 52213), the IP and port of the node that served the request ("ip": "10.255.4.167", "port": 9200), and the query sent to Elasticsearch ("query": "POST /logstash-*/_search", ...).

You can visualize this data in Kibana. Connect to the Kibana instance you installed for the monitoring cluster and configure an index pattern of logstash-*.

Below is an example of the kind of visualizations and dashboard you can create:

(Screenshot: an example dashboard built from the monitored search queries.)

The above dashboard is just one example you might find useful; if interested, you can download it here.

You can easily import the visualizations and the dashboard into Kibana through the Settings > Objects tab.


When/why monitor search queries

Maintaining a healthy cluster requires insight into how it is used and into the search queries your users run. Whether you give your users direct search access to the cluster or have an application layer in between, usage patterns can be helpful in planning for your data and resources.

Once you have the data inside your monitoring cluster you can answer questions like these and many more:

  1. What are the most used queries?
  2. What are my most searched indexes?
  3. Do periods of high average search response time correlate with other monitoring information from Marvel?
  4. What are the slowest queries?
  5. Are the slow queries hitting a particular index? Maybe a deeper look into that index’s settings is needed.
  6. How many searches come from each client, and are there abnormal peaks in usage from a specific client?

Answering these questions helps you plan better and prevent outages.
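
For example, here is a sketch of one query that could help with questions 2 and 5, run against the monitoring cluster used in the Logstash output above; depending on your Logstash version, the not_analyzed sub-field may be called index.raw or index.keyword:

curl -XPOST 'http://10.255.4.165:9200/logstash-*/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "top_indexes": {
      "terms": { "field": "index.raw" },
      "aggs": {
        "avg_responsetime": { "avg": { "field": "responsetime" } }
      }
    }
  }
}'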
