Using Nmap + Logstash to Gain Insight Into Your Network | Elastic Blog

# Using Nmap + Logstash to Gain Insight Into Your Network

In this post we'll look at a brand new Logstash codec plugin: logstash-codec-nmap. This plugin lets you import Nmap scan results directly into Elasticsearch, where you can then visualize them with Kibana. Nmap is somewhat hard to describe because it's a sort of Swiss Army knife of network tools, cramming many different features into a single small executable. I've put together a small list of things you can do with Nmap below, though it is by no means complete!

• Ping one or more hosts and discover the RTT for the ping
• Issue traceroutes to one or more hosts
• Check one or more hosts for one or more open ports
• Scan a network for all open hosts and ports
• Attempt to detect the OS of a host and the application running on a port
• Check for complex states, like the OpenSSL Heartbleed vulnerability, via custom NSE scripts
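
For reference, here's roughly how a few of the tasks above map onto Nmap invocations. The hostnames, ports, and subnet here are placeholders; this is a sketch, not an exhaustive reference:

```shell
# Ping a host and report the round-trip time (root needed for raw ICMP)
sudo nmap -sP example.net

# Traceroute to a host alongside a ping scan
sudo nmap --traceroute -sP example.net

# Check specific ports on a host
nmap -p 22,80,443 example.net

# Scan a whole subnet for live hosts and open ports
nmap 192.168.1.0/24

# OS and service/version detection (OS detection needs root)
sudo nmap -O -sV example.net

# Check for Heartbleed using the bundled NSE script
nmap --script ssl-heartbleed -p 443 example.net
```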

Using Logstash, Elasticsearch, and Kibana you can create neat dashboards, like the one I have for my home LAN below:

## Monitoring Host Availability with Nmap

Let's start by poking around with some Nmap basics. Say we simply want to check whether a host is up or not with an ICMP ping. We can do this by running `sudo nmap -sP example.net`. You should see output like the below:

```
Starting Nmap 7.01 ( https://nmap.org ) at 2016-01-26 12:28 CST
Nmap scan report for example.net (93.184.216.34)
Host is up (0.020s latency).
Other addresses for example.net (not scanned): 2606:2800:220:1:248:1893:25c8:1946
Nmap done: 1 IP address (1 host up) scanned in 0.21 seconds
```

You may be wondering why we need to run our ping command with sudo. The reason is that raw ICMP packets can only be sent as root; without root privileges Nmap falls back to a TCP connect-based check. This is also why the ping command is setuid root on most platforms. While the output here is nice and human readable, it is not something Logstash can parse. To get machine-readable XML output you'll need the `-oX <filename>` option; passing `-` as the filename redirects the XML to stdout. Let's try running `sudo nmap -sP example.net -oX -`. You should see the same information, but more verbose and in XML format.

Now that we've got our bearings, let's set up a Logstash server to receive this data. To understand this setup, let's quickly recap what a Logstash codec is. Logstash codecs simply provide a way to specify how raw data should be decoded, regardless of source. This means we can use the Nmap codec to read Nmap XML from a variety of inputs; we could read it off a message queue or via syslog, for instance, before passing the data on to the Nmap codec.
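
As a sketch of that flexibility, the same codec could just as easily hang off a TCP input instead of the HTTP input we'll use below (the port number here is arbitrary):

```
input {
  tcp {
    port => 5044
    codec => nmap
  }
}
```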

A very flexible solution for a lot of people is the Logstash HTTP input. This input sets up a webserver inside the Logstash process which listens for requests and turns each request body into a Logstash event, in our case using the nmap codec. For now we'll use the rubydebug codec on stdout to let us see the parsed Nmap data. Try out the config below:

```
input {
  http {
    host => "127.0.0.1"
    port => 8000
    codec => nmap
  }
}

output {
  stdout {
    codec => rubydebug
  }
}
```

Then, in your Logstash folder, run `bin/plugin install logstash-codec-nmap`. Once you have the Nmap codec installed you can start Logstash with `bin/logstash -f my_config`. Logstash is now ready to receive Nmap XML on port 8000.

You can send a simple ping by running the following in your shell. We'll use cURL to transport the Nmap XML to Logstash.

```
nmap -sP example.net -oX - | curl -H "x-nmap-target: example.net" http://localhost:8000 --data-binary @-
```

Note that we're using `-oX -` with Nmap to send XML to stdout, and `--data-binary @-` with cURL to use stdin as the request body. We're also setting a custom header, x-nmap-target; the Logstash HTTP input will make this header available to us as part of our events.

After sending this request you should see a bunch of output from Logstash in your terminal. There should be two Logstash events: one with a type of nmap_host, the other nmap_scan_metadata (note that this 'type' field is distinct from the Logstash convention of '@type'). You should also see a bunch of HTTP metadata from the HTTP input, including our custom x-nmap-target header.

## Nmap Codec Event Types

OK! So, now we know how to get some basic info out of Nmap. Let's take a deeper look at the data coming out of the Nmap codec, which does some restructuring and denormalization of the Nmap XML.

Since we did a ping scan, those were the only types of event created. Richer Nmap scans, however, produce more event types, listed below:

• nmap_scan_metadata: An object containing top-level information about the scan, including how many hosts were up and how many were down. Useful when you need to check whether a DNS-based hostname resolves at all; if it doesn't, both of those numbers will be zero.
• nmap_host: One event is created per host. This is the full data covering an individual host, including open ports and traceroute information as a nested structure.
• nmap_port: One event is created per host/port pair. This duplicates data already in nmap_host; it was added for the case where you want to model ports as separate documents in Elasticsearch (which Kibana prefers).
• nmap_traceroute_link: One of these is created per traceroute 'connection', with a from and a to object describing each hop. Note that traceroute hop data is not always exact, since each probe packet may take a different route. These events are also very useful for Kibana visualizations.
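
Because each event carries its type in that 'type' field, you can branch on it with a Logstash conditional, for example to route the different event types to different Elasticsearch indexes. A minimal sketch (the index names here are made up for illustration):

```
output {
  if [type] == "nmap_host" {
    elasticsearch { index => "nmap-hosts-%{+YYYY.MM.dd}" }
  } else if [type] == "nmap_traceroute_link" {
    elasticsearch { index => "nmap-traceroute-%{+YYYY.MM.dd}" }
  }
}
```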

## A Small Network Monitor

Using the Elasticsearch output and Kibana we can set up a more fully featured example. This is something I run on my own home network to check a few different things. Here we'll use one of Nmap's more powerful features: the ability to target an entire subnet at once. My home network runs on the subnet 192.168.1.0/24, for instance. We can turn on pretty much all the useful options with the -A flag, which will, per the Nmap documentation, "Enable OS detection, version detection, script scanning, and traceroute", giving us the command below.

```
sudo nmap -A 192.168.1.0/24 -oX - | curl -H "x-nmap-target: local-subnet" http://localhost:8000 --data-binary @-
```

Note that we've used a different descriptive HTTP header here. We could use this in our Logstash configuration to help separate and filter the output of different cURL commands, though in this case we will not.

Next, to put this into Elasticsearch in a sane manner we'll need an Elasticsearch mapping. Luckily, I've created one that handles the necessary document types for this demo, which you can find here. You'll want to download this file somewhere locally, then point to it in this config, where "./elasticsearch_nmap_template.json" is specified. Since this mapping template is radically different from the default Logstash template, we've configured the Logstash Elasticsearch output to send data to timestamped indexes prefixed with nmap-logstash-* instead of the usual logstash-*, to prevent template collisions.
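
The relevant output section might look roughly like the sketch below. The template path and index pattern follow the discussion above; the hosts value and template_name are placeholders you'd adjust for your own cluster:

```
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "nmap-logstash-%{+YYYY.MM.dd}"
    template => "./elasticsearch_nmap_template.json"
    template_name => "nmap"
  }
}
```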

If you put that in place with a simple cron job and let it run for a while, you'll see some interesting results from your network over time. Once we have some data we can run some tests and see what we get! Loading the results up in Kibana, I can use this data to break down OSes on my network over time, like so:

You could also use this data to chart your typical outbound network routes by aggregating on the from.ttl field of the produced nmap_traceroute_link documents, and correlate that with those links' RTTs. There's a rich set of data in these documents, too rich to go into detail here. I highly recommend browsing the details of the rubydebug output to see what's possible.
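
As a rough illustration of that aggregation, an Elasticsearch query like the one below would bucket traceroute links by hop TTL and average an RTT field per bucket. The field names here are assumptions based on the from/to structure described above; check your own rubydebug output for the exact names:

```
GET nmap-logstash-*/_search
{
  "size": 0,
  "aggs": {
    "hops_by_ttl": {
      "terms": { "field": "from.ttl" },
      "aggs": {
        "avg_rtt": { "avg": { "field": "to.rtt" } }
      }
    }
  }
}
```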

## Visualizing Outbound Routes

We might also be interested in our outbound connectivity, not just what's live on the network. We can use Nmap both to discern our outbound routes via traceroutes and to determine whether we've lost connectivity to the outside world. To do that we'll need to, as you might imagine, hit some targets outside our network. We can do this by running the following Nmap command:

```
sudo nmap --traceroute -sP example.net -oX - | curl -H "x-nmap-target: remote-check" http://localhost:8000 --data-binary @-
```

This will test whether we can ping the outside world, and provide a traceroute that can help diagnose network problems as well. If you put this in cron along with the previous command, you can check host uptime and ping time, as well as the latency to each hop along the path.
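
One way to wire both scans up is a root crontab along these lines (the schedule is a placeholder; sudo is unnecessary when the jobs already run as root):

```
# m h dom mon dow  command
*/15 * * * * nmap -A 192.168.1.0/24 -oX - | curl -H "x-nmap-target: local-subnet" http://localhost:8000 --data-binary @-
*/5  * * * * nmap --traceroute -sP example.net -oX - | curl -H "x-nmap-target: remote-check" http://localhost:8000 --data-binary @-
```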

## Next Steps

This module is in its early stages! Look for more from me regarding this codec in coming blog posts.