Software Asset Management with Elasticsearch @ Red Mint Network | Elastic Blog

# Software Asset Management with Elasticsearch @ Red Mint Network

When your company has reached a critical size with an IT infrastructure that includes a substantial number of workstations and servers, managing your software assets and getting the real inventory picture can sometimes be a pain. By the time an audit comes around (e.g. on demand from Microsoft), it might be too late if you don't have the analytics to output accurate history data. How many users for a specific software application? What is the usage time per user? Do you have more running instances than the number of purchased licenses allow? Is anyone running illegal copies of the software? Or conversely, have you purchased more licenses than you actually need?

If you can't produce this information, you might find yourself exposed to excess costs from paying for unused licenses, billing adjustments, and legal fees. We'll explain how we enable smart, pragmatic software asset management with Elasticsearch and SDN (Software Defined Networking) telemetry.

## Network as a data source

Fortunately, a lot of software is quite verbose. It constantly contacts its vendor's servers for various purposes such as authorization requests, update checks, telemetry, or access to cloud services. This generates network traffic that can be dissected by a VNF (Virtual Network Function) in the Internet Service Provider's infrastructure and explored in Elasticsearch.

Knowing a network packet's source (where the software is installed), destination, and timestamp makes it possible to tell when a piece of software is up and running just by detecting meaningful events on the network link.

For instance, we've noticed that when Adobe® Photoshop® CC or Adobe® Illustrator® CC are running, they open a TLS-encrypted session with ans.oobesaas.adobe.com:

    {
      "@timestamp": "2017-03-06T12:42:05.230651762+01:00",
      "track_id": "fa9b4f9539a18daeb7578e47ea2fb0b6544ea527",
      "type": "track",
      "track": {
        "appname": "SSL",
        "host_hmac": "71b5e991310e75d05c3e242ab1fc86a5dce6f3e6",
        ...
      },
      ...
    }

Here, we use the anonymized field host_hmac to identify the data source (a computer, tablet, etc.) and we use the SNI (Server Name Indication) extracted from the TLS handshake to identify the destination.
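The blog doesn't describe the anonymization scheme itself, but a keyed hash would produce a stable, 40-character hex value like the host_hmac shown above. The sketch below assumes HMAC-SHA1 with a secret key; the key and hostname are made up for illustration:

```python
import hashlib
import hmac

def anonymize_host(hostname: str, secret: bytes) -> str:
    """Return a keyed hash of the hostname so devices can be counted
    and tracked over time without exposing their identity."""
    return hmac.new(secret, hostname.encode("utf-8"), hashlib.sha1).hexdigest()

# Hypothetical hostname and key
digest = anonymize_host("workstation-042.corp.example", b"my-secret-key")
print(len(digest))  # 40 hex characters, stable for the same inputs
```

Because the hash is keyed, the same device always maps to the same identifier, which is exactly what the cardinality and usage-time aggregations below rely on.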

Kibana shows this exchange being performed every 9 minutes.

That's enough information to determine the number of users on the network and the usage times in a given time window!

## Computing the number of users

In this example, we want to compute the number of users for each day of the previous week. First, we filter the data to select only the documents that match the relevant SNI within the requested date range (from now-7d/d to now/d). Then, we use the date histogram aggregation to gather the data into daily buckets. Finally, computing the number of users is straightforward using the cardinality aggregation on the host_hmac field.

Let's query Elasticsearch to get the number of Adobe software users for each day of the previous week:

    {
      "size": 0,
      "timeout": "1s",
      "aggs": {
        "telemetry": {
          "filter": {
            "bool": {
              "must": [
                {
                  "range": {
                    "@timestamp": {"gte": "now-7d/d", "lte": "now/d"}
                  }
                }
              ],
              "should": [
              ],
              "minimum_should_match": 1
            }
          },
          "aggs": {
            "time_slot": {
              "date_histogram": {
                "field": "@timestamp",
                "interval": "day"
              },
              "aggs": {
                "user_count": {"cardinality": {"field": "track.host_hmac"}}
              }
            }
          }
        }
      }
    }
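The same body can of course be built and submitted programmatically. Here is a minimal Python sketch that assembles a simplified version of the query above (date-range filter only; the SNI should-clauses are omitted, and the client call at the end is a hypothetical example using the official elasticsearch-py client):

```python
def daily_user_count_query(field: str = "track.host_hmac") -> dict:
    """Build the user-count aggregation body: a date-range filter,
    a daily date histogram, and a cardinality sub-aggregation."""
    return {
        "size": 0,
        "aggs": {
            "telemetry": {
                "filter": {
                    "range": {"@timestamp": {"gte": "now-7d/d", "lte": "now/d"}}
                },
                "aggs": {
                    "time_slot": {
                        "date_histogram": {"field": "@timestamp", "interval": "day"},
                        "aggs": {"user_count": {"cardinality": {"field": field}}},
                    }
                },
            }
        },
    }

# Hypothetical usage with the official Python client:
# from elasticsearch import Elasticsearch
# es = Elasticsearch("http://localhost:9200")
# resp = es.search(index="telemetry-*", body=daily_user_count_query())
```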

## Computing usage times

To compute usage times, a Time-To-Live value must be defined. If no signal is seen during this time period, the software is considered inactive. In the case of a periodic signal, the TTL must be slightly longer than the signal period to avoid glitches due to network latency or host slowdowns. The example below uses a 10-minute TTL (slightly longer than the 9-minute signal period observed for the Adobe® software).
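The idea is simple: sort each host's signal timestamps, sum the gaps between consecutive signals, and cap any gap longer than the TTL at one TTL. A standalone Python sketch of that capping logic (epoch-millisecond timestamps and the sample values are made up for illustration):

```python
def usage_time_ms(timestamps, ttl_ms=10 * 60 * 1000):
    """Estimate usage time from periodic signal timestamps (in ms).
    Gaps longer than the TTL are capped at one TTL; since prev starts
    at 0, the first signal also contributes one full TTL."""
    total = 0
    prev = 0
    for ts in sorted(timestamps):
        delta = ts - prev
        total += min(delta, ttl_ms)  # cap idle gaps at the TTL
        prev = ts
    return total

minute = 60 * 1000
base = 1489190400000  # arbitrary epoch-ms origin
# Signals every 9 minutes, then a 2-hour gap, then one last signal
signals = [base, base + 9 * minute, base + 18 * minute, base + 138 * minute]
print(usage_time_ms(signals) // minute)  # 38
```

The first signal counts for 10 minutes (capped), the two 9-minute intervals count in full, and the 2-hour gap is capped at 10 minutes, giving 38 minutes of estimated usage.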

Let's use the scripted metric aggregation to compute the usage time per user and return the average value.

This type of aggregation can be used to implement a Map/Reduce model with a combination of four scripts written in the Painless language.

### init

Initialize an empty hashmap to aggregate results per host.

    params._agg = [:];

### map

Gather the timestamp of each signal occurrence.

    def host = doc['track.host_hmac'].value;
    def ts = doc['@timestamp'].value;

    /* Gather signal timestamps for each host */
    params._agg.putIfAbsent(host, []);
    params._agg[host].add(ts);

### combine

Sort the results from each shard.

    for (ts_list in params._agg.values()) {
        ts_list.sort(Long::compare);
    }
    return params._agg;

### reduce

1. Merge the results
2. Compute the usage time per user
3. Compute and return the average value

    long ttl = 10 * 60 * 1000;
    long total_time = 0;
    def per_host = [:];
    int count;

    /* Merge the per-shard timestamp lists */
    for (agg in params._aggs) {
        if (agg == null) {
            continue;
        }
        for (e in agg.entrySet()) {
            def host = e.getKey();
            per_host.putIfAbsent(host, []);
            per_host[host].addAll(e.getValue());
        }
    }

    /* 0-division is evil */
    count = per_host.size();
    if (count == 0) {
        return 0;
    }

    /* Compute the usage time per host */
    for (ts_list in per_host.values()) {
        long prev = 0;
        ts_list.sort(Long::compare);

        for (ts in ts_list) {
            /* Cap gaps longer than the TTL at one TTL */
            long delta = ts - prev;
            total_time += delta > ttl ? ttl : delta;
            prev = ts;
        }
    }

    /* Return the average usage time per host, in seconds */
    return (total_time / count) / 1000;

As things stand, the scripted metric aggregation is still experimental, and pipelining its result into another aggregation does not appear to be supported yet. That's why the average is computed inside the Painless script rather than with the Avg Bucket aggregation.

### Displaying the data

These two aggregations can be combined into a single query:

    {
      "aggs": {
        "time_slot": {
          "date_histogram": {
            "field": "@timestamp",
            "interval": "day"
          },
          "aggs": {
            "user_count": {
              "cardinality": {
                "field": "track.host_hmac"
              }
            },
            "time_used": {
              "scripted_metric": {
                "init_script": "...",
                "map_script": "...",
                "combine_script": "...",
                "reduce_script": "..."
              }
            }
          }
        }
      }
    }

The result is easy to parse: for each day in the requested date range, we get the number of users and their average usage time.

"buckets": [
{
"key_as_string": "2017-03-11T00:00:00.000Z",
"key": 1489190400000,
"doc_count": 4352,
"user_count": { "value": 128 },
"time_used": { "value": 20340 }
},
{
"key_as_string": "2017-03-12T00:00:00.000Z",
"key": 1489276800000,
"doc_count": 4860,
"user_count": { "value": 180 },
"time_used": { "value": 16080 }
},
...
]

The result of this query is sufficient to display the two metrics in a single mixed bar/line chart constructed with Chart.js.
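Turning those buckets into chart input is a one-liner per series. A Python sketch (the seconds-to-hours conversion and the sample buckets reuse the values shown above; the helper name is ours):

```python
def chart_series(buckets):
    """Split aggregation buckets into the parallel lists a Chart.js
    mixed bar/line chart needs: date labels, user counts, and the
    average usage time converted from seconds to hours."""
    labels = [b["key_as_string"][:10] for b in buckets]          # "YYYY-MM-DD"
    users = [b["user_count"]["value"] for b in buckets]
    hours = [round(b["time_used"]["value"] / 3600, 2) for b in buckets]
    return labels, users, hours

buckets = [
    {"key_as_string": "2017-03-11T00:00:00.000Z", "key": 1489190400000,
     "doc_count": 4352, "user_count": {"value": 128}, "time_used": {"value": 20340}},
    {"key_as_string": "2017-03-12T00:00:00.000Z", "key": 1489276800000,
     "doc_count": 4860, "user_count": {"value": 180}, "time_used": {"value": 16080}},
]
labels, users, hours = chart_series(buckets)
print(labels)  # ['2017-03-11', '2017-03-12']
```

The three lists can then be fed directly to a bar dataset (users) and a line dataset (hours) sharing the same labels axis.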

Mining network data sources can yield powerful IT analytics when combined with the Elastic Stack. The ongoing Software Defined Networking and Virtual Network Function revolution is a great way to deliver new data services to customers in a data-as-a-service paradigm. Extracting signal from noise, giving it meaning with Elasticsearch, and presenting the data from a valuable perspective is what we do.

We hope you enjoyed reading this blog entry. Vianney Bajart @ Red Mint Network.

Adobe Photoshop CC and Adobe Illustrator CC are either registered or applied for trademarks of Adobe Systems Incorporated in the United States and/or other countries.