Getting Started

Warning

This functionality is experimental and may be changed or removed completely in a future release. Elastic will take a best effort approach to fix any issues, but experimental features are not subject to the support SLA of official GA features.

To use the Rollup feature, you need to create one or more "Rollup Jobs". These jobs run continuously in the background and rollup the index or indices that you specify, placing the rolled documents in a secondary index (also of your choosing).

Imagine you have a series of daily indices that hold sensor data (sensor-2017-01-01, sensor-2017-01-02, etc.). A sample document might look like this:

{
  "timestamp": 1516729294000,
  "temperature": 200,
  "voltage": 5.2,
  "node": "a"
}
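
For reference, here is a sketch of how one of these documents might be indexed. The sensor-2018-01-23 index name is a hypothetical choice matching the day of the sample timestamp, and the _doc type and document ID are arbitrary illustrations:

PUT sensor-2018-01-23/_doc/1
{
  "timestamp": 1516729294000,
  "temperature": 200,
  "voltage": 5.2,
  "node": "a"
}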

Creating a Rollup Job

We’d like to rollup these documents into hourly summaries, which will allow us to generate reports and dashboards at any time interval of one hour or greater. A rollup job might look like this:

PUT _xpack/rollup/job/sensor
{
    "index_pattern": "sensor-*",
    "rollup_index": "sensor_rollup",
    "cron": "*/30 * * * * ?",
    "page_size" :1000,
    "groups" : {
      "date_histogram": {
        "field": "timestamp",
        "interval": "1h",
        "delay": "7d"
      },
      "terms": {
        "fields": ["node"]
      }
    },
    "metrics": [
        {
            "field": "temperature",
            "metrics": ["min", "max", "sum"]
        },
        {
            "field": "voltage",
            "metrics": ["avg"]
        }
    ]
}

We give the job the ID of "sensor" (in the URL: PUT _xpack/rollup/job/sensor), and tell it to rollup the index pattern "sensor-*". This job will find and rollup any index that matches that pattern. Rollup summaries are then stored in the "sensor_rollup" index.

The cron parameter controls when and how often the job activates. When a rollup job’s cron schedule triggers, it will begin rolling up from where it left off after the last activation. So if you configure the cron to run every 30 seconds, the job will process the last 30 seconds’ worth of data that was indexed into the sensor-* indices.

If instead the cron was configured to run once a day at midnight, the job would process the last 24 hours’ worth of data. The choice is largely a matter of preference, based on how "realtime" you want the rollups to be, and whether you wish to process continuously or move the work to off-peak hours.
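
For example, a daily midnight schedule could be expressed with a cron expression like the following (a sketch; only the cron parameter changes, the rest of the job body stays as above):

"cron": "0 0 0 * * ?"

The six fields are seconds, minutes, hours, day-of-month, month, and day-of-week, matching the syntax of the */30 example above, so this fires at 00:00:00 every day.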

Next, we define a set of groups and metrics. The metrics are fairly straightforward: we want to save the min/max/sum of the temperature field, and the average of the voltage field.

The groups are a little more interesting. Essentially, we are defining the dimensions that we wish to pivot on at a later date when querying the data. The grouping in this job allows us to use date_histogram aggregations on the timestamp field, rolled up at hourly intervals. It also allows us to run terms aggregations on the node field.

For more details about the job syntax, see Rollup Job Configuration.

After you execute the above command and create the job, you’ll receive the following response:

{
  "acknowledged": true
}
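
If you want to double-check the stored configuration, you can retrieve it with the Get Rollup Jobs endpoint (a sketch, assuming the same _xpack path convention as above):

GET _xpack/rollup/job/sensor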

Starting the job

After the job is created, it will be sitting in an inactive state. Jobs need to be started before they begin processing data (this allows you to stop them later as a way to temporarily pause, without deleting the configuration).

To start the job, execute this command:

POST _xpack/rollup/job/sensor/_start
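
Later, if you want to temporarily pause the job without deleting its configuration, there is a corresponding _stop endpoint (a sketch, mirroring the _start call):

POST _xpack/rollup/job/sensor/_stop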

Searching the Rolled results

After the job has run and processed some data, we can use the Rollup Search endpoint to do some searching. The Rollup feature is designed so that you can use the same Query DSL syntax that you are accustomed to… it just happens to run on the rolled up data instead.

For example, take this query:

GET /sensor_rollup/_rollup_search
{
    "size": 0,
    "aggregations": {
        "max_temperature": {
            "max": {
                "field": "temperature"
            }
        }
    }
}

It’s a simple aggregation that calculates the maximum of the temperature field. But you’ll notice that it is being sent to the sensor_rollup index instead of the raw sensor-* indices. And you’ll also notice that it is using the _rollup_search endpoint. Otherwise the syntax is exactly as you’d expect.

If you were to execute that query, you’d receive a result that looks like a normal aggregation response:

{
  "took" : 102,
  "timed_out" : false,
  "terminated_early" : false,
  "_shards" : ... ,
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  },
  "aggregations" : {
    "max_temperature" : {
      "value" : 202.0
    }
  }
}

The only notable difference is that Rollup search results have zero hits, because we aren’t really searching the original, live data any more. Otherwise it’s identical syntax.

There are a few interesting takeaways here. First, even though the data was rolled up with hourly intervals and partitioned by node name, the query we ran is just calculating the max temperature across all documents. The groups that were configured in the job are not mandatory elements of a query; they are just extra dimensions you can partition on, as the sketch below illustrates. Second, the request and response syntax is nearly identical to normal DSL, making it easy to integrate into dashboards and applications.
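
For instance, here is a sketch of a query that pivots only on the node grouping we configured, ignoring the time dimension entirely:

GET /sensor_rollup/_rollup_search
{
    "size": 0,
    "aggregations": {
        "nodes": {
            "terms": {
                "field": "node"
            },
            "aggs": {
                "max_temperature": {
                    "max": {
                        "field": "temperature"
                    }
                }
            }
        }
    }
}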

Finally, we can use those grouping fields we defined to construct a more complicated query:

GET /sensor_rollup/_rollup_search
{
    "size": 0,
    "aggregations": {
        "timeline": {
            "date_histogram": {
                "field": "timestamp",
                "interval": "7d"
            },
            "aggs": {
                "nodes": {
                    "terms": {
                        "field": "node"
                    },
                    "aggs": {
                        "max_temperature": {
                            "max": {
                                "field": "temperature"
                            }
                        },
                        "avg_voltage": {
                            "avg": {
                                "field": "voltage"
                            }
                        }
                    }
                }
            }
        }
    }
}

Which returns a corresponding response:

{
  "took" : 93,
  "timed_out" : false,
  "terminated_early" : false,
  "_shards" : ... ,
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  },
  "aggregations" : {
    "timeline" : {
      "meta" : { },
      "buckets" : [
        {
          "key_as_string" : "2018-01-18T00:00:00.000Z",
          "key" : 1516233600000,
          "doc_count" : 6,
          "nodes" : {
            "doc_count_error_upper_bound" : 0,
            "sum_other_doc_count" : 0,
            "buckets" : [
              {
                "key" : "a",
                "doc_count" : 2,
                "max_temperature" : {
                  "value" : 202.0
                },
                "avg_voltage" : {
                  "value" : 5.1499998569488525
                }
              },
              {
                "key" : "b",
                "doc_count" : 2,
                "max_temperature" : {
                  "value" : 201.0
                },
                "avg_voltage" : {
                  "value" : 5.700000047683716
                }
              },
              {
                "key" : "c",
                "doc_count" : 2,
                "max_temperature" : {
                  "value" : 202.0
                },
                "avg_voltage" : {
                  "value" : 4.099999904632568
                }
              }
            ]
          }
        }
      ]
    }
  }
}

In addition to being more complicated (a date histogram and a terms aggregation, plus an additional average metric), you’ll notice the date_histogram uses a 7d interval instead of 1h. Because the data was rolled up at hourly intervals, queries are free to aggregate at any interval of one hour or greater.

Conclusion

This quickstart should have provided a concise overview of the core functionality that Rollup exposes. There are more tips and things to consider when setting up Rollups, which you can find throughout the rest of this section. You may also explore the REST API for an overview of what is available.