Using Elasticsearch to Manage a Supercomputer’s Hot, Warm, and Cold Architecture

What were the drivers in making 2016 the hottest year on record? How might massive particle accelerators be replaced by desktop devices? What gravitational forces were at work in forming the early universe? As scientists around the world strive to answer these and other critical questions, many turn to a unique supercomputer for the computations, data analyses, and simulations crucial to their work: the National Energy Research Scientific Computing Center (NERSC).

NERSC is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. More than 6,000 users rely on NERSC to conduct data-intensive research that will help solve fundamental problems in science and engineering. To meet their needs, NERSC runs two massive petaflop systems: Edison (a Cray XC30 system) and Cori (a Cray XC40 system). According to the November 2016 TOP500 list of the world’s most powerful supercomputers, Cori — which features a unique mix of more than 10,000 Haswell and Knights Landing nodes — is the fifth most powerful system in the world.

To achieve their scientific goals, NERSC users — a group that includes more than one Nobel Prize recipient — must be able to access and work with their log and metric data when needed. That could be today, tomorrow, or a decade from now. This presents NERSC with a significant challenge: how can they manage all the data in an efficient way? At Elastic{ON} 2017, two members of NERSC’s Operations Technology Group — Thomas Davis and Cary Whitney — shared how Elasticsearch helps make that possible.

“We’ve been told, ‘You can’t use Elasticsearch as a time series [metrics] database.’ But we do,” said Davis. “To do it, we built what we call a hot, warm, and cold architecture.” Built on this architecture, their data collection system organizes information according to whether the storage need is short term (a few days, possibly weeks), longer term (generally weeks or months), or very long term.
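Davis and Whitney didn’t share their exact configuration on stage, but the standard way to build a hot/warm split in Elasticsearch of that era is shard allocation filtering: tag each data node with a custom attribute, pin new indices to the hot (SSD) nodes, and retag them as they age so their shards migrate to the warm tier. The sketch below illustrates the idea with the elasticsearch-py client; the box_type attribute, index names, and endpoint are assumptions for illustration, not NERSC’s actual settings.

```python
# Illustrative sketch (not NERSC's published configuration): hot/warm tiering
# via shard allocation filtering, shown with the elasticsearch-py client
# (7.x-style calls). Assumes each data node was started with a custom
# attribute in elasticsearch.yml, for example:
#
#   node.attr.box_type: hot    # SSD-backed "hot" nodes
#   node.attr.box_type: warm   # RAID 5 + LVM-cache "warm" nodes

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # placeholder endpoint

# New daily metrics indices land on the hot (SSD) nodes via an index template.
es.indices.put_template(
    name="metrics",
    body={
        "index_patterns": ["metrics-*"],
        "settings": {"index.routing.allocation.require.box_type": "hot"},
    },
)

# Once an index ages out of the hot window, retag it and Elasticsearch
# migrates its shards to the warm nodes on its own.
es.indices.put_settings(
    index="metrics-2017.03.01",  # example daily index name
    body={"index.routing.allocation.require.box_type": "warm"},
)
```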

“When we say long term, we mean forever,” said Davis. “When it gets onto the [High Performance Storage System (HPSS)], we never, ever delete anything out of that system.” Some of the information on the system is more than 30 years old. But that doesn’t mean it won’t be used: NERSC needs to store it, ensuring it can be retrieved when required.

The hot storage nodes consist of an SSD-based system configured for speed, not space. The warm nodes — for storage of up to a year — add disk space but still allow relatively rapid retrieval. “The warm storage nodes we build using a [RAID] 5 array drive mixed with what’s called a [Logical Volume Manager (LVM)] cache. We combine the two together to give me an SSD that fronts the RAID 5 array,” said Davis.

NERSC’s cold storage is built on GlusterFS. “We snapshot onto the Gluster, we take the data off the Gluster, put it in HPSS. If we need to restore, we pull from HPSS back onto the Gluster, then take that and restore it back in using Elastic[search].” Altogether, NERSC currently dedicates 90TB of disk space to Elasticsearch to manage their massive time series database.
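The flow Davis describes maps onto Elasticsearch’s shared-filesystem snapshot repository: register the GlusterFS mount as an fs repository, snapshot aging indices into it, archive the snapshot files to HPSS, and reverse the process to restore. Here is a hedged sketch with elasticsearch-py; the repository name, mount path, and index patterns are made up, and moving files between Gluster and HPSS happens outside Elasticsearch.

```python
# Illustrative sketch (assumptions, not NERSC's actual tooling): snapshot and
# restore against a shared-filesystem repository using elasticsearch-py.
# The location below would be a GlusterFS mount listed under path.repo in
# elasticsearch.yml on every node.

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # placeholder endpoint

# Register the GlusterFS mount as a snapshot repository (path is hypothetical).
es.snapshot.create_repository(
    repository="gluster_cold",
    body={"type": "fs", "settings": {"location": "/mnt/gluster/es-snapshots"}},
)

# Daily cold-storage step: snapshot the indices leaving the warm tier.
es.snapshot.create(
    repository="gluster_cold",
    snapshot="metrics-2016.03",             # example snapshot name
    body={"indices": "metrics-2016.03.*"},  # example index pattern
    wait_for_completion=True,
)

# Years later: pull the snapshot files back from HPSS onto the Gluster mount,
# then restore them into the cluster.
es.snapshot.restore(
    repository="gluster_cold",
    snapshot="metrics-2016.03",
    wait_for_completion=True,
)
```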

“The whole idea of the data collect…was to make sure that we had one location that had everything. One location, one access method, so that you weren’t doing a SQL query from here, a flat file look-up from here, and something else from there, and trying to get it all together. One place,” said Whitney.

So exactly how do Davis, Whitney, and the rest of the team at NERSC use Elasticsearch and Curator 4 to manage their data? Watch the full session from Elastic{ON} 2017 to find out, including how they tag nodes, what goes into retrieving and restoring data, and how they handle their daily cold storage routine.
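The session doesn’t walk through NERSC’s Curator configuration line by line, but Curator 4 ships a Python API that can express the kind of daily routine described above: select indices by age, reallocate them to warm nodes, and snapshot-then-delete the ones headed for cold storage. The following is a rough sketch under those assumptions; every index pattern, attribute name, and retention window is invented for illustration.

```python
# Sketch only: how a daily tiering routine *could* be expressed with the
# Elasticsearch Curator 4 Python API. The metrics-* pattern, box_type
# attribute, and retention windows are illustrative assumptions.

import curator
from elasticsearch import Elasticsearch

client = Elasticsearch(["http://localhost:9200"])  # placeholder endpoint

# Move daily metrics indices older than 7 days from hot (SSD) to warm nodes.
warm = curator.IndexList(client)
warm.filter_by_regex(kind="prefix", value="metrics-")
warm.filter_by_age(source="name", direction="older",
                   timestring="%Y.%m.%d", unit="days", unit_count=7)
curator.Allocation(warm, key="box_type", value="warm",
                   allocation_type="require").do_action()

# Snapshot indices older than a year to the GlusterFS-backed repository,
# then delete them from the cluster (the snapshots live on in HPSS).
cold = curator.IndexList(client)
cold.filter_by_regex(kind="prefix", value="metrics-")
cold.filter_by_age(source="name", direction="older",
                   timestring="%Y.%m.%d", unit="days", unit_count=365)
curator.Snapshot(cold, repository="gluster_cold",
                 name="metrics-archive", wait_for_completion=True).do_action()
curator.DeleteIndices(cold).do_action()
```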


[Image: NERSC’s Elasticsearch data sources for supercomputer metrics]