The Hotel NERSC Data Collect: Where Data Checks In, But Never Checks Out
The NERSC data collect system provides access to 30 TB of logs and time-series data generated by the supercomputers at Berkeley Lab. This talk follows the life of an index inside the cluster, from initial tagging and node routing, through snapshot/restore and the use of aliases to combine indexes, to archiving on high-disk-capacity nodes built from generic hardware. Thomas and Cary will also highlight several aspects of using Elasticsearch as a large, long-term data storage engine, including index allocation tagging, index aliases, Curator and scripts to generate snapshots, long-term archiving of those snapshots, and restoration.
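As a rough sketch of the lifecycle stages named above, the snippet below builds the JSON bodies one would send to the standard Elasticsearch APIs for allocation tagging, alias combining, and snapshotting. The index names, the `box_type` attribute value, and the repository name are illustrative assumptions, not details from the talk.

```python
import json

# 1. Allocation tagging: pin an index's shards to nodes carrying a custom
#    attribute (nodes would be started with e.g. node.attr.box_type: hot).
#    Sent as: PUT /metrics-2017.01/_settings
allocation_settings = {
    "index.routing.allocation.require.box_type": "hot"
}

# 2. Aliases: query many per-month indexes through a single combined name.
#    Sent as: POST /_aliases
alias_actions = {
    "actions": [
        {"add": {"index": "metrics-2017.01", "alias": "metrics-all"}},
        {"add": {"index": "metrics-2017.02", "alias": "metrics-all"}},
    ]
}

# 3. Snapshot/restore: archive an index to a registered repository; the live
#    index can then be dropped and restored later on demand.
#    Sent as: PUT /_snapshot/archive_repo/snap-2017.01
snapshot_body = {
    "indices": "metrics-2017.01",
    "include_global_state": False,
}

print(json.dumps(alias_actions, indent=2))
```

In practice Curator or a small script would generate the snapshot requests on a schedule rather than issuing them by hand, which is the workflow the talk describes.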
With over 25 years of Linux experience and an engineering degree from the University of Nebraska, Thomas is the architect, metering guru, and project lead of NERSC's environmental data collection system. The system also serves as a general-purpose, one-stop shop for collecting system logs, metrics, and alerts from all NERSC systems. He has also been instrumental in introducing new technologies to NERSC.
Cary is a computer scientist who has always been involved with data gathering and presentation. Since coming to LBL in 1999, he has monitored first the High Energy Physics cluster (PDSF) and later the High Performance Computing (HPC) systems.