High-performance computing's reach continues to expand as more universities, companies and government organizations adopt these solutions to scale up their analytics capabilities.
One particularly noteworthy application of HPC is in the realm of climate research. At the recent IBM Edge conference in Las Vegas, Pamela Gillman, manager of data analysis services groups for the National Center for Atmospheric Research, spoke to SiliconANGLE's theCUBE about how her team leverages HPC to learn more about the climate.
Climate data's scope
As Gillman explained, climate research embraces a much longer-term view than weather forecasting. Consequently, climate researchers must incorporate a tremendous amount of data into their work. She told the news source that some researchers focus on climate patterns spanning 1,000-year periods.
Going even further, Gillman noted that one group she works with is studying the state of the climate during the Paleolithic era. The purpose of this and other research is to determine the feasibility and best practices for predicting climate changes over time.
For the NCAR to conduct its climate research, HPC is critical. Gillman explained that the NCAR comprises approximately half a dozen labs, while she works specifically with two groups. One is the team working on Paleolithic climate data and how it can be applied to future forecasting, while the other devotes its energy to hurricane prediction.
This work takes place at the Computational and Information Systems Laboratory, a 25,000 square foot supercomputer center, the news source reported.
When asked to elaborate on the role played by HPC, Gillman pointed to the combination of these tools with big data resources. She told the news source that her first run for the Intergovernmental Panel on Climate Change (IPCC), an exercise conducted every four to five years, produced a manageable 100 terabytes of data. However, the next run resulted in so much data that the NCAR wasn't able to curate all of it, although it remains available to the wider community. Gillman's group then runs analytics on this data in portions, in conjunction with HPC solutions.
"[T]he group managing that data can use their resources to pull variables out, package data sets and then deliver to customers," the news source explained.
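The "analytics in portions" approach described above can be sketched in a few lines. This is purely an illustrative example, not NCAR's actual pipeline: the synthetic data generator, the chunk size and the statistic computed are all assumptions, but the pattern is the same one used when a dataset is far too large to hold in memory at once.

```python
# A minimal sketch of running analytics "in portions": computing a
# global mean over a dataset too large to load whole, one chunk at a time.
# The synthetic data source stands in for reading slices of a large archive.

def chunks(total, size):
    """Yield the data one portion at a time (a stand-in for reading
    successive slices of a huge dataset from disk)."""
    for start in range(0, total, size):
        yield list(range(start, min(start + size, total)))

count = 0
running_sum = 0.0
for portion in chunks(total=1_000, size=128):
    # Each portion is analyzed and discarded before the next is loaded,
    # so memory use stays bounded regardless of the archive's size.
    running_sum += sum(portion)
    count += len(portion)

mean = running_sum / count
print(mean)  # mean of 0..999 -> 499.5
```

The key design point is that only one portion is resident at a time; the running totals carry everything needed between chunks.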
Gillman asserted that supercomputer users' responsibilities have changed significantly in recent years. In the past, their job was simply to ensure that the HPC machines were as fast as possible. Once the data was produced, it became someone else's responsibility to work with.
At NCAR, however, data storage has been centralized into a single pool.
"So, what we're trying to do is shift to where, as the data is produced, somebody can look at it, and they don't have to move it," said Gillman, SiliconANGLE reported.
Changes to HPC
As the NCAR deployment highlights, usage of HPC tools is evolving as more organizations utilize these resources in a growing number of ways.
Yet there are challenges inherent to this evolution. For example, Natalie Bates of the Energy Efficient High Performance Computing Working Group recently spoke with Scientific Computing about the ongoing struggle to minimize the energy usage of HPC solutions. While some progress has been made on this front, the reliance on silicon-based components means that no major breakthroughs are likely for quite some time.
More immediately, organizations using HPC solutions can get more from these assets by optimizing memory use and performance with debugging tools capable of handling multiple processes or threads simultaneously. Efficient debugging lets HPC developers spend more of their time on research.
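To illustrate what "handling multiple threads simultaneously" means in practice, here is a minimal Python sketch, not any particular vendor's tool, of the core operation a parallel debugger performs: capturing the current stack of every live thread in a single snapshot. The worker function and thread names are illustrative assumptions.

```python
import sys
import threading
import time
import traceback

def dump_all_threads():
    """Capture the current stack of every live thread in one pass,
    the kind of whole-program view a parallel debugger provides."""
    names = {t.ident: t.name for t in threading.enumerate()}
    report = []
    for ident, frame in sys._current_frames().items():
        report.append(f"=== {names.get(ident, ident)} ===\n")
        report.extend(traceback.format_stack(frame))
    return "".join(report)

def worker(release):
    # Block here so the thread is alive when the snapshot is taken.
    release.wait()

release = threading.Event()
workers = [threading.Thread(target=worker, args=(release,),
                            name=f"worker-{i}") for i in range(3)]
for t in workers:
    t.start()
time.sleep(0.2)  # give every worker time to reach its wait()

snapshot = dump_all_threads()  # one report covering all threads at once

release.set()
for t in workers:
    t.join()

print(snapshot)
```

Real HPC debuggers extend this idea across processes and compute nodes, but the principle is the same: inspect every execution stream in one coherent view rather than one thread at a time.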