As high-performance computing gathers steam and is applied to an ever-widening variety of operations, researchers are eagerly looking for ways to improve the technology's capabilities.
The most recent example of innovation in this field can be found at Johns Hopkins University. As GCN reported, Edmond DeMattia, a senior system engineer and virtualization architect at the Combat Systems Development Facility within the school's Applied Physics Laboratory Air and Missile Defense Department, has managed to overcome a long-standing hurdle by combining HPC with virtualization.
The mother of invention
As the news source noted, virtualization offers many advantages, yet until this point it had not been successfully utilized in HPC and simulation environments. That is because simulation applications typically have operating system requirements that are simply incompatible with virtualization solutions.
DeMattia set out to resolve this conflict in order to improve the department's computing capabilities. This was challenging because of the heavy workloads shouldered by the school's simulation hardware.
"Some simulations take five seconds per task, and we run that same task up to a million times," DeMattia said, the news source reported. "While others may take 15 hours per task, but are only run 1,000 times."
Yet as GCN explained, these computing resources, while powerful, were not always in use. This resulted in a significant amount of wasted capacity. DeMattia realized that he needed to tap nontraditional computing methods in order to maximize efficiency and performance.
The source noted that when DeMattia initially set out to convert an HPC environment into a virtualized one, he anticipated a performance loss of 6 to 8 percent. Instead, he was pleasantly surprised to discover that the conversion led to a 2 percent performance gain.
Ultimately, the results were extremely positive and transformative.
"My team fundamentally redesigned how high-performance scientific computing is performed in the Air and Missile Defense Department by utilizing virtualization and distributed storage as the framework for pooling resources across multiple departments," DeMattia told the news source.
GCN explained that this new configuration reduced idle computing cycles while enabling greater run efficiency. In the context of millions of calculations, the relatively modest gains resulted in tremendous improvements.
DeMattia asserted that this process is not uniquely applicable to his department's needs. On the contrary, he claimed that other government labs can and should make use of virtualization within HPC environments, the source reported.
This is noteworthy considering the rapid expansion of HPC solutions in the government sphere and beyond. For example, the International Center for Biosaline Agriculture in Dubai recently implemented a new HPC solution to help the organization analyze the impact of climate change on water resources and agricultural capabilities in the Middle East and North Africa region. This project, part of the Modeling and Monitoring Agriculture and Water Resources Development Initiative, is funded by the U.S. Agency for International Development. As Dr. Rachael McDonnell, a water policy and governance scientist with the ICBA, pointed out, HPC tools are critical for efforts to maximize resource use in these areas.
"Technology is absolutely essential to our ability to deliver information to governments and public bodies which potentially leads to life-changing results," said McDonnell. "The implementation of Dell's HPC solution is key to our ability to analyze vast amounts data which can be used to improve the lives of people in the MENA region."
While virtualization may not be ideal for every HPC deployment, the potential for combining the two technologies holds tremendous promise for many types of research.