The University of Maryland recently unveiled a new high-performance computing (HPC) system to upgrade the school's research capabilities.
The supercomputer, called Deepthought2, is one of the most advanced university-owned HPC systems in the country, according to the University of Maryland. Deepthought2 will improve research capabilities for a wide range of subjects, from the formation of galaxies to fire and combustion simulations.
"Deepthought2 places the University of Maryland in a leadership position in the use of high-performance computing in support of diverse and complex research," said Ann G. Wylie, professor and interim vice president for information technology. "This new supercomputer will allow hundreds of university faculty, staff, and students to pursue a broad range of research computing activities locally – such as multi-level simulations, big data analysis and large-scale computations – that previously could only be run on national supercomputers."
The university noted that the new HPC system replaces the original Deepthought, which was installed in 2006. Deepthought2 offers computing speeds 10 times faster than the original, features 1 petabyte of storage, and can complete 250 to 300 trillion operations per second.
"High performance computing is key, and Deepthought2's expanded capabilities will further Maryland's research funding competitiveness and help UMD researchers bring in new grant money to fund the science enabled by such a powerful local facility," said Derek Richardson, professor of astronomy.
An accelerating trend
HPC systems are not new to American universities, but the prominence and importance of this technology are growing. Last month, for example, Rutgers University-Newark announced the addition of a High Performance Computing Cluster known as NM3 (Newark Massive Memory Machine).
Similarly, the University of Delaware added a second community cluster to its Mills HPC cluster to increase researchers' ability to explore advanced engineering issues.
With this growth, it's important for HPC researchers to rely on development tools and techniques that reduce the incidence of failure and performance degradation. Using a debugging tool capable of handling multiple cores and processes simultaneously, for example, is a good way to ensure HPC systems are operating at their maximum potential.
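The article names no specific debugger, so as an illustration only, here is a minimal Python sketch of the underlying idea: checking each process's partial result in the process that produced it, so a failure is caught at its source rather than in the combined output. The `worker` and `run_job` names, the four-process pool, and the toy workload are all hypothetical.

```python
# Minimal sketch of per-process validation in a parallel job,
# using Python's standard multiprocessing module.
from multiprocessing import Pool


def worker(rank):
    # Each "rank" computes a partial sum over its own slice of the work.
    partial = sum(range(rank * 100, (rank + 1) * 100))
    # Per-process sanity check: a bad partial result fails here,
    # in the process that produced it, which makes it far easier
    # to locate than a wrong final answer.
    assert partial >= 0, f"rank {rank}: negative partial sum"
    return rank, partial


def run_job(nprocs=4):
    # Fan the work out across nprocs worker processes.
    with Pool(processes=nprocs) as pool:
        results = pool.map(worker, range(nprocs))
    total = sum(partial for _, partial in results)
    # Cross-check the parallel result against the serial computation.
    assert total == sum(range(nprocs * 100)), "parallel/serial mismatch"
    return total


if __name__ == "__main__":
    print(run_job())
```

Real HPC codes apply the same principle at larger scale, typically via parallel debuggers that can attach to many MPI ranks at once instead of hand-written assertions.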