High-performance computing's role in research growing

May 21, 2014 • by Chris Bubinas

Ever since its advent, high-performance computing has proven critical for researchers in a variety of fields. Several recent announcements suggest this trend is only accelerating.

Rutgers University-Newark announced that it will soon deploy a high-performance computing cluster that will play a key role in improving research capabilities in numerous disciplines. The cluster, the Newark Massive Memory Machine (NM3), contains 1,500 processors and a tremendous amount of RAM and data-storage capacity.

“This will mean better science,” said Michele Pavanello, an assistant professor of theoretical chemistry at Rutgers University-Newark and the principal investigator on the grant for NM3. “And that could translate into more grant dollars in the future to help us expand the infrastructure.”

HPC benefits at Rutgers

The university reported that in the course of her investigation, Pavanello received input from a range of Rutgers University-Newark science faculty concerning their research, teaching requirements and goals. She then looked for a high-performance computing cluster that could adequately meet these needs.

For example, Bart Krekelberg, a professor with the university’s Center for Molecular and Behavioral Neuroscience, emphasized that in order to effectively map brain activity, researchers must use electrodes to record from hundreds of neurons at once.

“The more areas of the brain we can record simultaneously, and the more electrodes we use, the better picture we get of how it works,” said Krekelberg. “The technology to record brain activity this way is fairly new, and it requires immense computing power to process it, because even sophisticated desktops no longer cut it. They simply take too long to do the job.”

High-performance computing solutions are the ideal choice for such a task. The source noted that a typical brain-mapping session for Krekelberg may generate as much as 50GB of data. On the school’s current servers, it would take about 12 hours to pre-process this information, but NM3 can perform the task in one hour.
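The article does not describe how the NM3 pre-processing pipeline actually works, but the speedup it cites (roughly 12 hours down to one) is the kind of gain that comes from spreading an independent, chunk-by-chunk job across many cores. The sketch below is purely illustrative under that assumption: the chunking scheme, the placeholder moving-average filter, and the worker count are all hypothetical, not details from Rutgers.

```python
# Illustrative sketch only: toy data-parallel pre-processing of a long recording.
# The chunking, the filter, and the worker count are assumptions for demonstration;
# they are not taken from the NM3 pipeline described in the article.
from multiprocessing import Pool

import numpy as np


def preprocess_chunk(chunk: np.ndarray) -> np.ndarray:
    """Stand-in for a real pre-processing step (e.g., filtering or averaging)."""
    kernel = np.ones(5) / 5.0  # simple moving-average filter as a placeholder
    return np.convolve(chunk, kernel, mode="same")


def preprocess_recording(signal: np.ndarray, n_workers: int) -> np.ndarray:
    """Split a long recording into chunks and process them across n_workers cores."""
    chunks = np.array_split(signal, n_workers * 4)  # more chunks than workers keeps cores busy
    with Pool(processes=n_workers) as pool:
        processed = pool.map(preprocess_chunk, chunks)
    return np.concatenate(processed)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    recording = rng.standard_normal(10_000_000)  # toy stand-in for a session's data
    result = preprocess_recording(recording, n_workers=8)
    print(result.shape)
```

Because each chunk is processed independently, wall-clock time shrinks roughly in proportion to the number of cores available, which is why a 1,500-processor cluster can finish in an hour what a departmental server needs half a day to do.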

Rutgers University-Newark will initially provide its own faculty and graduate students with priority access to NM3. However, Pavanello indicated that she hopes to eventually expand NM3 and make its computing capabilities available to students and faculty at all three Rutgers campuses.

HPC in Delaware

Moving in a similar direction, the University of Delaware recently announced that it will deploy a second community HPC cluster, following the success of its Mills High-Performance Computing cluster. This new deployment will provide greater access to HPC resources for UD faculty and researchers.

The university noted that its community-cluster architecture is ideal for researchers, as it allows them to tap HPC power without significant cost or the burden of maintaining their own dedicated clusters. That combination of affordability and convenience is a major draw for many scientists.

“The Mills cluster has been one of the reasons why I accepted my position at UD,” said Cristina Archer, associate professor in the College of Earth, Ocean, and Environment. “Without it, I would not have been able to perform the computer-intensive simulations that I need for my research on turbulence generated by wind turbines.”

David Racca, a policy scientist in the Center for Applied Demography and Survey Research (CADSR), also emphasized the role HPC plays in his research. He noted that CADSR has collected a tremendous amount of GPS data from state vehicles over the past three years, and that processing a month’s worth of this information on the fastest available personal computer takes more than three weeks. With UD’s HPC cluster, the same job takes only three days.

As these two stories reveal, colleges, universities and other research centers are increasingly realizing that only by embracing high-performance computing can they maximize their researchers’ productivity and capabilities across a range of scientific disciplines.

Learn more:

• Read how the NICS group at the University of Tennessee solved complex issues when porting scientific libraries to the Intel Xeon Phi coprocessor (from Scientific Computing World)
• See how Lawrence Livermore National Laboratory used TotalView parallel debugging (PDF) to reduce development times on their IBM Blue Gene supercomputer
