Los Alamos National Laboratory receives new HPC system

on August 6, 2014 • by Chris Bubinas

High performance computing continues to grow in popularity, giving research organizations and private firms alike advanced capabilities that simply are not available through more conventional systems.

The latest organization to expand its use of HPC technology is the Los Alamos National Laboratory. Its new system, called Wolf, can operate at 197 teraflops and contains 19.7 terabytes of memory. Wolf users have access to 86.3 million central processing unit (CPU) core hours per year.

"This machine modernizes our mid-tier resources available to Laboratory scientists," said Bob Tomlinson, a member of the Los Alamos National Laboratory's High Performance Computing group. "Wolf is a critical tool that can be used to advance many fields of science."

The Los Alamos National Laboratory said it will use Wolf to conduct research in areas such as climate science, astrophysics modeling and materials analysis. Such efforts will help ensure the laboratory remains a world leader in high performance computing and computational science, particularly with regard to national security issues.

HPC expanding
Implementations such as this one are increasingly common. A new IDC report found that the worldwide HPC market is expected to grow through 2018, THE Journal noted.

"HPC technical server revenues are expected to grow at a healthy rate because of the crucial role they play in economic competitiveness as well as scientific progress," said Earl Joseph, program vice president for technical computing at IDC, the news source reported. "As the global race toward exascale computing fuels the high end of the market, more small and medium-sized businesses and research organizations are exploiting HPC servers for advanced simulations and high performance data analysis."

As HPC grows, it’s imperative for organizations to adopt tools that are built to handle multiple CPUs and processes, and support common HPC platform architectures. Choosing scalable debugging tools from the outset, for example, will help reduce development times and shorten the time it takes to localize and fix problems on live systems.
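To make the multi-process angle concrete, here is a minimal sketch of the kind of code those tools have to handle: a small MPI program (not tied to Wolf or to any particular vendor's debugger) in which the same source runs as many independent processes, so a defect may show up only on one rank or only at a certain scale.

```c
/* Minimal MPI example: each rank computes a partial value and rank 0
 * collects the total. Bugs in code like this (for instance, a reduction
 * that only misbehaves past a certain rank count) are exactly the kind
 * of problem scalable, multi-process debugging tools aim to localize. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes its own value; a per-rank breakpoint or
     * watchpoint would typically be placed on local_value here. */
    long local_value = rank + 1;
    long global_sum = 0;

    MPI_Reduce(&local_value, &global_sum, 1, MPI_LONG,
               MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Sum over %d ranks: %ld (expected %ld)\n",
               size, global_sum, (long)size * (size + 1) / 2);
    }

    MPI_Finalize();
    return 0;
}
```

Built with mpicc and launched with, say, mpirun -np N, the one binary becomes N separate processes; a debugger designed for HPC attaches to all of them at once rather than one at a time, which is what makes it practical at the scale of a machine like Wolf.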
