
The U.S. Just Picked Intel, IBM, Nvidia, And Others To Help Make Supercomputers 50 Times Faster

The U.S. aims to take back the title for fastest computer that it lost to China in 2013.

[Photo: Flickr user U.S. Department of Energy]

In 2013, the United States ceded the supercomputing crown to China, whose Sunway TaihuLight machine is over five times as powerful as the U.S. government’s fastest system, called Titan. But the Department of Energy is working on a new generation of supercomputers that are 50 times faster than Titan—a performance level described as “exascale.” Today, DoE awarded $258 million in contracts to six U.S. companies—AMD, Cray (maker of Titan), Hewlett Packard Enterprise, IBM, Intel, and Nvidia—to build an exascale system by 2021. The companies themselves are expected to add at least $172 million of their own funding to the project, since they will also benefit from the new technologies developed.


Hewlett Packard Enterprise’s RAM-based computer, dubbed The Machine, previews some of the technology that may be part of the Exascale Computing Project. [Photo: HPE]
Technological contributions to the Exascale Computing Project will vary. In the first phase, companies will be designing technologies that could potentially be used in a future system, says Hewlett Packard Enterprise (HPE). It will be pushing a new computer architecture that keeps data in monstrous amounts of superfast RAM, where it can be worked on all at once rather than shuttled to and from slower hard drives. This pool of memory will act as a “fabric” linking the processors that work on the data. HPE introduced a prototype of this technology, called The Machine, last month.
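The broad idea behind that memory-driven approach can be sketched in a few lines of code. This is a minimal, hypothetical illustration rather than HPE's actual technology: it contrasts repeatedly pulling chunks of a dataset off disk with operating on data that already sits in RAM, on a single ordinary machine (the file name, data size, and chunking scheme are arbitrary choices).

```python
# Illustrative only: a disk-shuttling pass over a dataset vs. an in-memory pass.
# The Machine's real design pools memory across many processors over a fabric;
# this sketch just shows the data-locality idea on one machine.
import time
import numpy as np

N_CHUNKS, CHUNK = 64, 250_000                    # ~16 million float64 values (~128 MB)
data = np.random.rand(N_CHUNKS * CHUNK)

# Disk-centric workflow: write the data out, then read it back chunk by chunk.
np.save("dataset.npy", data)
on_disk = np.load("dataset.npy", mmap_mode="r")  # chunks fetched from disk on demand
t0 = time.perf_counter()
total = 0.0
for i in range(N_CHUNKS):
    chunk = np.array(on_disk[i * CHUNK:(i + 1) * CHUNK])  # copy this chunk into RAM
    total += chunk.sum()
print(f"disk-shuttling pass: {time.perf_counter() - t0:.3f} s")

# Memory-centric workflow: the whole dataset already sits in RAM and is processed in place.
t0 = time.perf_counter()
total = data.sum()
print(f"in-memory pass:      {time.perf_counter() - t0:.3f} s")
```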

That memory-oriented approach seems to jibe with IBM’s philosophy. “High performance computing is shifting from a purely compute-centric model to one that is much more data-centric,” writes James Sexton of IBM Research in an email to Fast Company. “The ability to manage, train, sort and analyze the data ‘where it resides’ and ‘as it flows’ through the system is critical to gaining insight quickly from the plethora of data sources vs. traditional [computing methods].”

Nvidia’s contribution will be more-powerful graphics processing units for artificial intelligence—a generation or more beyond the new Volta chips going into DoE computers already slated for 2018. Beyond the hardware, DoE is also charging the companies with writing powerful software to run the monstrous calculations.

What can it do?

You’d be forgiven for not grasping what the performance goal of 1,000 petaflop/s (quadrillions of calculations per second) means for real-world applications. DoE lists plenty of potential examples, such as simulating the effects of earthquakes, predicting the effectiveness of cancer treatments, designing nuclear fusion reactors, and modeling the aerodynamics of giant wind farms. An April 2017 DoE presentation on phenomena that the system can model even includes the words “climate change”—once in a 30-page document.
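To put the target in perspective, the arithmetic is straightforward. Here is a quick sketch, assuming Titan's roughly 17.6-petaflop/s measured (Linpack) score, a figure drawn from published benchmark lists rather than from DoE's announcement:

```python
# Back-of-the-envelope scale check for the exascale target.
EXAFLOP = 1e18        # 1 exaflop/s: a billion billion calculations per second
PETAFLOP = 1e15       # 1 petaflop/s: a quadrillion calculations per second
TITAN_PFLOPS = 17.6   # assumed measured (Linpack) performance of Titan, in petaflop/s

target_pflops = EXAFLOP / PETAFLOP      # 1,000 petaflop/s
speedup = target_pflops / TITAN_PFLOPS  # roughly 50-60x, depending on which Titan figure is used
print(f"Exascale target: {target_pflops:,.0f} petaflop/s")
print(f"About {speedup:.0f}x Titan's measured performance")
```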

One of the Exascale Computing Project’s green goals is to boost performance without requiring more energy. But even if DoE succeeds, it will still have a pretty high electric bill. The best-case scenario is a system that consumes 20 megawatts, enough to power about 20,000 homes.
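Those figures also pin down an efficiency target, as a quick back-of-the-envelope calculation shows (the exaflop/s performance goal and the 20-megawatt, 20,000-home comparison come from the project's stated numbers; the rest is simple division):

```python
# Quick check of the power and efficiency numbers implied above.
SYSTEM_WATTS = 20e6   # 20 megawatts
HOMES = 20_000
TARGET_FLOPS = 1e18   # 1 exaflop/s

watts_per_home = SYSTEM_WATTS / HOMES                # 1,000 W per home, on average
gflops_per_watt = TARGET_FLOPS / SYSTEM_WATTS / 1e9  # ~50 gigaflop/s per watt
print(f"Implied average household draw: {watts_per_home:.0f} W")
print(f"Implied efficiency target: {gflops_per_watt:.0f} gigaflop/s per watt")
```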

About the author

Sean Captain is a technology journalist and editor. Follow him on Twitter @seancaptain.
