
Where This Supercomputer Is Going, There Are No Hard Drives

Hewlett Packard Enterprise is building The Machine to solve both big science and big data problems, using some technologies that don’t quite exist yet.

By Sean Captain

You’ve probably experienced a computer that gets bogged down when too many programs use up all the RAM, or random access memory. Programs demand more data than fits in the fast-access RAM, forcing them to wait on hard drives. Those drives hold several times more data but can take from a thousand to a million times longer to fetch it, leading, for instance, to the groans of gamers agonizing as the next level loads.

From laptops with the standard 8 gigabytes of memory to supercomputers with more than 200 times as much, there are always tasks that crave more and more RAM. The biggest appetites have come from large scientific experiments, like those run at Cambridge University’s COSMOS supercomputing center, founded by Stephen Hawking in 1997 to help probe the edges of cosmology, astrophysics, and particle physics.

Its projects are somewhat ambitious. One models how merging black holes warp the fabric of space-time across millions of light years. Another measures slight variations in temperature levels across the sky in order to figure out how the entire universe formed.

COSMOS’s previous RAM-packed supercomputer helped create this model of gravitational waves emanating from merging black holes. [Image: courtesy of Cambridge University]
But with the explosion of commercial big-data collection and machine learning, decidedly less cosmic tasks, like generating real-time, customized offers for online shoppers, increasingly resemble big-science computing projects. That creates a lot more customers for computer makers.

“Although the [supercomputing] market is growing at a healthy rate, that no way compares to the rate at which the machine learning and big data community is growing,” says Daniel Reed, a professor of computer science and electrical engineering at the University of Iowa.

So it is that Hewlett Packard Enterprise, a two-year-old spin-off from the iconic Silicon Valley company that still makes consumer hardware, is building similar supercomputers for both the physicists at Cambridge and traditional business clients. In 2016, HPE paid about $275 million for supercomputer manufacturer SGI, the maker of previous COSMOS systems, and rolled the company’s tech into a new line of computers, called Superdome Flex.

The computer maker is betting its future on ginormous-RAM computing to distinguish itself from competitors like Cray, Dell, and IBM. Cambridge’s new system has a heavy arsenal of RAM: 6 terabytes, or 750 times what a typical laptop has, and it can be expanded up to 48TB in the future. Still, Superdome Flex arrives at a difficult time for HPE, with flat revenue, a static stock price, and the surprise departure announcement of chief executive Meg Whitman on November 21.

Deals with Cambridge and other research centers could at least bring more attention to what’s made possible by big-memory computing. It enables things like exploring a dynamic model of the second-biggest bang in the universe, the merging of black holes. These events warp the fabric of space-time, creating the kind of gravitational waves that scientists first detected two years ago, after one merger had sent them rippling across a billion light years to reach Earth.

Paul Shellard with the Superdome Flex, and older SGI computers behind him. [Photo: courtesy of Cambridge University]
There are far larger supercomputers, but most are clusters of machines that require complex programming to divvy computing projects among their various nodes. That’s a task COSMOS researchers would like to avoid. “They’re experts in physics of the early universe and the Big Bang. They’re not computer scientists,” says COSMOS director Paul Shellard, describing many of his researchers.

Superdome Flex is the latest model in a genre of supercomputer that consolidates all its RAM through superfast connections, called memory fabric. This forms one giant pool of what’s called “shared memory,” allowing a supercomputer to function like a single, giant PC. Since its founding in 1997, COSMOS has been running on big-memory supercomputers built by SGI, and since 2014 has boasted the largest such computer in Europe. These machines enable researchers to start “naively” pursuing theories, says Shellard, taking rough ideas from a laptop right to a supercomputer without having to write new code. “We’ve always started out on a shared-memory system. That’s where you innovate first,” he says.
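To see why that matters, imagine the kind of “naive” analysis a physicist might first write on a laptop. On a shared-memory machine, the same straightforward code can simply be handed much bigger data, because every processor sees one pool of RAM; on a cluster, the data would first have to be carved up and shipped between nodes. The Python sketch below is purely illustrative, with made-up array sizes rather than anything COSMOS actually runs:

```python
import numpy as np

# Illustrative only: "naive" laptop-style analysis code that scales on a
# shared-memory machine simply by growing the arrays, because every core
# sees one pool of RAM. The sizes here are made up.
def correlate_maps(map_a, map_b):
    # A single sum over the whole data set, all in one address space.
    return float(np.dot(map_a, map_b))

laptop_map = np.random.rand(10_000)       # a prototype that fits anywhere
big_map = np.random.rand(50_000_000)      # a run that wants a big-RAM node

print(correlate_maps(laptop_map, laptop_map))
print(correlate_maps(big_map, big_map))

# On a cluster, the same calculation would first have to be partitioned
# across nodes (for example with MPI), with each node summing its own slice
# and the partial results combined afterward: extra plumbing the physicists
# would rather not write.
```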

To model merging black holes, Shellard’s team used a separate cluster supercomputer to do the heaviest calculations, then fed the results into a shared-memory computer (made by SGI) to assemble the visualization. It shows a dance of two black spheres circling each other, ever closer, producing a swirl of colors: the gravitational waves from the first merger ever detected, picked up by observatories in the U.S. on September 14, 2015, after a billion-year journey.

“This is like having a great big laptop,” says Shellard. “One of the things about a laptop is you can look at the data. You can visualize it.”

Science Meets Business

Hewlett Packard Enterprise sees the biggest opportunities, however, in more down-to-earth jobs that also require rapidly navigating masses of data. “Think about, you’re an e-commerce provider,” says Randy Meyer, VP for “mission-critical systems” at HPE. “If you have that knowledge about something I’m doing right now, something I’ve done in the past, and something other people like me may have done, the time to present that offer is now.”

This echoes an overall shift in computing, says Reed, who was once a VP for “extreme computing” at Microsoft. In the old days, scientists would first develop a theory and then test it out on the computer. Now machine learning lets vast amounts of data “tell” their own story.

HPE Superdome Flex [Photo: courtesy of HPE]
“It was moving down that path of ‘we know we want a set of known transactions to run against the data,'” says Reed, “to ‘there must be something interesting in this data, but we don’t know what it is. Let’s go try to find it.'”

A giant pool of RAM is like an open-world game for algorithms to explore, but without any of the load-time lag you might be used to from an actual game. That’s where “random access” is critical. A program can get any byte of data in RAM at any time just as easily as any other byte. By comparison, retrieving data from a hard drive requires a preconceived notion of what bytes to request, of what data will be important, and a comparatively lethargic process to get it.
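The difference is easy to feel in a few lines of code. The sketch below is a rough illustration, not a benchmark: it pulls the same values from a NumPy array held in RAM and then from the same data written to a file, jumping to arbitrary positions each time.

```python
import os
import time
import numpy as np

# A rough illustration (not a benchmark) of the random-access gap:
# jumping to arbitrary positions in RAM versus seeking around a file on disk.
N = 10_000_000
in_memory = np.arange(N, dtype=np.int64)          # the data, held in RAM

path = "scratch.bin"
in_memory.tofile(path)                            # the same data, on disk

indices = np.random.randint(0, N, size=100_000)   # an arbitrary access pattern

start = time.perf_counter()
total_ram = int(in_memory[indices].sum())         # any element, in any order
ram_seconds = time.perf_counter() - start

start = time.perf_counter()
total_disk = 0
with open(path, "rb") as f:
    for i in indices:                             # every jump is a seek + read
        f.seek(int(i) * 8)                        # 8 bytes per int64 value
        total_disk += int.from_bytes(f.read(8), "little")  # assumes a little-endian machine
disk_seconds = time.perf_counter() - start

print(total_ram == total_disk, ram_seconds, disk_seconds)
os.remove(path)
```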

Among other applications, open-world data exploration could boost cybersecurity, says Kirk Bresniker, the chief architect at Hewlett Packard Labs, by monitoring an entire corporate network at once. Malware has become extremely subtle in how it infiltrates a network and calls back to hackers’ machines. “They’re not just going to some subdomain that ends in ‘.ru,’” he says. “They are going from one compromised system to a second compromised system to a third compromised system.”

Lots of tiny, odd things may happen along the way, he says, like one of those computers connecting to a website with a bizarre-looking URL that comes online for only a few seconds. By following the whole chain of subtle oddities in connections, software can ascertain that something fishy is happening, claims Bresniker. “You can’t just look at one relationship. You have to look at second order, third order. It’s like the six degrees of Kevin Bacon,” he says.
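In code, that kind of multi-hop sleuthing amounts to a graph search over connection records held in memory. The toy Python sketch below uses invented host names and a made-up suspicious domain; a real deployment would walk billions of connections, which is exactly why so much RAM helps.

```python
from collections import deque

# A toy sketch of multi-hop "six degrees" analysis over connection records
# held in memory. The host names and the odd-looking domain are invented.
connections = {
    "laptop-17": ["fileserver-2", "weird-domain.example"],
    "fileserver-2": ["db-server-1"],
    "db-server-1": ["backup-host"],
    "backup-host": [],
    "weird-domain.example": [],
}

def reachable_within(graph, start, max_hops=3):
    """Breadth-first walk: every machine within max_hops of `start`."""
    hops = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if hops[node] == max_hops:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in hops:
                hops[neighbor] = hops[node] + 1
                queue.append(neighbor)
    return hops

# Which systems sit within three hops of the host that touched the odd URL?
print(reachable_within(connections, "laptop-17"))
```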

https://www.youtube.com/watch?v=vEPHdNtJD74

Speculative Technology

Hewlett Packard Enterprise is riding a larger industry trend toward computers that preload more and more data from slow hard drives into fast RAM, says Daniel Reed. And the company has a daring end goal: an all-RAM supercomputer with no hard drives. The biggest thing required to make it economical is something called persistent memory, RAM that holds onto data without a constant supply of power, a technology that doesn’t quite exist yet. “I guess the real takeaway is, I’m all-in on this,” says Meyer.

In the meantime, Hewlett Packard Enterprise is faking its ideal computer by jerry-rigging one with today’s technology, a project it began in 2014. “It was a call we made pretty early on that we would approximate persistent memory today,” says Bresniker, “and make it look persistent just by keeping the power on.” The prototype, which went online in May 2017, carries a name that sounds deceptively, audaciously simple, and also not unlike a professional wrestler: The Machine.

HPE engineers have to wear headphones because of all the noise from cooling fans in The Machine prototype in Fort Collins, Colorado. [Photo: courtesy of HPE]
Meyer calls Superdome Flex “The Machine 0.5.” It fits into a 10-year plan to roll parts of The Machine into shipping products as the new technologies it requires become practical. One example: fiber-optic memory fabric that sends information with pulses of light rather than electrons. “I’m going to have a commercialized platform we can drop those [technologies] into, aligned with other technologies, so that customers don’t have to wait for a big bang–no pun intended,” says Meyer. “They can start taking advantage of those things now.”

The technical challenge for The Machine is not to produce enough RAM, but to not go broke paying its electric bill. A hard drive holds data securely even when the power’s off, earning it the name “nonvolatile” memory. The price of instant access for RAM, however, is volatility: every bit of data needs a constant charge to keep from fading.

But future RAM might not work that way. In 2008, for instance, HP Labs created a metallic material that could be rejiggered on the atomic level when zapped with electricity, thereby changing the amount of electrical resistance it provides. That alteration, which remains after the current is turned off, is a way to record information. The process can be reversed with another zap of electricity, creating a method to quickly write and rewrite data.

HP predicted that its memory resistor, or memristor, might eliminate the need for hard drives some day. Meanwhile, Intel and several other companies have been backing a rival technology, called phase-change memory, that applies current to change the crystalline structure of a special type of glass in order to alter its electrical resistance. There has even been talk of these and other nonvolatile memory technologies functioning as synthetic neurons in AI devices that mimic brains.

A component of The Machine running in Hewlett Packard Labs. [Photo: courtesy of HPE]
A decade later, memristors remain a science experiment, although Hewlett Packard Enterprise and hard-drive maker Western Digital continue working on a commercial product. Phase-change memory is further along, with small chips available. (In October, IBM announced that it had run AI applications inside a phase-change chip.) “I would say they’re right at the cusp in transitioning, and maybe have begun the transition, from science experiments,” says Daniel Reed, about persistent memory.

Instead of waiting, though, HP decided in 2014 to build a pretend version of its computer of the future using standard RAM. The project continued after the company split in 2015, and Hewlett Packard Enterprise brought its first prototype version of The Machine online earlier this year, with a rack of servers holding 160 terabytes of RAM.

While Bresniker’s hardware team was building the prototype, he asked his software team to imagine how they would use The Machine. “It took them about a year and a half of thinking about the kinds of changes we had, about me giving them systems that were successive approximations.” The security application that monitors every connection on an entire corporate network at once was one of their ideas.

Another was a new way to predict the value of financial futures. Instead of number crunching using predictive models for each trade, the programmers found that, with enough RAM, they could explore every variation on a financial model at once and store all the results in RAM. Afterwards, any future trade could be calculated by looking up the pre-computed results in a table. “They would come up with an answer that was just as accurate, but 8,000 and in some cases 10,000 times faster,” says Bresniker. “And when you can do a business transaction 10,000 times faster . . . it changes the business you are in.”
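The trick is essentially a giant lookup table. The Python sketch below is a heavily simplified, hypothetical stand-in, not HPE’s actual code: it precomputes a toy pricing function over a whole grid of inputs, keeps every result in memory, and answers each “trade” with a lookup instead of a fresh calculation.

```python
import numpy as np

# Hypothetical, heavily simplified stand-in for an expensive valuation model.
def price_model(rate, volatility):
    return 100.0 * np.exp(-rate) * (1.0 + volatility ** 2)

# A parameter grid: 100 interest-rate levels by 100 volatility levels.
rates = np.linspace(0.00, 0.10, 100)
vols = np.linspace(0.05, 0.55, 100)

# Precompute every combination once, up front, and keep it all in RAM.
table = np.array([[price_model(r, v) for v in vols] for r in rates])

def price_trade(rate, volatility):
    # At trade time there is no model run at all, just an in-memory lookup of
    # the nearest precomputed grid point (a real system would interpolate).
    i = int(np.abs(rates - rate).argmin())
    j = int(np.abs(vols - volatility).argmin())
    return float(table[i, j])

print(price_trade(0.025, 0.20))
```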

Bresniker then met researchers from the German Center for Neurodegenerative Disease, known by its German acronym, DZNE. The institution focuses heavily on Alzheimer’s by analyzing huge data sets, like making comparisons of entire genomes. Hewlett Packard Enterprise loaned DZNE one of its older systems, called Superdome X, to try running its operations on a big-memory machine.

The researchers sped up calculations 100-fold. “And then we go beyond that,” said researcher Joachim Schultze at a Hewlett Packard Enterprise conference in June. “We want to integrate with imaging data, with a patient’s history, with their drug data, and so on.” DZNE will soon get its own Superdome Flex, with one and a half terabytes of RAM.

HPE’s unremarkable green logo appears on a Superdome Flex module. [Photo: courtesy of HPE]
In addition to debuting The Machine this year, HPE also sent a smaller-scale supercomputer to the International Space Station in August for a year of testing in the harsh environment. And HPE was also one of six companies, including IBM, Intel, and Nvidia, to win a U.S. Department of Energy contract to build the country’s next-generation “exascale” supercomputer, which aims to win the title of world’s most powerful computer back from China. HPE will be advocating its memory fabric technology and large-memory designs, even though Intel recently struck a blow against the effort when it opted not to join HPE’s consortium supporting the technology.

Cross-Checking Loads Of Copies Of The Universe

Whether or not Hewlett Packard Enterprise’s ultimate vision for The Machine materializes, computers with giant pools of RAM have already left their mark on physics, says Paul Shellard. His team is studying a key phase in the evolution of the universe: its transition from a foggy sphere of light into a mass containing the first atoms.

This event, about 400,000 years after the Big Bang, is the source of the cosmic microwave background, or CMB, the oldest “light” that astronomers are able to see. The CMB isn’t perfectly even. Slight variations in intensity show unevenness in the primordial mass that would later grow into intergalactic features.

“It’s the blueprint of what the universe will become later on,” says Shellard. “Because later on, gravitational forces take over, and galaxies form, and stars form. And planets and you and I appear.”

The better COSMOS can read those subtle variations in the CMB, the more detailed that blueprint becomes. COSMOS is now crunching data from the European Space Agency’s Planck telescope, which orbits the sun about a million miles beyond the Earth. Planck has returned terabytes of data that COSMOS is analyzing to distinguish minute differences in cosmic microwave background intensity. “It involves actually having loads of copies of the universe, which you filter in different ways, and then you cross-correlate them with each other,” says Shellard.

COSMOS has been measuring these variations in two dimensions, which requires 1.5 terabytes of RAM. With the new computer, Shellard plans to use far-larger three-dimensional models of the universe that may ultimately require 20 terabytes.

“Now you’ve got to keep all these copies of the three-dimensional universe in memory, and you’re cross-correlating those with each other looking for patterns,” says Shellard. “We’re really expecting these programs to fly much faster on this new system.” Putting all this data in memory also means that researchers can change parameters on the fly and see the effects in real time.
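Conceptually, the workflow resembles the small-scale Python sketch below, which is only a rough illustration: several differently filtered copies of one sky map are held in memory at once and then cross-correlated pair by pair. The map and filters here are invented stand-ins for the spherical-harmonic machinery used on real Planck data.

```python
import numpy as np
from itertools import combinations

# A rough, small-scale stand-in for the real analysis: several differently
# filtered copies of one sky map held in memory at once, then cross-correlated
# pair by pair. The map and filters here are invented.
rng = np.random.default_rng(0)
sky_map = rng.normal(size=(1024, 1024))          # stand-in for a CMB map

def smooth(m, width):
    # Crude separable box filter standing in for a real band-pass filter.
    kernel = np.ones(width) / width
    m = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, m)
    return np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"), 0, m)

# Keep every filtered copy resident in memory simultaneously.
copies = {w: smooth(sky_map, w) for w in (3, 9, 27)}

# Cross-correlate every pair of copies, looking for shared structure.
for (wa, a), (wb, b) in combinations(copies.items(), 2):
    corr = float(np.mean((a - a.mean()) * (b - b.mean())) / (a.std() * b.std()))
    print(wa, wb, round(corr, 4))
```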

Though he just got a new computer, Shellard is already looking forward to more gear. “We’re certainly encouraging HP[E] to make their system bigger and bigger,” he says. “They can put lots more into their systems. It’s a matter of us affording it.”


ABOUT THE AUTHOR

Sean Captain is a business, technology, and science journalist based in North Carolina.

