Remember the old cautions to keep magnets away from your computer and use a surge protector? What about exposing it to cosmic rays for a year? That’s what NASA finished testing earlier this year, after running a standard Hewlett Packard Enterprise (HPE) business computer aboard the International Space Station. Instead of bulky shielding, the system used software tricks to monitor dangerous conditions and power down or make other adjustments to survive hazards like radiation spikes and unexpected power outages.
With three or four months to go before the system gets a ride back to Earth for more testing, NASA decided to put the system, an HPE Apollo 4000-series enterprise server, to work doing real science experiments on the ISS.
“We’ve been scheduled to return to Earth on SpaceX 17, which is in late February or early March,” says Mark Fernandez, lead developer for HPE’s Spaceborne Computer program. “Therefore we can open up the supercomputer on the ISS for advancing other types of space exploration.”
The Apollo 4000 is technically a “supercomputer,” because it can perform one trillion floating-point operations per second–a teraflop. That’s now a routine piece of equipment for corporations on Earth, but it’s more processing power than all the other computers on the ISS combined, says Fernandez. “We’ve got 32 [computing] cores onboard,” he says. “We could run [at least] 32 virtual machines and address the computational needs of 32 experiments.”
NASA has a lot more computing power on the ground, but it has limited bandwidth between the ground and the ISS. The main benefit of having a powerful computer on the station is to do the initial processing of data from cameras or other sensors, then select, compress, and beam down just the relevant information. Fernandez gives image analysis of the Earth as an example.
“If you’re collecting 4K images and looking for something specific, which you already have the image processing software to find…we could do that on board,” he says.
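The workflow Fernandez describes–process imagery onboard, keep only the frames of interest, compress them, and downlink just that–can be sketched roughly as follows. This is a hypothetical illustration, not HPE's or NASA's actual software: the frame data and the stand-in `detect_feature` check are invented for the example, with `zlib` standing in for whatever compression a real downlink pipeline would use.

```python
# Hypothetical sketch of the onboard "process, select, compress, downlink"
# pattern described above. The detector and frame data are invented for
# illustration; a real system would run actual image-analysis software.
import zlib

def detect_feature(frame: bytes) -> bool:
    # Stand-in for real image analysis: flag frames containing a marker byte.
    return b"\xff" in frame

def preprocess_for_downlink(frames):
    """Keep only frames of interest, compress them, and report the savings."""
    selected = [f for f in frames if detect_feature(f)]
    compressed = [zlib.compress(f) for f in selected]
    raw_bytes = sum(len(f) for f in frames)
    sent_bytes = sum(len(c) for c in compressed)
    return compressed, raw_bytes, sent_bytes

# Ten simulated "camera frames"; only the last two contain the feature.
frames = [bytes(1024) for _ in range(8)]
frames += [b"\xff" * 1024, bytes(512) + b"\xff" * 512]

payload, raw, sent = preprocess_for_downlink(frames)
print(f"frames kept: {len(payload)}, raw: {raw} bytes, downlinked: {sent} bytes")
```

The point of the pattern is that the expensive step (analysis) runs where compute is plentiful and bandwidth is scarce, so only a small fraction of the raw sensor data ever crosses the space-to-ground link.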
What he doesn’t say is what those “specific” things might be. “I can’t discuss specifics right now,” he says, other than to say they might involve “climate.” Fernandez doesn’t answer whether those tasks might also include jobs for the military.
IT service for ET
Only so much can happen in a few months, though. Fernandez says that it is “still under consideration” whether the ISS might permanently house its own supercomputer in the future.
That depends in part on further tests of the system once it returns to Earth. Some components–the Intel “Broadwell” processors, plus the RAM, solid-state drives, and other parts from various vendors–will get a thorough evaluation by their manufacturers, who will know where they’ve been.
Others will go into a kind of blind study, submitted as if they were defective components from a regular Earth-based computer. “We’re gonna send them some [components] that didn’t fail and say that they failed,” says Fernandez. “And we’ll see if they come back and say, ‘There’s nothing wrong with this product.'”
Powerful computers become ever more valuable the farther astronauts are from Earth, as the bandwidth decreases and the delay increases–a one-way signal delay of up to 24 minutes between Earth and Mars.
“If you head out to Mars with humans,” says Fernandez, “you’re going to want a pretty redundant, self-reliant, self-healing system that’s quite capable of doing some calculations. And you may not know what those are [ahead of time].”
The radiation doses get higher the farther a computer (or astronaut) gets from the Earth’s sheltering magnetic field (which Mars lacks). So next steps will include testing computers farther out, says Fernandez. Future tests might include putting smaller versions of the computer on higher-flying satellites or even piggybacking on possible tourist trips around the Moon. “This is all just blue sky discussions at the moment,” he says. Or rather, black sky.