The Hubble Space Telescope has a nearly eight-foot-wide (2.4-meter) mirror that clean-suited technicians scrubbed free of every dust particle before it launched into orbit. As the mirror gazes into space, it holds its eye open for 20 minutes at a time, absorbing every bit of light to see the faintest of stellar objects.
But there’s a physical limit to how much detail can be discerned from all those photons, which means even the Hubble eventually picks up a lot of what’s known as noise, or static in the picture. That’s why a team of Swiss researchers has developed a neural network (an artificial intelligence that plays out across a human-inspired digital brain) that has learned how to take fuzzy telescope photography and turn it into sharp, discernible images.
To develop the system, the researchers built two neural nets and made them compete, a setup known as a generative adversarial network (a dark but fairly typical way that scientists develop the best neural nets). The nets were trained on the same set of data, which paired pristine telescope images of galaxies and nebulae with Photoshop-fuzzified alternatives.
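The "fuzzified" training copies can be imagined as sharp images run through a blur plus added pixel noise. Here is a minimal numpy sketch of that idea; the function name `degrade` and the specific blur and noise parameters are hypothetical stand-ins, not the researchers' actual pipeline.

```python
import numpy as np

def degrade(image, blur_sigma=2.0, noise_sigma=0.05, seed=0):
    """Make a 'fuzzified' copy of a sharp square image: convolve it with a
    Gaussian point-spread function (via FFT, treating edges as wrapping
    around), then sprinkle in Gaussian pixel noise. Hypothetical stand-in
    for the paper's real degradation step."""
    n = image.shape[0]
    # Build a Gaussian PSF centred on pixel (0, 0) so FFT convolution
    # introduces no shift; distances wrap around the image edges.
    y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    d2 = np.minimum(y, n - y) ** 2 + np.minimum(x, n - x) ** 2
    psf = np.exp(-d2 / (2 * blur_sigma ** 2))
    psf /= psf.sum()  # normalise so blurring preserves total brightness
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))
    rng = np.random.default_rng(seed)
    return blurred + rng.normal(0.0, noise_sigma, image.shape)

# Usage: degrade a random "sharp" frame into its fuzzy training partner.
rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
fuzzy = degrade(sharp)
```

Training on such pairs is what lets the network learn the inverse mapping: it only ever sees (fuzzy, sharp) examples and must guess the sharp member from the fuzzy one.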
How did the best system do? You can see the images for yourself here. On the left are examples of the training set, which include the original photograph and the Photoshopped distortion. On the right you’ll see the image that the net was able to reproduce from a trained guess, alongside the result of the “deconvolution” method (essentially an image-restoration algorithm that mathematically reverses a known blur) that NASA has deployed in the past.
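For context on that baseline, a classic deconvolution algorithm used in astronomy is Richardson–Lucy, which iteratively nudges an estimate so that, blurred by the telescope's known point-spread function, it matches the observed image. The sketch below is a minimal numpy version under simplifying assumptions (square images, wrap-around edges, noiseless data); it is illustrative, not NASA's actual implementation.

```python
import numpy as np

def fft_convolve(img, psf):
    """Circular 2-D convolution via FFT (edges wrap; fine for a sketch)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

def richardson_lucy(observed, psf, iterations=50, eps=1e-12):
    """Richardson-Lucy deconvolution: repeatedly re-weight the estimate by
    how the observed image compares to the estimate blurred by the PSF."""
    estimate = np.full_like(observed, observed.mean())
    # Mirror of the PSF under circular indexing (the adjoint blur).
    psf_mirror = np.roll(psf[::-1, ::-1], shift=(1, 1), axis=(0, 1))
    for _ in range(iterations):
        blurred = fft_convolve(estimate, psf) + eps
        ratio = observed / blurred
        estimate = estimate * fft_convolve(ratio, psf_mirror)
    return estimate

# Usage: blur two point-like "stars", then try to recover them.
n = 64
truth = np.zeros((n, n))
truth[20, 20] = 1.0   # bright star
truth[40, 45] = 0.5   # fainter star
y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
d2 = np.minimum(y, n - y) ** 2 + np.minimum(x, n - x) ** 2
psf = np.exp(-d2 / (2 * 2.0 ** 2))
psf /= psf.sum()
blurry = fft_convolve(truth, psf)
restored = richardson_lucy(blurry, psf)
```

Deconvolution like this needs an explicit model of the blur and tends to struggle with faint, extended structure, which is exactly where the article says the neural net pulls ahead.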
The amazing part? The net was able to reproduce the faintest, most delicate structures. “It is able to recover features that the deconvolution cannot, such as star-forming regions, dust lanes, and the shape of spiral arms,” write the researchers. You can see this for yourself in the comparison above: a spiral galaxy’s wispy arms are cut off by deconvolution but glow with an ethereal presence when reproduced by the neural net.
The system isn’t perfect. The researchers shared several of their “fail” images (which don’t look horrible to the naked eye, to be honest), and admit that their neural net can’t handle galaxies at redshifts outside its training data (redshift being the stretching of light toward redder colors as galaxies recede farther and farther away), because it has yet to be trained on that information. That’s a fixable problem, but it teases a larger point.
Much like a dictionary knows only the words written inside it, leaving it ignorant of modern slang, a neural net’s knowledge is built from very specific sets of information, which means things it hasn’t seen before are harder to parse, or even impossible for the net to identify.
So neural nets may be able to scrub through old Hubble imagery to find new galaxies that fit the common spiral/elliptical/lenticular/irregular designations. But as for discovering the unseen wonders of the universe that have yet to be classified? When the day comes, we may have to spot the aliens ourselves.