Astronomers Deploy AI to Unravel the Mysteries of the Universe

Monday, March 06, 2017

Astronomer Kevin Schawinski has spent much of his career studying how massive black holes shape galaxies. But he isn’t into dirty work—dealing with messy data—so he decided to figure out how neural networks could do it for him. Problem is, he and his cosmic colleagues suck at that sophisticated kind of coding.
That changed when another professor at Schawinski’s institution, ETH Zurich, sent him an email and CCed Ce Zhang, who actually is a computer scientist. “You guys should talk,” the email said. And they did: Together, they plotted how they could take leading-edge machine-learning techniques and superimpose them on the universe. And recently, they released their first result: a neural network that sharpens up blurry, noisy images from space. Kind of like those scenes in CSI-type shows where a character shouts “Enhance! Enhance!” at gas station security footage, and all of a sudden the perp’s face resolves before your eyes.
Schawinski and Zhang’s work is part of a larger automation trend in astronomy: Autodidactic machines can identify, classify, and—apparently—clean up their data better and faster than any humans. And soon, machine learning will be a standard digital tool astronomers can pull out, without even needing to grasp the backend.

The Most-Improved Award

In their initial research, Schawinski and Zhang came across a kind of neural net that, in one demonstration, generated original pictures of cats after learning what “cat-ness” is from a set of feline images. The potential for astronomy “immediately became clear,” says Schawinski.
This feline-friendly system was called a GAN, or generative adversarial network. It pits two machine-brains—each its own neural network—against each other. To train the system, they gave one of the brains a purposefully noisy, blurry image of a galaxy and then an unmarred version of that same galaxy. That network did its best to fix the degraded galaxy, making it match the pristine one. The second network evaluated the differences between that fixed image and the originally OK one. In test mode, the GAN got a new set of scarred pictures and performed computational plastic surgery.
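For readers who want to see what that two-brain setup looks like in code, here is a minimal sketch in PyTorch. It is not Schawinski and Zhang's actual model, just the general adversarial recipe; the tiny layer counts, the 64x64 single-channel cutouts, and the extra pixel-wise loss term are illustrative assumptions.

```python
# Minimal sketch of the adversarial setup described above, NOT the authors'
# actual pipeline. Assumes paired (degraded, clean) 64x64 grayscale cutouts.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Takes a noisy, blurry cutout and tries to output a cleaned-up version."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores how much an image looks like a real, pristine galaxy."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 1),  # assumes 64x64 input cutouts
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(noisy, clean):
    """noisy, clean: (N, 1, 64, 64) tensors of paired degraded/pristine images."""
    real_label = torch.ones(clean.size(0), 1)
    fake_label = torch.zeros(clean.size(0), 1)

    # Discriminator turn: learn to tell pristine images from the generator's fixes.
    d_loss = bce(D(clean), real_label) + bce(D(G(noisy).detach()), fake_label)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator turn: try to fool the discriminator while also staying close
    # to the known clean image (an extra pixel-wise loss, an assumption here).
    restored = G(noisy)
    g_loss = bce(D(restored), real_label) + nn.functional.l1_loss(restored, clean)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```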
Once trained up, the GAN revealed details that telescopes weren’t sensitive enough to resolve, like star-forming spots. “I don’t want to use a cliché phrase like ‘holy grail,’” says Schawinski, “but in astronomy, you really want to take an image and make it better than it actually is.” When I asked the two scientists, who Skyped me together on Friday, what’s next for their silicon brains, Schawinski asked Zhang, “How much can we reveal?” which suggests to me they plan to take over the world. 
They went on to say, though, that they don’t exactly know, short-term (or at least they’re not telling). “Long-term, these machine learning techniques just become part of the arsenal scientists use,” says Schawinski, available in a kind of ready-to-eat form. “Scientists shouldn’t have to be experts on deep learning and have all the arcane knowledge that only five people in the world can grapple with.”
What Ghosts in Machines Are Good For 
Other astronomers have already used machine learning to do some of their work. A set of scientists at ETH Zurich, for example, used artificial intelligence to combat contamination in radio data. They trained a neural network to recognize and then mask the human-made radio interference that comes from satellites, airports, WiFi routers, microwaves, and malfunctioning electric blankets. Which is good, because the number of electronic devices will only increase, while black holes aren’t getting any brighter.

Neural networks need not limit themselves to new astronomical observations, though. Scientists have been dragging digital data from the sky for decades, and they can improve those old observations by plugging them into new pipelines. “With the same data people had before, we can learn more about the universe,” says Schawinski.

Machine learning also makes data less tedious to process. Much of astronomers’ work once involved the slog of searching for the same kinds of signals over and over—the blips of pulsars, the arms of galaxies, the spectra of star-forming regions. But when a machine learns, it figures out how to automate that slogging itself. The code itself decides that “galaxy type 16” exists and has spiral arms and then says, “Found another one!” As Alex Hocking, who developed one such system, put it, “the important thing about our algorithm is that we have not told the machine what to look for in the images, but instead taught it how to ‘see’” (a toy sketch of that idea appears at the end of this post).

A prototype neural network that pulsar astronomers developed in 2012 found 85 percent of the pulsars in a test dataset; a 2016 system flags fast radio burst candidates as human- or space-made, and as coming from a known source or from a mystery object. On the optical side, a computer brainweb called RobERt—Robotic Exoplanet Recognition—processes the chemical fingerprints in the light from exoplanet atmospheres, doing in seconds what once took scientists days or weeks. Even creepier, when the astronomers asked RobERt to “dream up” what water would look like, he, uh, did it.

The point, here, is that computers are better and faster at some parts of astronomy than astronomers are. And they will continue to change science, freeing up scientists’ time and wetware for more interesting problems than whether a signal is spurious or a galaxy is elliptical. “Artificial intelligence has broken into scientific research in a big way,” says Schawinski. “This is a beginning of an explosion. This is what excites me the most about this moment. We are witnessing and—a little bit—shaping the way we’re going to do scientific work in the future.”
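In case it helps to picture what “teaching a machine how to see” can look like in practice, here is the bare-bones sketch referenced above: unsupervised clustering of small galaxy cutouts, so that “types” emerge from the data rather than from human labels. It is not Hocking's published algorithm; the 32x32 grayscale cutouts, scikit-learn's KMeans, and the choice of 20 clusters are all illustrative assumptions.

```python
# Toy sketch of unsupervised grouping in the spirit of "not telling the
# machine what to look for." NOT Hocking's algorithm; KMeans on flattened
# 32x32 cutouts is an illustrative stand-in.
import numpy as np
from sklearn.cluster import KMeans

def cluster_cutouts(cutouts, n_types=20):
    """cutouts: array of shape (N, 32, 32), one small image per object."""
    # Flatten each cutout into a feature vector and normalize it, so the
    # clustering responds to shape rather than overall brightness.
    features = cutouts.reshape(len(cutouts), -1).astype(float)
    features -= features.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    features /= np.where(norms == 0, 1.0, norms)

    # Let the algorithm invent its own catalogue of "types" (clusters).
    model = KMeans(n_clusters=n_types, n_init=10, random_state=0)
    labels = model.fit_predict(features)
    return labels  # each cutout assigned to a discovered "galaxy type"

# Usage with made-up data: every cutout lands in one of the discovered types,
# the machine's version of shouting "Found another one!"
fake_sky = np.random.rand(500, 32, 32)
types = cluster_cutouts(fake_sky)
```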
