Researchers Harness GANs for Super-Resolution of Space Simulations

Astrophysics researchers have long faced a tradeoff when simulating space: simulations could either be high-resolution or cover a large swath of the universe. With the help of generative adversarial networks (GANs), they can now accomplish both at once.

Carnegie Mellon University and University of California researchers developed a deep learning model that upgrades cosmological simulations from low to high resolution, allowing scientists to create a complex simulated universe within a day. 

These simulations are critical for researchers to unravel mysteries around galaxy formation, dark matter and dark energy. 

“Cosmological simulations need to cover a large volume for cosmological studies, while also requiring high resolution to resolve the small-scale galaxy formation physics, which would incur daunting computational challenges,” said Yueying Ni, a Ph.D. candidate at Carnegie Mellon. “Our technique can be used as a powerful and promising tool to match those two requirements simultaneously by modeling the small-scale galaxy formation physics in large cosmological volumes.”

The team’s GAN model can take full-scale, low-resolution models and turn them into super-resolution simulations with up to 512 times as many particles. Though it was trained on data from only small areas of space, the model was able to replicate large-scale structures seen only in massive simulations. 
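
For readers curious how such a setup might be wired together, below is a minimal, hypothetical sketch of GAN-based super-resolution on a 3D simulation volume, written in PyTorch. It is not the authors’ published architecture: the `Generator` and `Discriminator` classes, the layer choices, and the 8x-per-dimension upsampling (8³ = 512x more voxels, mirroring the 512x particle increase) are illustrative assumptions only.

```python
# Minimal sketch of GAN-based super-resolution for a 3D simulation field.
# Illustrative only: class names, layers, and the 8x-per-dimension upsampling
# (8^3 = 512x more voxels) are assumptions, not the authors' actual model.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Upsamples a low-res 3D field by 8x per dimension (three 2x stages)."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        layers = [nn.Conv3d(channels, width, 3, padding=1), nn.ReLU()]
        for _ in range(3):  # 2x * 2x * 2x = 8x overall
            layers += [
                nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),
                nn.Conv3d(width, width, 3, padding=1),
                nn.ReLU(),
            ]
        layers += [nn.Conv3d(width, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether a high-res field looks like a true high-res simulation."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, width, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(width, width * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(width * 2, 1),
        )

    def forward(self, x):
        return self.net(x)

# One adversarial training step on a (low-res, high-res) pair of sub-volumes.
gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

low_res = torch.randn(1, 1, 8, 8, 8)      # stand-in for a low-res patch
high_res = torch.randn(1, 1, 64, 64, 64)  # stand-in for the matching high-res patch

# Discriminator step: real high-res patches -> 1, generated patches -> 0.
fake = gen(low_res).detach()
loss_d = bce(disc(high_res), torch.ones(1, 1)) + bce(disc(fake), torch.zeros(1, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to fool the discriminator into scoring its output as real.
loss_g = bce(disc(gen(low_res)), torch.ones(1, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```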

Published in Proceedings of the National Academy of Sciences (PNAS), the project used hundreds of NVIDIA RTX GPUs on the Texas Advanced Computing Center’s Frontera system.

Using deep learning, the researchers could upscale the low-res model at left to the super-res model at right, capturing the same detail as a conventional high-res model (center) while using far fewer computational resources. Image credit: Y. Li et al./PNAS 2021.

While existing methods would take over three weeks on a single processing core to create a detailed simulation of 134 million particles, the GPU-accelerated deep learning approach does it in just 36 minutes. And for simulations 1,000 times as large, the new method cut simulation time from months on a dedicated supercomputer to 16 hours on a single GPU.
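
As a back-of-envelope check using only the figures quoted above, the implied speedup for the 134-million-particle case works out to roughly 840x:

```python
# Rough speedup implied by the article's figures: "over three weeks" on a
# single core vs. 36 minutes with the GPU-accelerated model (134M particles).
single_core_minutes = 3 * 7 * 24 * 60   # three weeks = 30,240 minutes
gpu_minutes = 36
print(single_core_minutes / gpu_minutes)  # ~840x; a lower bound, since "over" three weeks
```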

This acceleration can help scientists run more simulations to predict how the universe would look in different scenarios. 

“With our previous simulations, we showed that we could simulate the universe to discover new and interesting physics, but only at small or low-res scales,” said Rupert Croft, physics professor at Carnegie Mellon. “By incorporating machine learning, the technology is able to catch up with our ideas.”

Because the current neural networks focus only on how gravity moves dark matter around over time, other phenomena such as supernovae and black holes were left out of the simulations. The team next plans to extend its methods to capture the forces responsible for these events.

“The universe is the biggest data set there is,” said Scott Dodelson, head of the department of physics at Carnegie Mellon and director of the National Science Foundation Planning Institute for Artificial Intelligence in Physics. And “artificial intelligence is the key to understanding the universe and revealing new physics.” 

Read the full article in PNAS >> 

Main image from TNG Simulations
