June 26, 2020

Deep Learning Surrogate Models Could Hasten New Scientific Discoveries

By Jeremy Thomas

LLNL scientists have developed surrogate models, supported by neural networks, that can perform as well as, and in some ways better than, computationally expensive simulators. The surrogate models could lead to new insights into complicated high-energy-density physics problems such as inertial confinement fusion (ICF) experiments on the National Ignition Facility (NIF).

In a paper recently published by the Proceedings of the National Academy of Sciences (PNAS), the researchers describe the development of a deep learning-driven “Manifold & Cyclically Consistent” (MaCC) surrogate model incorporating a multi-modal neural network capable of quickly and accurately emulating complex scientific processes.

The research team applied the model to NIF ICF implosions, in which a computationally expensive numerical simulator is used to predict the energy yield of a target imploded by shockwaves produced by the facility’s high-energy laser. Comparing the results of the neural network-backed surrogate to the existing simulator, the researchers found the surrogate could adequately replicate the simulator’s output and that it significantly outperformed the current state of the art in surrogate models across a wide range of metrics.

“One major question we were dealing with was ‘how do we start using machine learning when you have a lot of different kinds of data?’” said LLNL computer scientist Rushil Anirudh, the paper’s lead author. “What we proposed was making the problem simpler by finding a common space where all these modalities, such as high pressure or temperature, live and do the analysis within that space. We’re saying that deep learning can capture the important relationships between all these different data sources and give us a compact representation for all of them.”

“The nice thing about doing all this is not only that it makes the analysis easier because now you have a common space for all these modalities, but we also showed that doing it this way actually gives you better models, better analysis, and objectively better results than with baseline approaches,” Anirudh added.
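For readers curious how a shared “common space” across modalities might look in code, below is a minimal, hypothetical sketch in PyTorch: two encoders map scalar diagnostics and an image-like diagnostic into one latent embedding, and decoders reconstruct each modality from that embedding. The network shapes, loss terms, and toy data are illustrative assumptions, not the published MaCC architecture.

```python
# Minimal sketch of a shared latent space for two data modalities.
# All module names, sizes, and toy data are illustrative assumptions,
# not the actual MaCC architecture or the NIF ICF dataset.
import torch
import torch.nn as nn

LATENT_DIM = 32

class ScalarEncoder(nn.Module):
    """Encodes scalar diagnostics (e.g. yield-like quantities) into the latent space."""
    def __init__(self, n_scalars: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_scalars, 64), nn.ReLU(),
            nn.Linear(64, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

class ImageEncoder(nn.Module):
    """Encodes an image-like diagnostic (e.g. a simulated x-ray image) into the same space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(16 * 8 * 8, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

# Decoders map the shared latent code back to each modality, so one code
# must explain all data sources at once.
scalar_decoder = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, 4))
image_decoder = nn.Sequential(nn.Linear(LATENT_DIM, 32 * 32), nn.Unflatten(1, (1, 32, 32)))

scalar_enc, image_enc = ScalarEncoder(n_scalars=4), ImageEncoder()
params = (list(scalar_enc.parameters()) + list(image_enc.parameters()) +
          list(scalar_decoder.parameters()) + list(image_decoder.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
mse = nn.MSELoss()

# Toy batch standing in for paired multi-modal simulator outputs.
scalars = torch.randn(8, 4)
images = torch.randn(8, 1, 32, 32)

for step in range(100):
    z_s, z_i = scalar_enc(scalars), image_enc(images)
    # Reconstruction from each code, plus an alignment term that pulls the
    # two modality embeddings toward a single shared representation.
    loss = (mse(scalar_decoder(z_s), scalars) +
            mse(image_decoder(z_i), images) +
            mse(z_s, z_i))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The alignment term is one simple way to force different data sources into a single compact representation; the sketch is only meant to make the “common space” idea concrete.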

Simulations that would normally take a numerical simulator a half-hour to run could be done just as well in a fraction of a second using neural networks, Anirudh explained. Perhaps even more valuable than saving compute time, explained computer scientist and co-author Timo Bremer, is the demonstrated ability of the deep learning surrogate model to analyze a large volume of complex, high-dimensional data in the ICF test case, which has implications for nuclear stockpile modernization efforts. The results indicate the approach could lead to new scientific discoveries and a completely novel class of techniques for performing and analyzing simulations, Bremer said.

This is particularly important at NIF, Bremer noted, because scientists do not yet fully understand why discrepancies exist between simulations and experimental results. In the future, deep learning models could enable new capabilities and provide a way for scientists to analyze the massive amounts of x-ray images, sensor data, and other information collected from diagnostics of each NIF shot — including data that has not been incorporated because there is too much of it to be analyzed by humans alone, Bremer said.

“This tool is providing us with a fundamentally different way of connecting simulations to experiments,” Bremer said. “By building these deep learning models, it allows us to directly predict the full complexity of the simulation data.

“Using this common latent space to correlate all these different modalities and different diagnostics, and using that space to connect experiments to simulations, is going to be extremely valuable, not just for this particular piece of science, but everything that tries to combine computational sciences with experimental sciences. This is something that could potentially lead to new insights in a way that’s just unfeasible right now.”

Comparing the predictions of the surrogate model to those of the simulator typically used for ICF experiments, the researchers found the MaCC surrogate was nearly indistinguishable from the simulator in both its errors and its expected energy yields, and it was more accurate than other types of surrogate models.

Researchers said the key to the MaCC model’s success was coupling forward and inverse models and training them on data together. The forward surrogate used input data to make predictions, and those predictions were run through an inverse model to estimate, from the outputs, what the inputs might have been. During training, the surrogate’s neural networks learned to be compatible with the inverse model, so errors did not accumulate as much as they otherwise would, Anirudh said.

“We were exploring this notion of self-consistency,” Anirudh explained. “We found that including the inverse problem into the surrogate modeling process is actually essential. It makes the problem more data-efficient and slightly more robust. When you put these two pieces together, the inverse model and the common space for all the modalities, you get this grand surrogate model that has all these other desirable properties. It is more efficient and better with less amount of data, and it’s also resilient to sampling artifacts.”
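As a rough illustration of the forward/inverse coupling and the self-consistency idea described above, here is a hypothetical PyTorch sketch: a forward surrogate predicts simulator outputs from inputs, an inverse network maps outputs back to inputs, and a cycle-consistency penalty ties them together during training. The architectures, loss weighting, and data are assumptions for illustration, not the actual MaCC implementation.

```python
# Minimal sketch of cycle-consistent forward/inverse surrogate training.
# The forward model maps simulator inputs to outputs; the inverse model maps
# outputs back to inputs. Both networks, sizes, and the toy data are
# illustrative assumptions, not the published MaCC implementation.
import torch
import torch.nn as nn

N_INPUTS, N_OUTPUTS = 5, 3  # e.g. design parameters -> diagnostic quantities

forward_model = nn.Sequential(nn.Linear(N_INPUTS, 64), nn.ReLU(), nn.Linear(64, N_OUTPUTS))
inverse_model = nn.Sequential(nn.Linear(N_OUTPUTS, 64), nn.ReLU(), nn.Linear(64, N_INPUTS))

opt = torch.optim.Adam(
    list(forward_model.parameters()) + list(inverse_model.parameters()), lr=1e-3)
mse = nn.MSELoss()

# Toy training pairs standing in for (simulator input, simulator output) data.
x = torch.randn(256, N_INPUTS)
y = torch.randn(256, N_OUTPUTS)

for step in range(200):
    y_pred = forward_model(x)          # surrogate prediction of the simulator output
    x_cycle = inverse_model(y_pred)    # inverse model recovers inputs from that prediction

    # Supervised fit in both directions plus a cycle-consistency term:
    # predictions the inverse model cannot map back to the true inputs are
    # penalized, echoing the "self-consistency" the researchers describe.
    loss = mse(y_pred, y) + mse(inverse_model(y), x) + mse(x_cycle, x)

    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the cycle term discourages the forward model from drifting toward predictions the inverse model cannot explain; the published MaCC formulation is more elaborate, but the self-consistency principle it illustrates is the same.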

The team said machine learning-based surrogates can speed up extremely complex calculations and compare varied data sources efficiently without requiring a scientist to scan tremendous amounts of data. As simulators become increasingly complex, producing even more data, such surrogate models will become a fundamental complementary tool for scientific discovery, the researchers said.

“The tools we built will be useful even as the simulation becomes more complex,” said computer scientist and co-author Jayaraman Thiagarajan. “Tomorrow we will get new computing power, bigger supercomputers, and more accurate calculations and these techniques will still hold true. We are surprisingly finding that you can produce very powerful emulators for the underlying complex simulations, and that’s where this becomes very important.

“As long as you can approximate the underlying science using a mathematical model,” Thiagarajan added, “the speed at which we can explore the space becomes really, really fast. That will hopefully help us in the future to make scientific discoveries even quicker and more effectively. We believe that even though we used it for this particular application, this approach is broadly applicable to the general umbrella of science.”

Researchers said the MaCC surrogate model could be adapted for any future change in modality, new types of sensors, or imaging techniques. Because of its flexibility and accuracy, the model and its deep learning approach — referred to at LLNL as “cognitive simulation” or simply CogSim — are already being applied to a number of other projects within the Laboratory and are transitioning to programmatic work, including efforts in uncertainty quantification, weapons physics design, magnetic confinement fusion, and other laser projects.

MaCC is a key product of the Lab’s broader Cognitive Simulation Director’s Initiative, led by principal investigator and LLNL physicist Brian Spears and funded through the Laboratory Directed Research and Development (LDRD) program. The initiative aims to advance a wide range of AI technologies and computational platforms specifically designed to improve scientific predictions by more effectively coupling precision simulation with experimental data. By focusing on both the needs in critical mission spaces and the opportunities presented by AI and compute advances, the initiative has helped further LLNL’s lead in using AI for science. 

“MaCC’s ability to combine multiple, scientifically relevant data streams opens the door for a wide range of new analyses,” Spears said. “It will allow us to extract information from our most valuable and mission-critical experimental and simulation data sets that has been inaccessible until now. Fully exploiting this information in concert with a new suite of related CogSim tools will lead quickly and directly to improved predictive models.”  

The research team has made their data publicly available at the Open Data Initiative.