Oct. 29, 2024

ICECap Looks to Use Exascale ICF Simulations to Pioneer Digital Design

By Jeremy Thomas, [email protected], (925) 422-5539

A groundbreaking multidisciplinary team of LLNL researchers is combining the power of exascale computing with AI, advanced workflows, and graphics processor acceleration to advance scientific innovation and revolutionize the digital designs of high-yield inertial confinement fusion (ICF) experiments.

The project, called ICECap (Inertial Confinement on El Capitan), is a transformative approach to ICF design optimization targeted primarily at El Capitan, the first exascale computer of the National Nuclear Security Administration (NNSA).

Sited at LLNL and scheduled to deploy later this year, El Capitan is projected to exceed two exaflops (two quintillion calculations per second) of peak computing performance in service of the NNSA Tri-Labs (LLNL, Los Alamos National Laboratory, and Sandia National Laboratories).

At its core, ICECap aims to discover the next generation of robust, high-yield ICF designs, expanding the possibilities of computational science and shaping the future of plasma science through emerging technologies. ICF has implications for NNSA’s stockpile stewardship mission, as well as for future viable fusion power plants.

Described in a paper published in the journal Physics of Plasmas, ICECap represents a leap forward in high performance computing (HPC), setting the stage for an era where “hero runs”—large-scale multiphysics simulations—become routine.

With a focus on data-driven approaches to digital design and computational modeling, ICECap team members said the project has the potential not only to accelerate science but to transform the way scientists conduct research, driving advancements across disciplines and offering new solutions to previously intractable problems.

“With ICECap, we're trying to see how we can leverage AI to really change the way we do scientific discovery,” said ICECap principal investigator Luc Peterson, an LLNL physicist. “We have supercomputers that can do fantastic simulations, but how can we use AI to help us take advantage of them to find new things? We’re doing this on El Capitan because we think we’re at the point where we can actually do both breadth and depth in computing, so you can search lots of parameter spaces to find what you’re looking for and do it all in extremely high fidelity.”

ICECap researchers report on the development of a sophisticated prototype workflow that streamlines the optimization process for numerous intricate design parameters at once.

On El Capitan’s predecessor machines, ICECap researchers applied the AI-based workflow to ICF experiments carried out at LLNL’s National Ignition Facility (NIF) and the intricate design of a hohlraum—the housing for the deuterium-tritium targets used in ICF science.

Using only a few hundred simulations, the machine learning (ML)-based workflow optimized the target design across 17 parameters simultaneously, a feat previously thought unattainable with so few simulations. The results demonstrated the framework’s ability to make design choices that align with physics intuition in an automated and efficient manner, according to researchers.
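The article does not detail the optimizer’s internals, but the general idea of surrogate-assisted design search (spend cheap model predictions to decide where to spend expensive simulations) can be illustrated with a deliberately minimal sketch. Everything below is hypothetical: `toy_simulation` stands in for an expensive ICF calculation, and the nearest-neighbor surrogate is far cruder than the ML models the team actually uses.

```python
import random

DIM = 17      # number of design parameters, as in the ICECap demonstration
BUDGET = 80   # a modest toy budget; the real workflow used a few hundred runs

def toy_simulation(x):
    """Stand-in for an expensive simulation: 'yield' peaks at 0.7 per axis."""
    return -sum((xi - 0.7) ** 2 for xi in x)

def nearest_prediction(x, archive):
    """Crude surrogate: predict the value of the closest evaluated design."""
    def dist(entry):
        pt, _ = entry
        return sum((a - b) ** 2 for a, b in zip(x, pt))
    return min(archive, key=dist)[1]

def optimize(sim, dim=DIM, budget=BUDGET, pool=64, seed=0):
    rng = random.Random(seed)
    rand_point = lambda: [rng.random() for _ in range(dim)]
    # Start with a small random sample of the design space.
    archive = [(p, sim(p)) for p in (rand_point() for _ in range(16))]
    while len(archive) < budget:
        # Rank many cheap candidates with the surrogate, then spend a
        # real (expensive) simulation only on the most promising one.
        candidates = [rand_point() for _ in range(pool)]
        best = max(candidates, key=lambda c: nearest_prediction(c, archive))
        archive.append((best, sim(best)))
    return max(archive, key=lambda e: e[1])

best_x, best_y = optimize(toy_simulation)
```

The key budget discipline is the same as in the article: the loop never evaluates more than a fixed number of “simulations,” no matter how many candidate designs it considers.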

Finding Echoes of Simulations in the Real World

By successfully tackling issues such as the thickness of the ablator (the outer surface of the ICF target capsule), NIF’s laser power, and target manufacturability, ICECap represents a significant technological leap in automated ICF design, researchers said. The project pushes the boundaries of what is achievable in computational modeling and design optimization, paving the way for further breakthroughs in scientific modeling and simulation. According to Peterson, when compared to real-world experiments and design modifications, the framework did exactly what they would have expected.

“We took a lot of these parameters and we put them all in and said, ‘OK, re-optimize this; see what you can do,’ ” Peterson said. “Our algorithms changed all the parameters at once, which is not something that a human would do. It changed the timing of the second (laser pulse) and thickened the ablator, which had been done independently on real-world campaigns including the landmark December 2022 fusion ignition shot.

“So, these two independent ways humans had come up with to improve the design, our algorithm was able to pick up and change simultaneously, coming up with a blend of the two approaches. And our algorithm could walk a fine line—because if you're changing even one (parameter), you have to change several other secondary parameters or the design won’t work,” he said.

While ICECap is being developed in the context of next-generation ICF science, researchers said the project’s goal is to “build a general framework for all design problems.” Peterson and the ICECap team don’t see the framework as replacing scientists and engineers, but instead “augmenting” them, freeing up their time to ask higher-level questions.

By automating optimization of high-dimensional, multifidelity systems, researchers could use the ICECap framework to efficiently explore and advance intricate design parameters, thus harnessing the full computational power of exascale systems and encouraging more in-depth and detailed studies of a broad range of scientific phenomena, team members explained.

“The methodology works with one-dimensional simulations,” Peterson said. “But we're hoping with El Capitan, we’ll have the computational power to do 2D and 3D simulations that can capture those other important higher-order physics—we’re going to be able to treat our numerical experiments really like playgrounds to figure out how our best representations of reality respond to certain scenarios.

“The cool thing about El Capitan is that we won’t be limited to a single static image. We can pan the camera around, we can zoom in and out, we can look around and see what things really look like. We can ask questions we haven’t even thought of before, because we’re at the point where you can get extremely realistic representations of the world, and we're going to learn a lot,” Peterson said.

ICECap researchers said the project is not without its challenges: ensuring seamless operation means navigating potentially millions of large-scale simulations, massive data volumes, system performance monitoring, and the sheer scale of computational power in an exascale environment such as El Capitan’s.

To address these challenges, the ICECap team devised innovative mitigation strategies, including smoke tests with simulated data and runs on El Capitan’s early-access hardware to map out performance scalability and fine-tune the system infrastructure ahead of the machine’s deployment.
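The smoke-test idea can be illustrated with a minimal sketch (not the team’s actual harness, and all names here are invented): if the workflow takes its simulation function as a parameter, a cheap deterministic mock can exercise the full pipeline, including dispatch, sanity checks, and ranking, in milliseconds.

```python
def run_workflow(simulate, designs):
    """End-to-end pipeline: run each design, validate records, pick the best.
    `simulate` is injected so a cheap mock can stand in for the real code."""
    results = []
    for d in designs:
        out = simulate(d)
        # Basic sanity checks on every record the pipeline produces.
        assert "yield" in out and out["yield"] >= 0, f"bad record for {d}"
        results.append((d, out["yield"]))
    return max(results, key=lambda r: r[1])

def mock_simulation(design):
    """Smoke-test stand-in: instant, deterministic 'simulated data'."""
    return {"yield": sum(design) % 10}

def smoke_test():
    # Exercises the full plumbing with no real physics code involved.
    designs = [[i, i + 1] for i in range(5)]
    best_design, best_yield = run_workflow(mock_simulation, designs)
    assert best_yield == max(sum(d) % 10 for d in designs)
    return best_design

best = smoke_test()
```

Because the mock is deterministic, the same test can later be pointed at early-access hardware to confirm that the workflow plumbing behaves identically there.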

A One-Size-Fits-All Approach

ICECap leverages several high energy density physics codes optimized for graphics processing units (GPUs), such as MARBL and HYDRA, as well as tools for running large-scale ML-based workflows, including Merlin, which was originally applied to ICF problems but is now being used to research COVID-19, drug design, and a host of physics problems. The project also takes a modular approach, which allows for flexibility and portability in addressing a wide range of challenging science and engineering problems.

Still image from a 3D MARBL simulation of the Aug. 8, 2021 “Burning Plasma” shot at the National Ignition Facility. This calculation consists of 161 million high-order quadrature points and ran on 16 nodes of RZAdams, using AMD’s MI300A architecture. Credit: Rob Rieben

“As a relatively new code, MARBL has been steadily ramping up in GPU-accelerated multiphysics capabilities for modelling ICF capsules,” said computational scientist Rob Rieben, who leads the MARBL team. “ICECap will be an ideal opportunity to stress-test the latest features of the code on challenging 2D and 3D ICF inverse design simulations at an unprecedented scale.”

According to Joe Koning, a computational physicist on the HYDRA ICF code project, ICECap aims to “leverage GPU-enabled simulation codes, machine learning, workflows, and optimization algorithms to realize the goal of discovery of robust high-yield ICF designs on exascale computers. And HYDRA, the principal code used for 2D/3D ICF design, is an essential component of the project.”

The HYDRA team also made major improvements in the software stack to help make ICECap possible, from “extending the user interface and ALE (Arbitrary Lagrangian-Eulerian) flow models to facilitate automation to extending key physics modules to run on MI300 GPUs,” said Chris Schroeder, another computational physicist on the HYDRA team. “I'm thrilled to work at this nexus of cutting-edge science and computing.”

Peterson said the team has been constructing the ICECap framework “with an eye toward modularity, so that researchers aren’t tied to a very specific code or code base or problem.” This modularity will enable scientists to adapt the technologies to different tasks and complex systems by swapping out simulation models and instrumenting them for optimization.
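One common way to achieve this kind of modularity is to have the optimizer depend only on a small interface, so any simulation code with a thin adapter can be swapped in. The sketch below uses invented names and a toy model; it is not ICECap’s actual API.

```python
from typing import Protocol

class SimulationModel(Protocol):
    """Minimal contract any simulation code must satisfy to plug in."""
    def run(self, params: dict) -> dict: ...

class ToyCapsuleModel:
    """Illustrative cheap model; a real adapter would wrap a code
    like MARBL or HYDRA behind the same run() signature."""
    def run(self, params: dict) -> dict:
        thickness = params.get("ablator_thickness", 1.0)
        return {"yield": max(0.0, 10.0 - abs(thickness - 3.0))}

def pick_best(model: SimulationModel, candidates: list[dict]) -> dict:
    """The optimizer sees only the interface, never the code behind it,
    so swapping simulation models requires no changes here."""
    return max(candidates, key=lambda p: model.run(p)["yield"])

best = pick_best(ToyCapsuleModel(),
                 [{"ablator_thickness": t} for t in (1.0, 2.5, 3.0, 4.0)])
```

Instrumenting a new model for optimization then amounts to writing one adapter class, which is what makes the framework portable across problems.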

Laying the groundwork for a smooth transition to the exascale age, Peterson said the team hopes to “shift the conversation towards what large-scale computing is for and usher in an era where the heroic simulations of yesterday are the commonplace simulations of tomorrow.

“Our goal was to have a low bar for folks to plug in, so we had to come up with a framework that allows two different codes to talk to each other,” he said. “We've been anticipating modularity and broad use for years and have been building the tools to reach this point with that in mind.

“ICECap is designed for ICF and fusion, but there are all sorts of other applications, like advanced manufacturing and automated laboratories, where in the future you will have an AI system proposing designs, and then you’ll send it to your robotic laboratory, and it constructs that and then sends it back. This is all one step on that path—it’s an exciting time.”

Funding for ICECap comes from LLNL’s Weapon Simulation and Computing program and leverages technology developed in a Laboratory Directed Research and Development (LDRD) project applying cognitive simulation to design, led by Peterson.

Other ICECap team members include Tim Bender, Robert Blake, Nai-Yuan Chiang, M. Giselle Fernández-Godino, Bryan Garcia, Andrew Gillette, Brian Gunnarson, Cooper Hansen, Judy Hill, Kelli Humbird, Bogdan Kustowski, Irene Kim, Eugene Kur, Steve Langer, Ryan Lee, Katie Lewis, Alister Maguire, Jose Milovich, Yamen Mubarka, Renee Olson, Jay Salmonson, Brian Spears, Jayaraman Thiagarajan, Ryan Tran, Jingyi Wang, and Chris Weber.

More Information:

“Toward digital design at the exascale: An overview of project ICECap,” Physics of Plasmas, June 18, 2024

“Designing for Ignition: Precise Changes Yield Historic Results,” NIF & Photon Science News, March 1, 2023

“High Performance Computing, AI, and Cognitive Simulation Helped LLNL Conquer Fusion Ignition,” NIF & Photon Science News, June 21, 2023

Follow us on X: @lasers_llnl