State-Of-The-Art Supercomputer Will Help Us Tackle Nuclear Fusion


Dr. Alfredo Carpineti

Senior Staff Writer & Space Correspondent

Aug 28, 2018, 18:13 UTC

3D rendering of the interior of a Tokamak. Efman/Shutterstock

Nuclear fusion promises unlimited clean energy by harnessing the physics at the core of stars. Progress toward such a power plant has been steady, but several difficulties remain in actually taming fusion. To tackle some of them, we will soon get the help of a state-of-the-art supercomputer.

“Accelerated Deep Learning Discovery in Fusion Energy Science”, led by the US Department of Energy’s (DOE) Princeton Plasma Physics Laboratory, is one of 10 Early Science Projects on data science and machine learning for the Aurora supercomputer, which is being developed at the DOE’s Argonne National Laboratory. Aurora is expected to be operational by 2021, performing 1 billion billion calculations per second – 50 to 100 times faster than the most powerful supercomputers today.


“Our research will utilize capabilities to accelerate progress that can only come from the deep learning form of artificial intelligence,” project lead Professor William Tang, from Princeton University, said in a statement.

Deep learning is a computational technique that allows computers to be trained to solve complex problems quickly and accurately. The goal of the project is to work out how to minimize and even control disruptions in the flow of plasma, a serious problem in tokamak fusion reactors.

Doughnut-shaped tokamaks are one of the two main fusion reactor designs. The plasma – the hot, electrically charged state of matter – is kept in the reactor with magnetic fields. The plasma is heated to the point where its nuclei begin fusing, forming heavier elements. The goal is to achieve a self-sustaining reaction. This is expected to be realized in the ITER project, the international reactor currently under construction in France, which aims to demonstrate that fusion energy is a practical way to produce electricity.


ITER will require the software to predict disruptions with 95 percent accuracy, at least 30 milliseconds before they occur – a challenging requirement, but one that this computing effort will try to meet. The software will study data from disruptions in smaller reactors and learn from models and theoretical simulations. It is currently being tested on “smaller” supercomputers, but only Aurora is expected to provide the detailed resolution needed to meet ITER’s requirements.
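To make the lead-time requirement concrete, here is a deliberately simplified sketch. It is not the project's actual software – the real system trains deep neural networks on many diagnostic channels – but a toy, in which a single synthetic plasma signal grows unstable before a "disruption" and a plain threshold detector must raise an alarm at least 30 milliseconds in advance. All names, numbers, and the signal model are illustrative assumptions.

```python
# Toy illustration (NOT the real predictor): one synthetic signal,
# one threshold. One time step stands in for one millisecond.

def synthetic_signal(n_steps, disruption_step):
    """A stable plasma proxy whose instability ramps up 100 steps
    before the disruption (an assumed, idealized precursor)."""
    signal = []
    for t in range(n_steps):
        value = 1.0
        if t >= disruption_step - 100:
            value += 0.05 * (t - (disruption_step - 100))
        signal.append(value)
    return signal

def first_alarm(signal, threshold=1.5):
    """Return the first time step at which the signal crosses
    the alarm threshold, or None if it never does."""
    for t, value in enumerate(signal):
        if value > threshold:
            return t
    return None

# Disruption at t = 1000 ms in a 1200 ms shot.
sig = synthetic_signal(1200, 1000)
alarm = first_alarm(sig)
lead_time_ms = 1000 - alarm
print(alarm, lead_time_ms)  # alarm fires with tens of ms to spare
```

In this toy the precursor is obvious and noise-free, so a threshold suffices; the hard part in practice is that real precursors are subtle and buried in noisy, high-dimensional diagnostics, which is why the project turns to deep learning and exascale hardware.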