High Performance Computing Group

Multiscale Acceleration: Powering Future Discoveries in High Energy Physics

CSI’s HPC Group is collaborating on a five-year DOE SciDAC-5 project led by Brookhaven Lab’s Physics Department that includes external partners at other national labs (Berkeley, Argonne, and Fermilab) and universities (Columbia, Boston, MIT, Utah, Illinois, and Buffalo). The effort is developing novel algorithms and techniques to address some of the most challenging problems in scaling lattice quantum chromodynamics (QCD) simulations for deployment on new and future DOE leadership-class computing facilities. Underpinning this project is a close collaboration with applied mathematicians from the SciDAC-5 FASTMath/PETSc institute.

Lattice QCD is a computational technique for studying the dynamics of quarks and gluons in the hadronic regime, where the theory is strongly coupled and perturbative methods break down. Instead, the physics in a small four-dimensional volume, a few protons across, is discretized on a grid and simulated directly on supercomputers using Monte Carlo techniques. Lattice simulations provide vital inputs to experimental searches for new physics, for example in the search for B-physics anomalies at the Large Hadron Collider and Belle II, the muon magnetic moment experiments at Fermilab, and upcoming neutrino experiments, such as the Deep Underground Neutrino Experiment (DUNE).
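For readers who want a concrete picture, the sketch below shows the Monte Carlo idea on a deliberately simple stand-in: a two-dimensional scalar field sampled with the Metropolis algorithm. Real lattice QCD instead evolves SU(3) gauge links on a four-dimensional grid, but the propose/accept logic is the same in spirit; all parameter values here are illustrative.

```python
import numpy as np

# Toy illustration of lattice Monte Carlo (not lattice QCD itself):
# a 2D scalar phi^4 field sampled with the Metropolis algorithm.
# Parameters and lattice size are illustrative only.

rng = np.random.default_rng(0)
L = 16               # lattice extent (sites per dimension)
m2, lam = -0.5, 1.0  # bare mass squared and quartic coupling

def site_action(val, nbr_sum):
    """Terms of the lattice action that involve one site with value `val`."""
    return (2.0 + 0.5 * m2) * val**2 - val * nbr_sum + lam / 24.0 * val**4

def sweep(phi, step=1.0):
    """One Metropolis sweep over the lattice; returns the acceptance rate."""
    accepted = 0
    for x in range(L):
        for y in range(L):
            nbr_sum = (phi[(x + 1) % L, y] + phi[(x - 1) % L, y]
                       + phi[x, (y + 1) % L] + phi[x, (y - 1) % L])
            old = phi[x, y]
            new = old + rng.uniform(-step, step)        # propose a local change
            dS = site_action(new, nbr_sum) - site_action(old, nbr_sum)
            if dS <= 0 or rng.random() < np.exp(-dS):   # Metropolis accept/reject
                phi[x, y] = new
                accepted += 1
    return accepted / L**2

phi = rng.normal(size=(L, L))
for _ in range(200):          # thermalize the Markov chain, then measure
    rate = sweep(phi)
print(f"acceptance ~ {rate:.2f}, <phi^2> = {np.mean(phi**2):.3f}")
```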

The push for more precise lattice results requires larger volumes and finer lattice spacings. Unfortunately, existing lattice algorithms typically suffer from scalability issues that dramatically increase the computational cost of such simulations. To address these issues, the project is divided into three work packages. The first focuses on so-called “multiscale solvers,” which aim to exploit QCD properties at different length scales to accelerate the inversion of the Dirac matrix, typically the most computationally expensive component of a lattice calculation. The second seeks to overcome the “critical slowing down” of the Markov-chain Monte Carlo sampling at fine lattice spacings by accelerating the evolution of its slow modes through careful, physically inspired modifications of the update algorithm. The third focuses on using “domain decomposition” to reduce the Monte Carlo procedure’s reliance on the supercomputer’s network, since current trends show local compute capability growing much faster than network performance.
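The multiscale-solver idea can be made concrete with a two-grid cycle on a toy problem. The sketch below assumes a one-dimensional Laplacian in place of the Dirac matrix and plain linear interpolation in place of the adaptively constructed coarse spaces used in lattice multigrid; it shows the mechanism by which cheap smoothing on the fine grid and an exact solve on a much smaller coarse grid combine to remove error at both length scales.

```python
import numpy as np

# Minimal two-grid sketch of a multiscale solver, with a toy 1D Laplacian
# standing in for the Dirac matrix. Production lattice-QCD multigrid builds
# its coarse space adaptively from near-null vectors of the Dirac operator;
# simple linear interpolation is enough to show the idea here.

def jacobi(A, x, b, n, omega=2.0 / 3.0):
    """Damped Jacobi smoothing: cheap, but only removes rough (short-range) error."""
    d_inv = 1.0 / np.diag(A)
    for _ in range(n):
        x = x + omega * d_inv * (b - A @ x)
    return x

def two_grid(A, b, x, P, n_smooth=3):
    """One V-cycle: smooth, correct the remaining smooth error on the coarse grid, smooth."""
    x = jacobi(A, x, b, n_smooth)        # pre-smoothing
    r = b - A @ x                        # residual now dominated by smooth modes
    A_c = P.T @ A @ P                    # Galerkin coarse-grid operator
    e_c = np.linalg.solve(A_c, P.T @ r)  # exact solve on the small coarse problem
    x = x + P @ e_c                      # interpolate the correction back
    return jacobi(A, x, b, n_smooth)     # post-smoothing

N = 64
A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)  # 1D Laplacian stencil
P = np.zeros((N, N // 2))                             # linear interpolation
for j in range(N // 2):
    P[2 * j, j] = 1.0
    P[2 * j + 1, j] = 0.5
    if j + 1 < N // 2:
        P[2 * j + 1, j + 1] = 0.5

b, x = np.ones(N), np.zeros(N)
for cycle in range(5):
    x = two_grid(A, b, x, P)
    print(f"cycle {cycle}: residual = {np.linalg.norm(b - A @ x):.2e}")
```

Each cycle cuts the residual by a roughly constant factor independent of the grid size, which is the scalability property the multiscale work package is after.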

This project is vital to ensuring that lattice simulations can continue to help address some of the most important questions in high energy physics. The CSI staff involved are primarily focused on developing the “domain decomposed hybrid Monte Carlo” (DDHMC) algorithm in the “Grid” software package. They have demonstrated the correctness of the implementation and created a novel variant capable of handling single fermion flavors. Projections based on the current implementation suggest that relative speed-ups of roughly 2x are attainable on machines with weaker networks, with the potential for 3.5x or more after further optimization.
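As a rough illustration of the domain-decomposition idea only, and emphatically not the Grid implementation of DDHMC (which decomposes four-dimensional gauge and fermion fields and alternates block layouts between trajectories so every site is eventually updated), the sketch below runs hybrid Monte Carlo trajectories for the same toy scalar field as above while keeping thin lines of boundary sites frozen. The force at every active site then depends only on data inside its block plus the fixed boundary, which is why the expensive inner loop needs no network traffic on a parallel machine.

```python
import numpy as np

# Toy sketch of the domain-decomposition idea behind DDHMC, reusing the
# 2D scalar field from the first sketch. NOT the Grid implementation:
# a complete sampler would also shift the block pattern between
# trajectories so frozen sites get updated. This serial sketch computes
# forces globally for brevity; the point is that active sites never read
# data from across a frozen boundary line.

rng = np.random.default_rng(1)
L, m2, lam = 16, -0.5, 1.0

def action(phi):
    kin = sum(0.5 * (np.roll(phi, -1, d) - phi) ** 2 for d in (0, 1))
    return np.sum(kin + 0.5 * m2 * phi**2 + lam / 24.0 * phi**4)

def force(phi):
    """-dS/dphi for the scalar field."""
    lap = sum(np.roll(phi, s, d) for s in (1, -1) for d in (0, 1)) - 4 * phi
    return lap - m2 * phi - lam / 6.0 * phi**3

def hmc_block(phi, active, n_steps=20, dt=0.05):
    """One HMC trajectory that moves only sites where `active` is True."""
    p = rng.normal(size=phi.shape) * active    # momenta only on active sites
    start = phi.copy()
    h0 = 0.5 * np.sum(p**2) + action(phi)
    p = p + 0.5 * dt * force(phi) * active     # leapfrog: half step in momentum
    for _ in range(n_steps - 1):
        phi = phi + dt * p * active
        p = p + dt * force(phi) * active
    phi = phi + dt * p * active
    p = p + 0.5 * dt * force(phi) * active
    h1 = 0.5 * np.sum(p**2) + action(phi)
    if rng.random() < np.exp(h0 - h1):         # accept/reject keeps the sampling exact
        return phi
    return start

# Freeze one-site-thick lines every 8 sites, leaving active block interiors
# whose molecular-dynamics updates are local to their block.
active = np.ones((L, L), dtype=bool)
active[::8, :] = False
active[:, ::8] = False

phi = rng.normal(size=(L, L))
for _ in range(100):
    phi = hmc_block(phi, active)
print(f"<phi^2> = {np.mean(phi**2):.3f}")
```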