Nvidia, whose quantum computing efforts thus far have focused on enabling hybrid classical-quantum workloads to run on its GPUs, and Xanadu, developer of the PennyLane quantum programming framework, this week touted new performance and scalability gains achieved by researchers using PennyLane with Nvidia's cuQuantum software on A100 GPU clusters.
A blog post from Nvidia described how researchers at the U.S. Department of Energy’s Brookhaven National Laboratory are preparing to run quantum computing simulations on a supercomputer for the first time. The Perlmutter supercomputer at the National Energy Research Scientific Computing Center (NERSC) is using the latest version of PennyLane, open-source software that builds on the cuQuantum software development kit.
Brookhaven researcher Shinjae Yoo, for example, will run programs related to physics and machine learning that are based on very large datasets across as many as 256 Nvidia A100 Tensor Core GPUs on Perlmutter to simulate about three dozen qubits, which the Nvidia post claims is “about twice the number of qubits most researchers can model these days.”
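To see why three dozen qubits demands a GPU cluster, consider the memory footprint of a full state-vector simulation: the state of n qubits is 2^n complex amplitudes, so memory doubles with every added qubit. A back-of-the-envelope sketch (our illustration, not from the blog post):

```python
import numpy as np

def statevector_bytes(n_qubits: int) -> int:
    """Memory needed to hold a full n-qubit state vector:
    2**n complex amplitudes, 16 bytes each as complex128."""
    return (2 ** n_qubits) * np.dtype(np.complex128).itemsize

# 36 qubits ("about three dozen") is 2**36 amplitudes * 16 bytes = 1 TiB,
# far beyond any single GPU's memory -- hence spreading the state vector
# across as many as 256 A100s.
print(statevector_bytes(36) / 2**40, "TiB")
```

At one more qubit the requirement jumps to 2 TiB, which is why simulated qubit counts climb so slowly even as hardware grows.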
“This opens the door to letting even my interns run some of the largest simulations — that’s why I’m so excited,” said Yoo, whose team has six projects using PennyLane in the pipeline.
Meanwhile, Katherine Klymko, who leads the quantum computing program at NERSC, stated that at least four other projects could produce results on the Perlmutter supercomputer this year using multi-node PennyLane, including efforts from NASA Ames and the University of Alabama.
“Researchers in my field of chemistry want to study molecular complexes too large for classical computers to handle,” she said. “Tools like PennyLane are key to let them extend what they can currently do classically to prepare for eventually running algorithms on large-scale quantum computers.”
Lee J. O’Riordan, a senior quantum software developer at Xanadu, stated in the blog post that PennyLane users want to see the number of simulated qubits continue to ramp up. “When we started work in 2022 with cuQuantum on a single GPU, we got 10x speedups pretty much across the board … we hope to scale by the end of the year to 1,000 nodes — that’s 4,000 GPUs — and that could mean simulating more than 40 qubits,” O’Riordan said.
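The jump from 256 to 4,000 GPUs buying only a handful of extra qubits follows from the same exponential scaling: each doubling of aggregate memory adds roughly one qubit. A rough upper-bound sketch (our illustration; it assumes the 80 GB A100 variant and ignores the workspace buffers distributed simulators need, so practical counts run lower):

```python
import math

A100_MEM_BYTES = 80 * 2**30   # assumption: 80 GB A100 variant
AMPLITUDE_BYTES = 16          # one complex128 amplitude

def max_qubits(n_gpus: int) -> int:
    """Largest n such that 2**n amplitudes fit in aggregate GPU memory.
    An upper bound only: real distributed simulators also need
    communication and workspace buffers."""
    total = n_gpus * A100_MEM_BYTES
    return int(math.log2(total // AMPLITUDE_BYTES))

# Roughly one extra qubit per doubling of GPU count.
for gpus in (256, 4000):
    print(gpus, "GPUs ->", max_qubits(gpus), "qubits (upper bound)")
```

Under these assumptions the bound works out to about 40 qubits for 256 GPUs and about 44 for 4,000, consistent with the mid-30s achieved in practice today and the "more than 40 qubits" O'Riordan projects.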
Image: Brookhaven researcher Shinjae Yoo
Dan O’Shea has covered telecommunications and related topics including semiconductors, sensors, retail systems, digital payments and quantum computing/technology for over 25 years.