
Yesterday Nvidia officially dipped a toe into quantum computing with the launch of cuQuantum SDK, a development platform for simulating quantum circuits on GPU-accelerated systems. As Nvidia CEO Jensen Huang emphasized in his keynote, Nvidia doesn’t plan to build quantum computers, but thinks GPU-accelerated platforms are the best systems for quantum circuit and algorithm development and testing.

As a proof point, Nvidia reported it collaborated with Caltech to develop “a state-of-the-art quantum circuit simulator with cuQuantum running on NVIDIA A100 Tensor Core GPUs. It generated a sample from a full-circuit simulation of the Google Sycamore circuit in 9.3 minutes on Selene, a task that 18 months ago experts thought would take days using millions of CPU cores.”

There has been a steady proliferation of companies and services in the young quantum computing ecosystem; however, systems and software specifically targeting quantum simulation are a more recent development. IBM and a small handful of others have offered simulation for years but have tended to do so in conjunction with their particular quantum computer offerings. The move by cloud providers such as Azure and AWS to provide common portals with access to different quantum technology providers is stirring more activity.

In conjunction with the cuQuantum launch, Nvidia posted a blog by Chris Porter examining the state of quantum technology. It includes a quick summary guide by prominent physicist and quantum computing researcher Paul Benioff (see directly below).

Quantum computing’s prospects are tantalizing – enough so to mobilize a global race to develop practical quantum information-based systems – but most observers think practical application of these systems is still years away.

Here’s an excerpt from the Nvidia blog:

Predictions of when we reach so-called quantum computing supremacy — the time when quantum computers execute tasks classical ones can’t — are a matter of lively debate in the industry.

The good news is the world of AI and machine learning has put a spotlight on accelerators like GPUs, which can perform many of the types of operations quantum computers would calculate with qubits.

So, classical computers are already finding ways to host quantum simulations with GPUs today. For example, NVIDIA ran a leading-edge quantum simulation on Selene, our in-house AI supercomputer.

NVIDIA announced in the GTC keynote the cuQuantum SDK to speed quantum circuit simulations running on GPUs. Early work suggests cuQuantum will be able to deliver orders of magnitude speedups.

The SDK takes an agnostic approach, providing a choice of tools users can pick to best fit their approach. For example, the state vector method provides high-fidelity results, but its memory requirements grow exponentially with the number of qubits.

That creates a practical limit of roughly 50 qubits on today’s largest classical supercomputers. Nevertheless, we’ve seen great results using cuQuantum to accelerate quantum circuit simulations that use this method.
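The arithmetic behind that roughly-50-qubit ceiling is easy to check: a full state vector holds 2^n complex amplitudes, so memory doubles with every added qubit. A minimal sketch (the helper name is ours, not part of cuQuantum):

```python
# Memory needed to hold a full n-qubit state vector:
# 2**n complex amplitudes at 16 bytes each (double-precision complex).
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    gib = state_vector_bytes(n) / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 30 qubits fit in a single large GPU (16 GiB); 50 qubits need
# 16 GiB * 2**20, i.e. roughly 16 pebibytes, beyond any machine today.
```

Each extra qubit doubles the footprint, which is why tensor network methods (which trade exactness for memory) are offered alongside the state vector approach.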

Think of cuQuantum, says Nvidia, as CUDA for quantum computing. The SDK consists of “optimized libraries and tools for accelerating quantum computing workflows. Developers can use cuQuantum to speed up quantum circuit simulations based on state vector, density matrix, and tensor network methods by orders of magnitude.”
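To make the state vector method concrete, here is a minimal NumPy sketch of what such a simulator does at its core: the full state is an array of 2^n amplitudes, and applying a one-qubit gate is a small tensor contraction against the target qubit's axis. This is an illustration of the method only, not the cuQuantum API; the function name is ours.

```python
import numpy as np

def apply_gate(state, gate, target, n_qubits):
    """Apply a 2x2 single-qubit gate to qubit `target` of an n-qubit state vector."""
    # View the flat vector as an n-dimensional tensor with one axis per qubit,
    # contract the gate against the target axis, then restore axis order.
    psi = state.reshape([2] * n_qubits)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

n = 2
state = np.zeros(2 ** n, dtype=np.complex128)
state[0] = 1.0  # start in |00>
H = np.array([[1, 1], [1, -1]], dtype=np.complex128) / np.sqrt(2)
state = apply_gate(state, H, 0, n)  # Hadamard on qubit 0
print(np.round(np.abs(state) ** 2, 3))  # measurement probabilities
```

GPU-accelerated libraries implement the same contraction pattern with batched, fused kernels across all 2^n amplitudes, which is where the reported orders-of-magnitude speedups come from.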



Feature image: IBM Quantum computer insides
