The dawn of quantum advantage

Quantum computing is about to enter an important stage - the era of quantum advantage. The first claims of quantum advantage are emerging, and over the next few years, we expect researchers and developers to continue presenting compelling hypotheses for quantum advantages. In turn, the broader community will either disprove these hypotheses with cutting-edge techniques, or the claimed advantages will hold.

Put simply, quantum advantage means that a quantum computer can run a computation more accurately, cheaply, or efficiently than a classical computer. Between now and the end of 2026, we predict that the quantum community will have uncovered the first quantum advantages. But there's more to it than that.

We have already arrived at a place where quantum computing is a useful scientific tool, capable of performing computations that even the best exact classical algorithms can't. We and our partners are already conducting a range of experiments on quantum computers that are competitive with the leading classical approximation methods. At the same time, computing researchers are testing advantage claims with innovative new classical approaches.

So, how will we know if and when we've achieved quantum advantage?

We've published a new white paper with startup Pasqal that lays out the definition of quantum advantage, how we can scientifically validate claims, and potential ways to achieve it.

What is quantum advantage?

In our white paper, we define quantum advantage as the ability to execute a task on a quantum computer in a way that satisfies two essential criteria. First, the correctness of the quantum computer's output can be rigorously validated. Second, the task is performed with a quantum separation, demonstrating superior efficiency, cost-effectiveness, or accuracy over what is attainable with classical computation alone.

This definition has several implications. The first is that we don't expect quantum advantages to be achieved by quantum computers acting alone. Instead, they will emerge from use cases where we leverage quantum computers to augment a classical workflow. So, quantum advantage really means that "quantum plus classical" can outperform classical alone.

The ideal benchmark we strive for is an unconditional quantum separation - a clear, provable gap in algorithmic performance between quantum and classical computers. These separations are typically grounded in complexity-theoretic assumptions or derived from direct comparisons with the best-known classical algorithms. In certain cases, researchers have already identified the potential existence of such separations. However, most existing results do not demonstrate the exponential performance advantage that quantum computers are poised to deliver.

Given that definition, we expect the first quantum advantage claims to arise in one of three problem areas: sampling problems, variational problems, and calculating expectation values of observables.

At the same time, it can be challenging to rigorously confirm when an advantage has occurred unless the result can be checked classically or obeys the variational principle (see below), which is not always the case. Instead, we will need to rely on verifying each part of the computation on its own, which can be done using trusted methods of error detection and mitigation.

The requirement for rigorous validation implies that, in practice, research groups will hypothesize that they've demonstrated a quantum advantage and then attempt to validate the result. At the same time, the community will respond with attempts to support or falsify the hypothesis. This back-and-forth will continue until we reach a consensus. We also believe it positions variational problems and expectation value calculations as the likely sources of the first proven advantages, given our ability to validate these kinds of problems.

That leads to a critical point: quantum advantage won't be a single moment in time. Rather, we'll see a number of hypotheses tested until eventually the community settles that quantum advantages have been realized.

And that's only the start, because the search for quantum advantages doesn't end after the first ones are accepted. We must continue developing the algorithms that will bring useful quantum computing to the world. That search will continue even after the release of the first large-scale fault-tolerant quantum computers, in much the same way that computer scientists push the field of classical computing forward today.

So, let us be clear: By the end of next year, we predict that the community will coalesce around an agreement over the first demonstrations of quantum advantages. From that point forward, we will continue searching for new algorithms that extract further value from quantum computers.

A clear path to quantum advantage

IBM has long promoted a clear, incremental path to quantum advantage. We are driving innovations in quantum computing hardware to extract accurate, valuable outputs from quantum circuits. At the same time, domain experts and developers from IBM and the quantum community are searching for valuable quantum computing algorithms.

Back in 2023, IBM achieved a critical milestone along this path: quantum utility. Quantum utility demonstrated that a quantum computer could perform reliable computations at a scale beyond brute-force classical simulation of quantum circuits. Now, though, advantage means outperforming all classical methods.

Pushing forward along that path requires improving three key hardware and infrastructure elements: performant quantum hardware, infrastructure for running programs synchronized across classical and quantum resources, and methods for executing accurate quantum circuits. For that third piece, our partners are providing valuable help.

Near-term error mitigation to achieve long-term advantage

Fully realized fault-tolerant quantum computing will require implementing error correction - but in the meantime, a new set of techniques has arisen that can reduce or even eliminate the bias that noise in quantum circuits introduces into expectation value calculations. We call these error mitigation techniques - they "mitigate" the effects of noise. Error mitigation is crucial for achieving quantum advantage before the end of 2026, and is likely to play an important role in early fault-tolerant regimes.

Some of today's error mitigation techniques rely on classical post-processing, which comes with exponential computational overhead. For near-term demonstrations, however, they scale far more favorably than classical simulation methods, and that scaling will continue to improve as hardware improves.
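
To make this concrete, the sketch below shows how expectation-value error mitigation is typically switched on through the resilience options of the Qiskit Runtime Estimator primitive, using zero-noise extrapolation as the example technique. The two-qubit circuit and observable are placeholders, and the exact option names depend on the qiskit-ibm-runtime version, so treat this as a minimal sketch rather than a prescribed workflow.

    from qiskit import QuantumCircuit
    from qiskit.quantum_info import SparsePauliOp
    from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
    from qiskit_ibm_runtime import QiskitRuntimeService, EstimatorV2 as Estimator

    # Placeholder problem: a small entangling circuit and a Pauli observable.
    circuit = QuantumCircuit(2)
    circuit.h(0)
    circuit.cx(0, 1)
    observable = SparsePauliOp("ZZ")

    # Choose a backend and map the circuit to its native gates and layout.
    service = QiskitRuntimeService()
    backend = service.least_busy(operational=True, simulator=False)
    pm = generate_preset_pass_manager(optimization_level=1, backend=backend)
    isa_circuit = pm.run(circuit)
    isa_observable = observable.apply_layout(isa_circuit.layout)

    # Enable error mitigation for the expectation value: run the circuit at
    # amplified noise levels and extrapolate back to the zero-noise limit.
    estimator = Estimator(mode=backend)
    estimator.options.resilience.zne_mitigation = True
    estimator.options.resilience.zne.noise_factors = (1, 3, 5)
    estimator.options.resilience.zne.extrapolator = "exponential"

    job = estimator.run([(isa_circuit, isa_observable)])
    print(job.result()[0].data.evs)  # mitigated estimate of <ZZ>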

A number of our partners are building powerful error mitigation methods, which can be accessed as a service via our Qiskit Functions Catalog. For example, Algorithmiq's Tensor Network Error Mitigation (TEM) circuit function manages noise in software post-processing while lowering quantum processing unit usage. As we move along the IBM Quantum roadmap toward increasingly large quantum systems, incorporating error mitigation services such as Algorithmiq's TEM function demonstrates the use of classical HPC to extend the reach of current quantum computers - an architecture we call quantum-centric supercomputing. We expect that techniques like TEM will help the research community discover quantum algorithms that unlock new computational territory and accelerate the push toward quantum advantage.
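
For readers curious how such a service is invoked, the snippet below sketches loading a function from the Qiskit Functions Catalog with the qiskit-ibm-catalog client, reusing the circuit, observable, and backend from the previous sketch. The function identifier "algorithmiq/tem" and the run arguments follow the catalog's general pattern but are assumptions here; Algorithmiq's documentation defines the exact interface.

    from qiskit_ibm_catalog import QiskitFunctionsCatalog

    # Authenticate with a previously saved IBM Quantum account/token.
    catalog = QiskitFunctionsCatalog()

    # Load Algorithmiq's TEM function (identifier assumed for illustration).
    tem = catalog.load("algorithmiq/tem")

    # Submit a circuit/observable pair; argument names may differ in practice.
    job = tem.run(
        pubs=[(isa_circuit, isa_observable)],
        backend_name=backend.name,
    )
    print(job.result())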

Another example of error mitigation's success is Qedma's Quantum Error Suppression and Error Mitigation (QESEM) circuit function - also available in the Qiskit Functions Catalog. QESEM combines quantum error suppression and error mitigation to reduce hardware-level errors, providing a resource-efficient service that improves the reliability of quantum computation. For users, this means better accuracy when executing utility-scale circuits and beyond, enabling researchers to unlock greater value from near-term quantum computation.

These are just two examples among many that highlight how improving and simplifying the use of error mitigation techniques is the key to realizing useful quantum computing in the near term. As the capabilities of the Qiskit Functions Catalog expand, so too will the ways error mitigation can help solve more complex problems between now and 2029.

With errors accounted for, we must now run algorithms on these systems that demonstrate a quantum separation.

The seeds of advantage

Researchers using quantum computers are already uncovering potential paths toward quantum advantage. These algorithms use quantum computers as a computational tool for research into applications beyond what classical computing can achieve alone.

In other words, our users are sowing the seeds of quantum advantage: drafting and presenting the first hypotheses of advantage to the community.

Just recently, researchers at the startup Kipu Quantum claimed a runtime quantum advantage, where their quantum algorithm ran faster than special-purpose classical solvers on dense, higher-order unconstrained binary optimization (HUBO) problems. They identified instances that were challenging for methods like CPLEX and simulated annealing, and found that their BF-DCQO quantum algorithm, running on ever-improving quantum hardware, returned approximate solutions faster. They expect their runtimes to soon be orders of magnitude faster as hardware continues to advance. In addition, the Kipu team tested BF-DCQO against quantum annealing and LR-QAOA, and found that it outperformed both alternatives in accuracy, runtime, and resources - specifically, qubit overhead for quantum annealing and circuit depth for LR-QAOA - on the tested instances.
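
For context, a HUBO objective generalizes the more familiar quadratic (QUBO) form by allowing cost terms that couple three or more binary variables at once; written generically, with coefficients set by the problem at hand:

    \[
    C(x) = \sum_i a_i x_i + \sum_{i<j} b_{ij} x_i x_j + \sum_{i<j<k} c_{ijk} x_i x_j x_k + \cdots,
    \qquad x_i \in \{0, 1\}.
    \]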

Startup Q-CTRL has also benchmarked IBM Quantum systems against classical, quantum annealing, and trapped-ion technologies for optimization - unlocking a more than 4x increase in solvable problem size and outperforming commonly used classical local solvers. In a recent collaboration with Network Rail on a scheduling solution, Q-CTRL delivered the largest demonstration to date of constrained quantum optimization, accelerating the path to practical quantum advantage. In another demonstration, they generated a 75-qubit entangled state, achieved with the help of a new method for performing the entangling CNOT gate plus a lightweight error-detection scheme. The result was an impressive feat of long-range quantum entanglement and computational gains (85%+ fidelity over 40 qubits).

Meanwhile, one family of algorithms combines quantum and classical computing resources to return solutions with accuracy comparable to the leading classical approximation methods for chemistry and materials science. These algorithms follow the variational principle, which lets us judge a candidate solution by how close it comes to the minimum (or maximum) of a target function, such as a system's energy.

Variational principle: a promising direction for advantage

Techniques for solving problems that obey the variational principle are bringing practical quantum advantage tantalizingly close in the fields of chemistry and materials science. Solutions to these problems can be ranked and compared against classical methods. Therefore, if a quantum solution offers better accuracy or a lower energy, that indicates a possible quantum advantage that can be rigorously validated.
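
Concretely, the variational principle guarantees that the energy expectation value of any trial state can never fall below the true ground-state energy E0, which is what makes a lower reported energy an unambiguous sign of a better solution:

    \[
    E[\psi] = \frac{\langle \psi | H | \psi \rangle}{\langle \psi | \psi \rangle} \geq E_0
    \quad \text{for every trial state } |\psi\rangle .
    \]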

One such technique is sample-based quantum diagonalization (SQD), which aims to find a simpler way to express a quantum system's Hamiltonian - the mathematical object used to calculate the total energy of the system. Starting with an educated first guess, a quantum computer and classical computer work together to find an appropriate subspace that the Hamiltonian can be projected onto. It is as if the quantum computer is taking a picture of the Hamiltonian, and the subspace is the photographic film.
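
To make the projection idea concrete, here is a toy numpy sketch of the classical step: a handful of sampled basis states define a subspace, the Hamiltonian is restricted to that subspace, and the much smaller restricted matrix is diagonalized. Real SQD works with molecular Hamiltonians, configurations sampled from a quantum circuit, and additional recovery steps; the random matrix and hard-coded samples below are purely illustrative.

    import numpy as np

    # Toy stand-in for a Hamiltonian on 3 qubits (an 8 x 8 symmetric matrix).
    # In practice the Hamiltonian is far too large to diagonalize directly.
    rng = np.random.default_rng(7)
    A = rng.normal(size=(8, 8))
    H = (A + A.T) / 2

    # Bitstrings sampled from the quantum computer pick out the subspace
    # (here: indices of computational basis states, chosen for illustration).
    samples = [0b000, 0b011, 0b101, 0b110]

    # Project H onto the sampled subspace and diagonalize the small matrix.
    H_sub = H[np.ix_(samples, samples)]
    estimate = np.linalg.eigvalsh(H_sub)[0]

    # By the variational principle, the subspace estimate upper-bounds the
    # true ground-state energy of H.
    print("subspace estimate: ", estimate)
    print("exact ground state:", np.linalg.eigvalsh(H)[0])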

Last month, RIKEN and IBM demonstrated the use of SQD to simulate molecular nitrogen and two species of iron-sulfur clusters, [2Fe-2S] and [4Fe-4S]. Their experiments used up to 77 qubits of the IBM Quantum Heron processor, running up to 3,500 two-qubit gates, alongside the Fugaku supercomputer to simulate the molecules. These quantum-centric computations went beyond the limit of exact classical simulability, operating at what we call "utility scale." Crucially, the expectation value of the Hamiltonian emerges as an accuracy metric, allowing researchers to rank the outputs of SQD against classical-only methods, as published in Science Advances.

RIKEN isn't the only group employing variational methods in the search for advantage. Researchers at the University of Tokyo are pursuing a method similar to SQD called Krylov quantum diagonalization, or KQD. As published in Nature Communications, the IBM and University of Tokyo teams showed how KQD similarly begins by creating a subspace on a quantum computer and projecting a Hamiltonian of interest onto it - in this case, one more appropriate for materials science calculations. KQD also has a powerful benefit: given some assumptions about the spacing of solutions, it is guaranteed to converge to the best answer for a wide range of initial guesses.
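
Schematically, KQD builds its subspace from time-evolved copies of an initial state and then hands a small generalized eigenvalue problem to the classical computer; under the stated assumptions about how the solutions are spaced, the lowest eigenvalue converges to the ground-state energy. With a time step Δt and subspace dimension D as free parameters:

    \[
    |\psi_k\rangle = e^{-\mathrm{i} k \Delta t H} |\psi_0\rangle, \qquad
    \tilde{H}_{jk} = \langle \psi_j | H | \psi_k \rangle, \qquad
    \tilde{S}_{jk} = \langle \psi_j | \psi_k \rangle, \qquad j, k = 0, \dots, D-1,
    \]
    \[
    \tilde{H}\,\vec{c} = E\,\tilde{S}\,\vec{c} \quad \Longrightarrow \quad \min E \approx E_0 .
    \]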

Advantage is just the start

This is a marathon, not a sprint. While IBM continues to release more performant quantum computers, it is essential that the quantum community keeps developing new algorithms - all in the name of creating the applications that will bring useful quantum computers to the world.

We believe that realizing advantage will also require the community to adopt a set of best practices. These are: first, defining standardized benchmarking problems with the help of classical experts to ensure the problems are relevant and fair; second, publishing detailed methodologies and datasets so that results can be reproduced; and third, maintaining open-access leaderboards to track improving computational performance.

We hope the community will work together to create and adopt these best practices while continuing on their explorations to realize advantage and useful quantum computing. There's never been a better time to get started.

Explore quantum computing and the potential for quantum advantage by getting started with IBM Quantum Learning.
