In December 2024, Google unveiled its Willow quantum chip, reporting an exponential reduction in logical error rates as its error-correcting code grew, a milestone that had been theorised but never demonstrated at that scale. Headlines declared that a “quantum speed limit” had been broken. Yet roughly a year earlier, a neutral-atom team had quietly demonstrated a computation with 48 logical qubits and entangling-gate fidelities above 99%. Both results are real, important, and profoundly easy to misinterpret. For anyone building technology strategy, the question isn’t whether quantum computing is advancing; it’s what those advances actually mean for the cost, timeline, and feasibility of commercial applications. This article walks through the physics behind error correction’s big year, separates near-term milestones from marketing, and gives you the specific metrics to track so you don’t over-commit to a technology that’s still a decade from general-purpose utility.
Classical computers use transistors that either conduct electricity or don’t — a binary state with almost no ambiguity. Quantum bits, or qubits, exploit superposition and entanglement, which means they’re extraordinarily sensitive to environmental noise. A stray photon, a fluctuation in temperature, or even the vibration of a passing truck can flip a qubit’s state. Without some way to detect and fix these errors, a quantum computer cannot sustain a calculation long enough to outperform a classical machine on any useful problem.
Error correction works by encoding a single logical qubit across many physical qubits. The logical qubit is more stable because repeated parity checks across the group reveal where an error has occurred without directly measuring, and thereby destroying, the encoded state. But the overhead is brutal: most current schemes require anywhere from 50 to 1,000 physical qubits for every logical qubit, depending on the error rate of the underlying hardware. Shrinking physical error rates, or more precisely improving the ratio of logical fidelity to qubit count, is the singular engineering challenge that determines whether quantum computers become tools or curiosities.
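That overhead arithmetic can be made concrete. The sketch below uses the standard surface-code heuristic that the logical error rate falls roughly as (p/p_th)^((d+1)/2) with code distance d; the threshold, prefactor, and example numbers here are illustrative assumptions, not any vendor’s measured figures.

```python
def surface_code_overhead(p_phys, p_logical_target, p_threshold=1e-2, prefactor=0.1):
    """Estimate physical qubits per logical qubit for a surface code.

    Uses the common heuristic p_L ~ A * (p/p_th)^((d+1)/2) for odd code
    distance d. The threshold and prefactor are illustrative placeholders,
    not measured values for any specific device.
    """
    ratio = p_phys / p_threshold
    if ratio >= 1:
        raise ValueError("physical error rate above threshold: more qubits make things worse")
    d = 3
    while prefactor * ratio ** ((d + 1) / 2) > p_logical_target:
        d += 2  # surface-code distances are conventionally odd
    # A distance-d (rotated) surface code uses d^2 data qubits
    # plus d^2 - 1 measurement qubits.
    return d, 2 * d * d - 1

# 0.1% physical error, aiming for roughly 2 logical errors per million operations:
d, n_phys = surface_code_overhead(1e-3, 2e-6)
```

With these toy parameters the calculation lands on a distance-9 code and 161 physical qubits per logical qubit, comfortably inside the 50-to-1,000 range quoted above; the point is how steeply the answer depends on the physical error rate.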
Willow’s headline result was that, as the team increased the distance of their surface-code error-correcting grid from 3 to 5 to 7, the logical error rate dropped exponentially, the signature of operating “below threshold.” Below threshold means that adding more physical qubits actually reduces logical errors rather than accumulating them. This was the first time any quantum processor had shown such scaling in a surface-code implementation at that size. Before Willow, the dominant skepticism was that engineering imperfections would prevent this scaling from ever working in practice.
However, the demonstration used roughly that same 7x7 grid — about 100 physical qubits — to encode just one logical qubit. Running a useful algorithm requires thousands of logical qubits, and each of those needs 50 to 100 physical qubits at current error rates. A machine with 1,000 logical qubits would therefore need 50,000 to 100,000 physical qubits, along with classical control electronics, microwave cabling, and cryogenic cooling that doesn’t yet exist at that scale. Willow is a proof of principle, not a product.
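The below-threshold behaviour is often summarised by a single suppression factor, usually written Λ: each time the code distance grows by two, the logical error rate is divided by Λ. A toy sketch of that scaling (the starting error rate and Λ = 2 are illustrative round numbers, not Google’s exact figures):

```python
def logical_error(eps_d3, lam, d):
    """Logical error rate at odd code distance d >= 3, given the rate at
    distance 3 and a below-threshold suppression factor lam > 1.
    Illustrative model: each distance step of 2 divides the error by lam."""
    assert d >= 3 and d % 2 == 1, "distance must be odd and at least 3"
    return eps_d3 / lam ** ((d - 3) // 2)

# With a 0.3% error at distance 3 and lam = 2, errors halve at each step:
rates = [logical_error(3e-3, 2.0, d) for d in (3, 5, 7, 9)]
```

The practical significance is that when Λ is above 1, reaching a far lower logical error rate costs only a modest increase in distance; when Λ is at or below 1, no amount of extra hardware helps.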
While Google and IBM pursue superconducting qubits, a smaller cohort of companies, including QuEra, Atom Computing, and Pasqal, use individually trapped atoms held in optical tweezers. Neutral atoms hold their quantum state longer than superconducting qubits because, carrying no net charge, they are far less sensitive to stray electric fields. They also have a more natural path to high qubit counts: the same laser array that traps one atom can trap a thousand with modest engineering additions.
In December 2023, a Harvard-led team working with QuEra published results showing 48 logical qubits with entangling-gate fidelities around 99.5%. That is more logical qubits than any superconducting system has demonstrated, by a wide margin. But neutral atoms are slower than superconducting qubits (two-qubit gates take microseconds instead of nanoseconds), and the error rates, while low, are not yet in the regime where fault-tolerant quantum computation becomes efficient. The trade-off is that neutral atoms may reach 100 logical qubits within two years, which would be enough to explore classically intractable problems in materials science and optimisation. For specific use cases, like finding lower-energy configurations of molecules or solving certain graph problems, a 100-logical-qubit machine could already offer a genuine advantage, but only on problems carefully selected to match the machine’s topology and gate set.
Faster gate times matter for algorithms that need deep circuits, such as Shor’s factoring algorithm or quantum simulation of highly entangled systems. Neutral-atom systems currently struggle with circuit depths beyond a few hundred gates before errors accumulate. Additionally, the optical tweezer arrays require periodic re-cooling of the atoms, which introduces idle time. The community’s working estimate is that neutral atoms will be competitive with superconductors for medium-depth circuits by 2028, but they will not replace them for the very deep circuits needed for cryptographically relevant factoring.
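The speed trade-off above can be put into rough numbers. This sketch assumes errors simply add along a straight-line circuit (a crude first approximation) and uses illustrative gate times and error rates, not the measured specs of any real machine:

```python
def circuit_budget(gate_time_s, gate_error, total_error_budget=0.5):
    """Rough depth and wall-clock limits for a sequential circuit.

    max_depth: how many gates fit before accumulated error exceeds the
    budget, assuming errors add linearly (a crude first approximation).
    """
    max_depth = round(total_error_budget / gate_error)
    wall_clock_s = max_depth * gate_time_s
    return max_depth, wall_clock_s

# Illustrative orders of magnitude only:
# superconducting: ~50 ns gates at ~0.1% error
# neutral atoms:   ~2 us gates at ~0.5% error
sc_depth, sc_time = circuit_budget(50e-9, 1e-3)
na_depth, na_time = circuit_budget(2e-6, 5e-3)
```

Under these assumed numbers the superconducting machine supports a few hundred gates in tens of microseconds, while the neutral-atom machine manages a shallower circuit over a longer wall-clock time, which is exactly why deep circuits such as Shor’s algorithm favour faster gates.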
Qubit count is the easiest number for companies to tout and the most misleading. A machine with 1,000 physical qubits but a gate error rate of 1% cannot do anything useful; a machine with 50 logical qubits and a gate error rate of 0.01% might be able to. Three numbers actually indicate progress: the logical gate fidelity, which measures how reliably error-corrected qubits perform operations; the ratio of physical to logical qubits, which measures how expensive each logical qubit is in hardware; and the growth in logical qubit count over time, which measures whether the architecture is actually scaling.
Amazon Braket, Microsoft Azure Quantum, and Google Cloud all offer quantum computing as a service, but their strategies diverge sharply. Amazon Braket has taken an agnostic approach, offering access to five different hardware providers, including IonQ, Rigetti, and QuEra; the bet is that no single modality will dominate and that customers want to test algorithms across all of them. Microsoft’s own hardware programme, by contrast, is bet exclusively on topological qubits, a still-undemonstrated architecture that would theoretically be error-resistant by design, though Azure Quantum also resells access to third-party machines such as IonQ’s and Quantinuum’s. Microsoft’s first-party offering today is a set of simulation tools and a promise of hardware by 2027.
Google’s approach is the most aggressive: a full-stack machine built on its own superconducting chips, error-correction protocol, and compiler, with a stated goal of a commercially relevant quantum computer by 2030. The risk is that if neutral atoms or trapped ions reach high logical-qubit counts sooner, Google’s enormous investment in superconducting fabrication may not be adaptable. The safe takeaway for a cloud customer is that none of these ecosystems is interchangeable. Code written for a superconducting machine will not run efficiently on a neutral-atom one, and the cost of switching hardware platforms later could be high, both in engineering hours and in lost algorithmic optimisation.
For most enterprise readers, the honest answer is not before 2030. A machine with 1,000 logical qubits and logical gate errors below 0.1% would be able to run molecular simulations that are classically intractable for specific problems — like modelling the active site of a catalyst with more than 50 atoms. The economic value in pharmaceuticals and battery materials alone could be in the billions. But that machine does not exist yet. The physical-qubit counts, control electronics, and cryogenic infrastructure all need to scale by roughly two orders of magnitude from where we are today.
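The “two orders of magnitude” gap implies a rough timeline. A back-of-envelope sketch, assuming (optimistically) that usable physical-qubit counts double every year; the doubling assumption is mine for illustration, not an industry commitment:

```python
import math

def years_to_scale(current_qubits, required_qubits, doubling_time_years=1.0):
    """Years until required_qubits is reached if capacity doubles every
    doubling_time_years. Purely illustrative compound-growth arithmetic."""
    return math.log2(required_qubits / current_qubits) * doubling_time_years

# ~1,000 well-controlled physical qubits today vs ~100,000 needed: a 100x gap.
years = years_to_scale(1_000, 100_000)
```

Closing a 100x gap at a yearly doubling takes log2(100), a bit under seven years, which is one way to see why “not before 2030” is the honest answer even under optimistic growth assumptions.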
What does exist today is the ability to rent time on machines with 20–50 physical qubits and run small demonstration circuits. These are useful for learning, for validating error-mitigation techniques, and for building the pipeline of skills your team will need when the hardware matures. If you are a CTO or VP of Engineering, the correct strategy is to allocate a small R&D budget, think $50,000 to $200,000 per year, to experiment with current hardware. Use that time to understand which of your problems map to quantum circuits and which do not. The worst mistake is not investing too early; it is investing in the wrong modality and building a decade of expertise on a platform that does not scale.
When a vendor claims a breakthrough, ask for two numbers: the logical gate fidelity and the ratio of physical to logical qubits in the published result. If they cannot supply both, treat the announcement as a marketing statement rather than a technical advance. Also ask whether the result was achieved on a representative hardware system or on a specially tuned subset of qubits hand-picked for low error. The first class of results is impressive; the second class is a laboratory artefact.
Beware of claims about “quantum advantage” or “quantum supremacy” that do not specify the problem and the classical baseline. In 2019, Google claimed quantum supremacy with a random-circuit sampling task that took its Sycamore processor 200 seconds and that, Google estimated, would take a supercomputer 10,000 years to simulate. By 2024, improved classical algorithms had cut that simulation to a matter of hours. Classical computing is not standing still. Any real quantum advantage must be measured against the best classical algorithm running on the best classical hardware available at that moment, not the naive simulation from three years ago.
Finally, ask about the vendor’s roadmap for logical qubit count over the next three years. If the roadmap is linear — adding 5–10 logical qubits per year — the technology is unlikely to reach commercial scale in the current decade. If the roadmap is exponential, ask for the specific physical error-rate improvements that justify that curve. A vendor that cannot explain the physics behind their own timeline is not ready to sell you a production system.
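One quick way to apply that test is to look at the ratios between successive years’ logical-qubit targets: a roughly constant ratio means exponential growth, a roughly constant difference means linear. A small heuristic sketch, with invented roadmap numbers for illustration:

```python
def roadmap_shape(counts, tolerance=0.25):
    """Classify yearly logical-qubit targets as 'exponential'
    (near-constant ratio), 'linear' (near-constant difference),
    or 'unclear'. A rough heuristic for illustration only."""
    ratios = [b / a for a, b in zip(counts, counts[1:])]
    diffs = [b - a for a, b in zip(counts, counts[1:])]
    ratio_spread = (max(ratios) - min(ratios)) / max(ratios)
    diff_spread = (max(diffs) - min(diffs)) / max(max(diffs), 1)
    if ratio_spread < tolerance and min(ratios) > 1.5:
        return "exponential"
    if diff_spread < tolerance:
        return "linear"
    return "unclear"

# Hypothetical vendor roadmaps:
shape_a = roadmap_shape([4, 8, 16, 32])    # doubling every year
shape_b = roadmap_shape([10, 15, 20, 25])  # adding 5 per year
```

The first roadmap should clear the exponential test; the second is exactly the linear pattern the paragraph above warns about. Either way, the follow-up question is the same: what physical error-rate improvement justifies the curve?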
Quantum computing is progressing faster than it was five years ago, but the distance to a useful, fault-tolerant machine remains large. Error correction has moved from theoretical proposal to convincing lab demonstration, and that is a genuine achievement. But the gap between a roughly 100-qubit error-corrected grid and a 100,000-qubit data centre is not just a matter of engineering; it spans open questions in how qubits interact, how errors propagate, and how classical control scales. Watch the logical gate fidelity and the physical-to-logical ratio. Ignore the raw qubit count. Invest in skills, not hardware. And wait for vendors to show you a reproducible, size-scaling result on a system they can sell to someone else before you commit your cloud budget to a quantum subscription.