And yet quantum computing continues to progress

Pissing away my life in a haze of doomscrolling, sporadic attempts to “parent” two rebellious kids, and now endless conversations about AI safety, I’m liable to forget for days that I’m still mostly known (such as I am) as a quantum computing theorist, and this blog is still mostly known as a quantum computing blog. Maybe it’s just that I spent a quarter-century on quantum computing theory. As an ADHD sufferer, anything could bore me after that much time, even one of the a-priori most exciting things in the world.

It’s like, some young whippersnappers proved another monster 80-page theorem that I’ll barely understand tying together the quantum PCP conjecture, area laws, and Gibbs states? Another company has a quantum software platform, or hardware platform, and they’ve issued a press release about it? Another hypester claimed that QC will revolutionize optimization and machine learning, based on the usual rogues’ gallery of quantum heuristic algorithms that don’t seem to outperform classical heuristics? Another skeptic claimed that scalable quantum computing is a pipe dream—mashing together the real reasons why it’s difficult with basic misunderstandings of the fault-tolerance theorem? In each case, I’ll agree with you that I probably should get up, sit at my laptop, and blog about it (it’s hard to blog with two thumbs), but as likely as not I won’t.


And yet quantum computing continues to progress. In December we saw Harvard and QuEra announce a small net gain from error-detection in neutral atoms, and accuracy that increased with the use of larger error-correcting codes. Today, a collaboration between Microsoft and Quantinuum has announced what might be the first demonstration of error-corrected two-qubit entangling gates with substantially lower error than the same gates applied to the bare physical qubits. (This is still at the stage where you need to be super-careful in how you phrase every such sentence—experts should chime in if I’ve already fallen short; I take responsibility for any failures to error-correct this post.)

You can read the research paper here, or I’ll tell you the details to the best of my understanding (I’m grateful to Microsoft’s Krysta Svore and others from the collaboration for briefing me by Zoom). The collaboration used a trapped-ion system with 32 fully-connected physical qubits (meaning, the qubits can be shuttled around a track so that any qubit can directly interact with any other). One can apply an entangling gate to any pair of qubits with ~99.8% fidelity.

What did they do with this system? They created up to 4 logical encoded qubits, using the Steane code and other CSS codes. Using logical CNOT gates, they then created logical Bell pairs — i.e., (|00⟩+|11⟩)/√2 — and verified that they did this.

That’s in the version of their experiment that uses “preselection but not postselection.” In other words, they have to try many times until they prepare the logical initial states correctly—as with magic state factories. But once they do successfully prepare the initial states, there’s no further cheating involving postselection (i.e., throwing away bad results): they just apply the logical CNOT gates, measure, and see what they got.

For me personally, that’s the headline result. But then they do various further experiments to “spike the football.” For one thing, they show that when they do allow postselected measurement outcomes, the decrease in the effective error rate can be much much larger, as large as 800x. That allows them (again, under postselection!) to demonstrate up to two rounds of error syndrome extraction and correction while still seeing a net gain, or three rounds albeit with unclear gain. The other thing they demonstrate is teleportation of fault-tolerant qubits—so, a little fancier than just preparing an encoded Bell pair and then measuring it.

They don’t try to do (e.g.) a quantum supremacy demonstration with their encoded qubits, like Harvard/QuEra did—they don’t have nearly enough qubits for that. But this is already extremely cool, and it sets a new bar in quantum error-correction experiments for others to meet or exceed (superconducting, neutral atom, and photonics people, that means you!). And I wasn’t expecting it! Indeed, I’m so far behind the times that I still imagined Microsoft as committed to a strategy of “topological qubits or bust.” While Microsoft is still pursuing the topological approach, their strategy has clearly pivoted over the last few years towards “whatever works.”

Anyway, huge congratulations to the teams at Microsoft and Quantinuum for their accomplishment!


Stepping back, what is the state of experimental quantum computing, 42 years after Feynman’s lecture, 30 years after Shor’s algorithm, 25 years after I entered the field, 5 years after Google’s supremacy experiment? There’s one narrative that quantum computing is already being used to solve practical problems that couldn’t be solved otherwise (look at all the hundreds of startups! they couldn’t possibly exist without providing real value, could they?). Then there’s another narrative that quantum computing has been exposed as a fraud, an impossibility, a pipe dream. Both narratives seem utterly disconnected from the reality on the ground.

If you want to track the experimental reality, my one-sentence piece of advice would be to focus relentlessly on the fidelity with which experimenters can apply a single physical 2-qubit gate. When I entered the field in the late 1990s, ~50% woud’ve been an impressive fidelity. At some point it became ~90%. With Google’s supremacy experiment in 2019, we saw 1000 gates applied to 53 qubits, each gate with ~99.5% fidelity. Now, in superconducting, trapped ions, and neutral atoms alike, we’re routinely seeing ~99.8% fidelities, which is what made possible (for example) the new Microsoft/Quantinuum result. The best fidelities I’ve heard reported this year are more like ~99.9%.

Meanwhile, on paper, it looks like known methods for quantum fault-tolerance, for example using the surface code, should start to become practical once you have 2-qubit fidelities around ~99.99%—i.e., one more “9” from where we are now. And then there should “merely” be the practical difficulty of maintaining that 99.99% fidelity while you scale up to millions or hundreds of millions of physical qubits!

What I’m trying to say is: this looks a pretty good trajectory! It looks like, if we plot the infidelity on a log scale, the experimentalists have already gone three-quarters of the distance. It now looks like it would be a surprise if we couldn’t have hundreds of fault-tolerant qubits and millions of gates on them within the next decade, if we really wanted that—like something unexpected would have to go wrong to prevent it.

Wouldn’t be ironic if all that was true, but it will simply matter much less than we hoped in the 1990s? Either just because the set of problems for which a quantum computing is useful has remained stubbornly more specialized than the world wants it to be (for more on that, see the entire past 20 years of this blog) … or because advances in classical AI render what was always quantum computing’s most important killer app, to the simulation of quantum chemistry and materials, increasingly superfluous (as AlphaFold may have already done for protein folding) … or simply because civilization descends further into barbarism, or the unaligned AGIs start taking over, and we all have bigger things to worry about than fault-tolerant quantum computing.

But, you know, maybe fault-tolerant quantum computing will not only work, but matter—and its use to design better batteries and drugs and photovoltaic cells and so on will pass from science-fiction fantasy to quotidian reality so quickly that much of the world (weary from the hypesters crying wolf too many times?) will barely even notice it when it finally happens, just like what we saw with Large Language Models a few years ago. That would be worth getting out of bed for.

148 Responses to “And yet quantum computing continues to progress”

  1. cwillu Says:

    If this blog was a science fiction glowfic (would that make it a sparkfic?), this would be about the point where it starts getting exciting: a society on the cusp of developing quantum computers, strong artificial intelligence, commercially practical spaceflight, _and_ world ending warfare!

  2. Scott Says:

    cwillu #1: Except, why couldn’t it have all happened when I was younger and full of energy to blog about it? 🙂

  3. Peter Morgan Says:

    I haven’t commented here in a long while, but may I suggest that an algorithm is an algorithm, not a quantum or classical algorithm, and a computer on which said algorithm runs is a computer, not a quantum or classical computer?
    What I think people argue about is whether a given piece of hardware that is used as a computer to run an algorithm to obtain an accurate enough result can be described classically or can only be described quantumly. In the end, however, isn’t it only such classical observables as the cost of building or buying the hardware and of the electricity that is needed to run a given algorithm and how long it takes to run the algorithm to obtain an accurate enough result that matters to whoever pays for the computer? That may sound like Copenhagen redux, but I suppose interpretation doesn’t matter to a well-defined algorithm’s results.
    I suggest that quantum descriptions of computers are more capable than classical descriptions of computers because classical physics does not include measurement incompatibility. We can use the Poison bracket to add noncommutativity into classical physics, however, giving us a classical generalized probability theory and Hilbert space formalism that we can use to describe different measurement ‘contexts’ just as we do in quantum physics. That change gives quantum and classical a much more level playing field.
    On this view, ‘quantum computing’ introduces new kinds of stochastic co-processors, very much as a GPU does for floating point operations (typically with reduced but fit for purpose accuracy), leveraging specific physical properties to give better cost per result. Of course some of the physics that has been and will be found will be useful for computation, so of course there will be successes, but they’re not quantum or classical, they’re applied physics.
    That was fun. Thanks for the blog, as always.

  4. Scott Says:

    Peter Morgan #3: This feels like semantic game-playing. Sure, “every computer is a quantum computer,” in the sense that everything is quantum at a small enough scale. But we all know that the term “quantum computer” refers to a machine that can apply pretty much any desired sequence of local unitary transformations to a collection of qubits, while maintaining those qubits’ quantum coherence. So it’s “just another kind of computer,” but only in the sense that the first spacecraft were “just another type of flying device,” and therefore on a continuum with helicopters and planes.

  5. Viktor Dukhovni Says:

    Peter Morgan #3: Surely it matters that there appear to be problems requiring exponential space/time on classical computers, but not on (be it still hypothetical) quantum computers.
    If/when scalable general-purpose quantum computers do appear, they could significantly expand the landscape of what is realistically computable. So even for sceptics, the separate terminology matters. Quantum polynomial-time algorithms aren’t “just algorithms”, they actually require a quantum computer to run in non-exponential time, that’s why we’re not using them to solve real problems yet, the hardware does not exist, and may not for some time, that next “9” in fidelity may not be so easy to reach, or may not be sufficient?

  6. Amanda Says:

    On the question of parenting and rebellious kids:

    Ten years ago you wrote

    “There’s a movement these days that encourages parents to ban kids from using touch-screen devices, fearful that too much screen time will distract them from the real world. To which I reply: for better or worse, this is the real world that our kids will grow up into.”

    I’m curious how your perspective has evolved. Do you think too much screen time / early exposure to iPad was a mistake? As a parent how do you deal with screen time now?

  7. Bob Pseudonym Says:

    Hi Scott. I just wanted to say that I’ve benefited a lot from your contributions to the intellectual commons. Your writing has changed how I think about a lot of things. I know from reading the beginning of this post and your ‘Open Letter to Anti-Zionists on Twitter’ that this has been a difficult time for you. I’m sorry about that, and I admire how you seem to be dealing with it in a humane and intelligent way. I wouldn’t normally try to engage a public figure in a personal way (I don’t know you personally, and you don’t know me at all), but at the risk coming across as some parasocial weirdo, I just wanted to thank you for making my life more interesting and to extend some sympathy at this difficult moment. So, for whatever it’s worth, thanks and I wish you all the best.

  8. Scott Says:

    Amanda #6: Yes, I do think allowing massive screen time has been a mistake, particularly with our younger one (age 7), who’s now addicted to playing Minecraft and watching Minecraft videos, throws tantrums when his iPad is taken away, and constantly schemes and wheedles to get more iPad time.

    And yet I feel bad taking it away. For one thing, he spends most of his waking hours at school, soccer, chess, gymnastics, swim lessons, Hebrew school, etc., away from any screens. So from his perspective, why shouldn’t he get to unwind with an iPad in his relatively rare downtime? For another, he can (and does) point out that I spend most of my waking hours staring at screens, and am therefore a hypocrite. And for a third—the real crux!—if he doesn’t have the iPad, he demands that his parents continuously entertain him instead, e.g. by playing board games (which he loves). And we do lots of that, but we can’t do it all the time!

    I more and more feel like the real solution was to have lived in a neighborhood full of kids playing outside, exploring, creating, etc. with no adult supervision, so our son could just join them and that would be even better than the iPad. Alas, that seems to have become increasingly rare in the US.

  9. Amanda Says:

    So if you’re struggling to parent, why don’t you spend more time actually parenting your kids instead of “doomscrolling” and arguing with online trolls? Maybe if you didn’t spend all your time in front of a computer screen, you could raise him yourself instead of delegating that responsibility to government strangers at the public school, clubs and the ipad.

  10. Scott Says:

    Amanda #9: It’s a good suggestion, of course. I do spend 2-3 hours per day partner-reading with the kids, playing with them, putting them to sleep, feeding them, helping the younger one with his bath, etc. But when I offer to read or play, half the time they just say “no thanks” or “later” and go back to their iPads…

  11. Scott Says:

    [Note: Like many who pursue aggressive lines of off-topic questioning, “Amanda” outed themselves as a troll in a further comment, and is now banned.]

  12. Dan Staley Says:

    As a parent, I think it’s helpful to remember that not all screen time is created equal – I consider something active and arguably creative like playing Minecraft far preferable to watching Ryan’s World on YouTube, for example.

    What I’ve found to be particularly good for the tantrum problem is consistently-enforced time limits – we let our 7-year old use the iPad for 90 minutes per day on weekends (and not at all on weekdays). She seems to handle it better when she has to stop due to the non-negotiable passage of time, rather than because Mom and Dad feel like it’s time to stop (which invites an attempt to change our minds).

    By the way, Scott, I wanted to thank you for uploading your Intro to QC lecture notes all those years ago. Count me as one more human who understands how to violate the Bell Inequality, thanks to that pdf.

  13. Birdbrain Says:

    Whatever happened to your covid era math/science curriculum for kids? I thought you mentioned you were working with someone to formalize it as a book or something? I ask mostly because I was interested in reading it

  14. Scott Says:

    Birdbrain #13: As it happens, Zach Weinersmith of SMBC Comics will be visiting us in Austin tomorrow, and one of our main goals is to get back to work on that project!

  15. Anon Says:

    One subtlety I only realized after reminding myself about ion and neutral QCs – two qubit gate fidelity averaged over all qubit combinations *is* what matters. I think 99.9% for SC gates is there, but you get much worse when you average over the full device due to connectivity. For that reason, atoms do have a short-term advantage over SC systems.

    What I don’t get is why the Quantinuum device doesn’t just hold more atoms. They could have a storage region in their “qccd” and just bring them in like Quera if they are doing gates with lasers whose gate infidelity doesn’t seem to care about the particular ion anyways. What am I missing?

  16. Scott Says:

    Anon #15: Theorist not experimentalist, but … the ions are confined to a 1D circular “racetrack,” and they have to be targeted with lasers and moved around the track to cause the right pairs of ions to interact with each other. This takes time, and becomes slower and more involved as the number of ions grows. Having said that, I fully expect that we will see Quantinuum and others scale to much larger numbers of ions in relatively short order.

  17. Lane Says:

    Scott #13: That is sensational! I’m working through “The Lion, The Witch, and the Wardrobe” with my niece and nephew right now (5 & 6), but “Bea Wolf” is on deck. I didn’t know I wanted an Aaronson/Weinersmith collaboration, but now I don’t think I want anything else. If Sean Carroll writes the foreword my Blog-Comic-Podcast trifecta will be complete.

  18. Ted Says:

    I’ve never quite understood how to think about quantum error correction with postselection. Does whatever sequence of measurement outcome they’re postselecting on becomes exponentially unlikely as time goes on (assuming an uncorrelated noise model)?

    I don’t understand the details of postselection, but the idea seems like a bit of a cheat – like it could easily lead to trivial results. E.g. if the circuit depth that you reach before getting an error is geometrically distributed, then can’t you trivially extend the expected error-free circuit depth out past any integer N just by postselecting on no errors occuring before time step N? But isn’t that a totally trivial statement, because the postselected states become exponentially unlikely? What’s your preferred heuristic for translating results involving postselection into insights about “real-world” performance? (Sorry for the naive question.)

  19. Ted Says:

    Anon #15: I wouldn’t say that two-qubit gate fidelity averaged over all qubit combinations is all that matters. If you have great fidelity for gates on pairs of adjacent qubits, but you can’t (directly) implement two-qubit gates on non-adjacent qubits, then your average two-qubit gate fidelity will be lousy, but you can still have a very powerful quantum computer! You need to be more careful about calculating required circuit depths and so on, but you can still execute many algorithms with only a modest overhead. And you can still do QEC as long as you use a local code.

  20. Anon Says:

    Scott #6

    Right, but topologically they have to have a way to reorder atoms within the track right? Otherwise you can only get quasi- nearest neighbor interactions. Once you have the reorder aspect figured out, I don’t understand the benefit of the racetrack as opposed to a square array. But you’re right, there is tons of technical details that are lost on me I guess, otherwise they would have done that.

    Trapped ions haven’t really gone beyond 50 qubits it seems, but with these fidelities maybe 300-400 qubits would actually let you do something interesting. Hoping they get there soon

  21. AF Says:

    Lane #17: An Aaronson/Wienersmith collaboration already exists, you can find it at https://www.smbc-comics.com/comic/the-talk-3. Although, it is probably not a good idea to show it to young children.

  22. Craig Gidney Says:

    On the microsoft/quantinuum result: before I complain about it I just want to emphasize that this strikes me as an impressive experiment. However, the numbers that they are focusing on in the abstract and especially in the blog posts aren’t the numbers that matter. Even though the paper does report the number that matters!

    Look at figure 7 of the paper. This is the most important figure in the paper, because it shows error rate vs rounds. By eyeball, the slopes of the lines show a physical error rate of 0.4(5)% per round and a logical error rate of 0.(4)% per round. This is the headline number you care about. This is what determines how many steps you can compute for, because it determines how much each step hurts.

    The numbers they report in the abstract are not the per-round error rate; they are the first-round error rates. Look to the left of figure 7. The first-round logical error rate is ~0.05% while the first-round physical error rate is ~0.5%. That’s a big 10x difference!… But in the context of a real computation, where you have to run for thousands or millions of steps, the slope will contribute to the error rate a thousand times or a million times while the starting value will only contribute once. This is why it’s the slope that matters, not the starting value.

    The numbers they report in the titles and splash images of the blog posts are the postselected first-round error rates. These are even more divorced from what you would do in a real computation. Like, according to figure 8 (discard rates vs rounds), the discard rate is increasing by ~5% per round. Your ability to use that in a thousand step computation will be minimal.

    So the headline number you care about is the unpostselected per-round error rate: 0.4(5)% physical vs 0.(4)% logical. The fact that they are slightly past parity or nearly at parity here is extremely impressive and they should be proud of it. The 10x better first-round logical error rates (0.5% vs 0.05%) and the 100x better first-round postselected logical error rates sound cooler and flashier, which is why they’d call them out, but those numbers are ultimately irrelevant.

    To give a point of comparison, I downloaded the public data for the google d3vd5 surface code experiment from 2022. That data includes shots with round counts ranging from 1 to 25. I took the shots with only 1 round, and I tried postselecting them vs detection event count cutoffs (a very naive way to do it). Similar to this recent paper, the first-round postselected error rates are ~100x better than the per-round error rates ( https://twitter.com/CraigGidney/status/1775641665711354258/photo/1 ).

  23. Julian van Velzen Says:

    I like the quote: ““If quantum computers only benefited chemistry and material science, that would be enough.” from Matthias Troyer. I am also not worried that AI will solve all the chemsitry and material science problems, at least the problems with quantum chemistry character. In academia, they tried this the last 20 years with only minor success. And protein folding is not purely a quantum chemical problem, at the end you can see it as an complex optimisation problem. What I think is a great opportunity is to cleverly unite these two worlds (quantum and AI), as we are trying to do within Capgemini. Also, then there is also always the second “killer use case”, breaking cryptography.

  24. Tobias Maassen Says:

    If we plot the infidelity on a log scale, we are already three quarters done.

    Great sentences of our time. Keep up the greatness.

  25. Peter Morgan Says:

    Scott#4: I agree that I’m playing a semantic game, however I’m introducing a new strategy: usually people try to add something to QM to make it more like classical CM, but if we take it from no-go theorems that QM>CM, another natural move is to add something to CM to make it closer to QM. Adding two things is enough: 1) noncommutativitity (which, as above, we can do, easily, by using the Poisson bracket appropriately); 2) given that we’re thinking more classically, quantum noise cannot be the same as thermal noise, so we have to look for what the difference might be, to which QFT gives a very clear answer that quantum noise is Poincaré invariant whereas thermal noise is not. Both ideas are classically understandable: 1) noncommutativity allows us to encode statistics for multiple experimental contexts in a single formalism; 2) a difference of Poincaré symmetry properties is classically almost trivial (let’s say it’s trivial since Maxwell).
    Just adding these two aspects into CM, giving us what I call ‘CM+’ in AnnPhys 2020, “An algebraic approach to Koopman classical mechanics”, allows us to construct isomorphisms between Quantum Optics and what I call ‘QND Optics’, a commutative algebra of measurement operators within Quantum Optics that can be understand as a classical theory by following and extending (to a field theory context) the template given to us by Tsang&Caves in PRX 2012, “Evading Quantum Mechanics: Engineering a Classical Subsystem within a Quantum Environment”.
    This has links to many other research traditions: random fields, noisy signal analysis, and stochastic processes (the last two extended to 1+3-dimensions). Crucially, CM+ is different from QFT because it does not satisfy the Correspondence Principle, but I suggest that isomorphisms between classical and quantum physics are preferable to the incomprehensibility of the CP. What I say of CM+ is that “QFT is an analytic form of CM+”, drawing on an analogy with the relationship between a real signal and an analytic signal in signal analysis. Isomorphisms are semantic games, right?!?

    With that said, I can address the idea that “the term “quantum computer” refers to a machine that can apply pretty much any desired sequence of local unitary transformations”. Noncommutativity being a naturally introduced part of CM+ makes it possible to model this also in CM+. Not saying it’s easy, indeed you may have noticed that I slipped that we have to use a field theory and if it were a short story it would have emerged earlier, but it can be done in a natural way. Finite dimensional Hilbert spaces as dimensional reductions of a field theoretic model are not impossible for CM+, but they are not as natural as for QM, so we will always want to use nonrelativistic QM approximations to relativistic theories for practical purposes.

    &Viktor#5: I agree that “Quantum polynomial-time algorithms aren’t “just algorithms”, they actually require a quantum computer to run in non-exponential time”, but such algorithms can still be expressed in an ordinary single-thread programming language, it just takes less time if we transform that algorithm into an asynchronous multi-thread form and have many GPU resources available. Quantum computing adds two new things: 1) voltages on signal lines, light levels in optical wave guides, et cetera are not constrained to be zero or non-zero at certain times, determined by a global or local system clock; 2) signal-to-noise ratios are much lower, so that noise can be used as a computing resource. Quantum computing, in particular, uses quantum noise, which, as described above, in classical terms, is Poincaré invariant and is apparently much more stable than thermal noise; in addition, the amplitude of quantum noise is determined by Planck’s constant instead of by kT.

    Obviously this is not to everybody’s taste! It’s published in decent physics journals but not in PRL, Nature, et cetera, and it’s not likely to be, given my lumbering style. Still, I’ve been been developing the ideas by giving talks to Math Physics seminars and hopefully someone else will get the credit for noticing that this gives us a different understanding of QM that makes what we’re doing with a ‘quantum computer’ just the appropriation of physical processes in a new way to perform a given, well-defined algorithm more quickly and, hopefully, at a lower cost. It’s well-known that ‘analog’ doesn’t introduce anything new, but ‘noisy analog’ in the style of CM+ is demonstrably isomorphic to QFT, so a new game is in town, albeit it’s all only semantics.
    I’ve remembered why I don’t comment here often. I’m on the same highway, I hope, but in a significantly different lane from most quantum physicicsts, so that attempts at an explanation after an initial relatively short comment have to be outrageously long-winded. Sorry.

    &Scott#14: Best wishes for that.

  26. Mark Spinelli Says:

    I wonder – is error correction robust enough nowadays to implement a toy version of Wiesner’s quantum money bill, with each logical qubit in one of the BB84 states?

    With Wiesner’s scheme, each logical qubit is prepared in only one of four states; as long as you can (logically) prepare \(|0\rangle_L\) qubits, the bank only needs to run logical single-qubit NOT and/or Hadamard gates.

    No logically entangling two-qubit gates are required; the whole bill is in a product state. Indeed, with a single bill with three qubits of security I could envision one processor in one lab in California using 32 physical qubits to keep one logical qubit idle; another processor in another lab in Ontario using 23 qubits to keep another logical qubit idle; another processor in Shanghai keeping a third qubit idle…

    Can we keep unentangled logical qubits (prepared in one of the BB84 states) idle for an ~arbitrary amount of time, or does the old quip about quantum money “literally burning a hole in your pocket” still land?

  27. Random Circuit Sampling: Fourier Expansion and Statistics | Combinatorics and more Says:

    […] announcements see here and here; for an enthusiastic yet cautious blog post over Shtetl optimized see here; for a praising yet more cautious comment by Craig Gidney see […]

  28. fred Says:

    “while you scale up to millions or hundreds of millions of physical qubits!”

    Hundreds of millions of physical qubits would be hundreds of thousands of logical qubits.
    Let’s say it’s a million logical qubits to be generous.
    That’s the equivalent of a 200 x 200 pixels RGB screen (with 8 bits per color channel) or a 1000 x 1000 screen with on/off (white/black) pixels.
    Now try to imagine how many images can be represented on such a 1000×1000 black & white screen, that’s 2^1,000,000 possibilities.
    Not only it includes every possible passages from all the books ever written (not just by humans but also by every alien race that ever existed), and also all the text that will ever be generated by AI from now till the end of the universe. It also includes all the still frames from every movie/documentary/tv show ever shot, and also all the still frames that will ever be generated by AI from now till the end of the universe, and all the permutations between any fraction of such images. It also includes all the proofs for an almost limitless set of mathematical questions (not only accurate proofs, but also most of all the possible wrong ones). It would also capture all the possible chess games and Go games (with legal and illegal moves), all frames from every possible (and impossible) Minecraft session, etc.
    And then all those prior examples are still just an infinitesimal fraction of all the possible images that can be displayed on such a screen…

    So the corresponding QC with a million logical qubits is somehow able to maintain a superposition of any arbitrary subset of all those potential images (could be all of them) at once in its state and then manipulate such set in almost infinite ways.
    And the superposition of the images is also based on ‘weights’ that are continuous.
    The qubits being logical means that their manipulation is error free and done in a perfectly idealized way, i.e. the million qubits can be perfectly entangled with one another (any complex subset) and stay this way through all the steps of the computation (i.e. applying arbitrary gates on them).
    It becomes apparent that one practical limitation is in the preparation of the initial state… for a classical computer, all you need to do is flip the one million bits one by one to some desired initial state, but for the QC you need to prepare some complex superposition, and we can realistically prepare a state that only includes an infinitesimal fraction from the totality of 2^1,000,000 images (using ‘regular’ weights). Preparing are really ‘arbitrary’ initial state would take infinitely more time and effort than doing any subsequent computation on it.
    E.g. it’d be as if you were tasked with preparing an initial state that would only cover all the possible images of Scott Aaronson riding Trump like a donkey, in all possible configurations and backgrounds… it seems impossible even though somewhat well defined, and the vast majority of possible initial states can’t even be described succinctly, therefore can’t be generated. So the initial states that are realizable often start with an obvious non-superposition state (e.g. all qubits at 0) or an equal superposition of all possible 2^1,000,000 images, and then a limited series of operations is applied on them (which could be considered to be part of the QC algorithm itself, so the preparation can’t include an exponential number of steps).

    Anyway, this shows that adding another qubit to a QC can’t be that easy.

  29. Jon Lennox Says:

    Julian van Velzen#23: The transition to quantum-resistant cryptography is under full steam in the relevant places (the IETF, NIST, etc). Other than “store now decrypt later” attacks (where by the time a Cryptographically-Relevant Quantum Computer is built, it’s still of interest what someone said today), or the lingering tail of old devices that don’t get updated, it seems like breaking cryptography mostly won’t be an available use case for QCs.

  30. Vadim Says:

    Are there any promising possibilities for quantum machine learning? Or is that completely unknown at the moment?

  31. Scott Says:

    Ted #18: I completely agree with you that postselection is “a bit of a cheat” … which is exactly why I focused on their result without postselection, even while the Microsoft/Quantinuum people put more stress on the superficially better-sounding postselected results (which annoyed me a little)!

    Having said that, postselecting on the measured error syndromes having certain properties, isn’t quite as trivial as postselecting on “the entire computation going the way you wanted” (in the latter case, the postselected error probability would go all the way down to 0, by definition 😀 ).

  32. Scott Says:

    Anon #20: If you look at the conceptual designs for scaled-up trapped-ion systems, they all tend to look like a bunch of racetracks connected by intersections … like the street plans for a microscopic city, actually! Reflecting the fact that, yes, you want to take full advantage of 2 dimensions (if not 3 dimensions), but you also need the ability to target any ion individually from outside the trap and move it to where you want it. As I said, I expect to see lots more progress toward this “microscopic city” vision in the near future.

  33. Scott Says:

    Craig Gidney #22: Ah, thanks very much! I certainly knew enough to focus on the non-postselected rather than postselected numbers, but I didn’t know about the issue with first-round error versus per-round error, which indeed sounds like it would eat into their error-correction advantage by a further order of magnitude (even while, you say, a nonzero advantage survives at the end).

    If any Microsoft/Quantinuum people are reading this thread, I strongly invite them to respond to that point; then I’ll update the post with what I’ve learned.

  34. Scott Says:

    Julian van Velzen #23:

      And protein folding is not purely a quantum chemical problem, at the end you can see it as an complex optimisation problem.

    The trouble is that, to whatever extent it’s an optimization problem rather than a quantum chemistry problem, to that extent we have less reason to expect a speedup from QCs.

      Also, then there is also always the second “killer use case”, breaking cryptography.

    That’s certainly a short-term use case for certain 3-letter agencies! But, besides the issue mentioned by Jon Lennox #29 (that we’ll all just switch to quantum-resistant cryptosystems), it’s also very hard to present this one as a positive application for humanity.

  35. QMA(2) Says:

    In https://scottaaronson.blog/?p=2516 you write you see QMA(2) in PP as plausible. Has your views changed since? What if counting hierarchy collapses? Is QMA(2) in PP more plausible?

  36. Anon Says:

    Hi Scott, thank you for reporting on the Microsoft-Quantinuum result. I want to point out a few things that are incorrect or at least misleading in your description and the associated press releases.

    Before that, however, it is worth prefacing by saying that the hardware results are very impressive, and follow a long line of excellent work from the Quantinuum team on quantum error correction.

    First, the headline number 800x is highly misleading. This comes from a post-selected error rate of 0.001%, compared to a physical baseline of 0.8%. Notably, the 0.001% is estimated from a meager 15k shots, where 1/15k=0.006% >> 0.001%. This leads to a 15x error bar on the number 0.001% (Table 2), so really, 800x should be changed to: some number that is between 60x and infinity, which we think is on average around 800x. Quoting it as >60x (still an impressive number!) would have been much more honest, but the PR and even abstract focuses on the larger number without giving the order-of-magnitude-large error bars, which is pretty misleading.

    Second, the magic state factory analogy is interesting, but also not representative of actual usage. If you want to use this Bell pair in an actual circuit, you would need to do yet another CNOT gate to teleport it into the main circuit, which would add noise and which you can’t do pre-selection on. Thus, although the preselection is a valid thing to do for state preparation, implying that this would lead to a two-qubit gate of this fidelity in an actual circuit seems inaccurate. This is further compounded by the fact that in a deep circuit, repeated rounds of error correction are needed (more on this in a bit), which will affect the achievable logical error rate (here, they are directly measuring the state after prep and pre-selection). In fact, by the standards applied in this paper, Quantinuum’s very nice paper arXiv:2208.01863 from 2022 already meets the bar of calling the logical qubits better than physical!

    Third, the repeated error correction experiment in this paper relies on post-selection, and even then the per round error rate barely breaks even. Note that the per round error rate is the actual relevant quantity, as Craig pointed out above. If one applies the same criteria to Quantinuum’s or Google’s past experiments, you would likely find that they are also close or maybe even past the pseudothreshold. They try to argue that a two-qubit gate is done here and benchmark against physical two-qubit gate fidelities, which is misleading as (a) the two-qubit gate here is only doing a permutation and does not involve any physical gates and therefore (b) can only perform a limited set of in-block operations, and cannot execute a two-qubit gate between blocks that you would actually want in a real circuit. The fact that you need to post-select every QEC round also makes it not scalable to a deep circuit, even if you had more qubits.

    It is also instructive to compare this result with Quantinuum’s very cool previous demonstration of two-qubit logical gates arXiv:2208.01863: 1. They demonstrated that they could perform a transversal CNOT and measure a logical Bell pair with better fidelity than the physical Bell pair. The numbers are a bit worse, but that is partially because they didn’t do the extra pre-selection thing; however, this actually makes the previous paper more applicable to an actual circuit. 2. They found that when adding in a round of QEC, which you would probably want to do when running a deep circuit, the fidelity degraded enough to cause it no longer break even; this however was in the absence of any post-selection, as required for a scalable implementation, and applying the same criteria to the current paper would have resulted in it failing to meet the bar of two-qubit gate break-even.

    Thus, in conclusion, it would appear that while this paper is an impressive quantitative improvement over previous work, it does not bring anything qualitatively new to the table.

  37. Sean Feeney Says:

    As a young quantum algorithms engineer with a strong focus in quantum programming and not necessarily hardware. After skimming the paper and reading this blog post and comments, I am still unsure why the trapped Ion device. Is it just happenstance for the partnership between the two companies, or is the all to all connectivity a key factor in the error correction codes they implemented with the CNOTs? Is there a trapped ion conspiracy or am I reading to much into this?

  38. fred Says:

    It seems to me that, by definition, QC aren’t ‘scalable’.

    The naming of qubit after the ‘classical’ bit is really unfortunate.
    Classical computers are scalable because bits are isolated from one another.
    If you have 2 classical computers with N bits, it’s not too hard to make it behave as one classical computer with 2N bits. Fundamentally it’s this way because of how you can assemble electronics circuits into bigger circuits using impedance matching, which makes the output of one circuit insensitive of what’s done downstream with that output (fundamentally one bit is one transistor, working at high frequency regime, which are very easy to assemble using impedance matching).
    https://en.wikipedia.org/wiki/Impedance_matching
    And while any pair of bits in a classical computer can be brought together, at every step only a single pair is being operated on, with some degree of parallelism. So, they’re slow, but are incredibly easy to scale up, and parallelism is an easy win.
    This is why an RTX 4090 has 76 billion transistors in it. This is why we have Moore’s law.

    But with a QC all this totally breaks down.
    The point of a QC is to be able to make all its ‘bits’ sensitive to all the other ‘bits’ that are in the system, all at once and all the time (through entanglement).
    This is why you just can’t take two QCs that each have N ‘bits’ and easily make it all behave like a single QC with 2N ‘bits’, and even just adding one more ‘bit’ isn’t a trivial operation.
    And whatever technique works to bring N qubits together could be totally useless to try and bring 10xN qubits together.

  39. Breck Yunits Says:

    I always wonder if the Chinese proverb applies to QC: “It is difficult to find a black cat in a dark room, especially if there is no cat.”

    Whether QC happens or not, I think you hit the nail on the head here: “it will simply matter much less than we hoped in the 1990s”. AI is devouring much of the upside potential of QC so the EV of QC might be very low if it happens.

  40. Scott Says:

    Mark Spinelli #26:

      I wonder – is error correction robust enough nowadays to implement a toy version of Wiesner’s quantum money bill, with each logical qubit in one of the BB84 states?

    Yes, you could do such a demo right now, but I don’t know how interesting it would be. The error-correction that’s been demonstrated so far isn’t nearly good enough to keep a qubit alive for arbitrary time — only (as of this past year) for slightly longer than it would’ve lived with no error-correction. And once you can keep a qubit alive arbitrarily, that’s the headline result — the application to Wiesner money is just a simple afterthought! 🙂

  41. Scott Says:

    Vadim #30:

      Are there any promising possibilities for quantum machine learning? Or is that completely unknown at the moment?

    I think the most promising possibility, by far, is the use of QCs for inherently quantum machine learning problems — like learning properties of unknown states, Hamiltonians, and unitary transformations.

    Beyond that, a huge number of classical ML problems admit a Grover-type speedup, but it will probably be a very long time before that’s a net win in practice, once you factor in error-correction.

    And then there are many other potential speedups for classical ML, for instance those based on HHL, that however I’d have to place into the “unknown” bucket when you do a fair end-to-end comparison against the best classical algorithms. (Note that some previously promising leads for such speedups have been closed off, by Ewin Tang’s 2018 dequantization breakthrough and its aftermath.)

  42. Scott Says:

    QMA(2) #35: Oh man, a lot has changed since I wrote that post back in 2015! It now looks entirely plausible that QMA(2)=NEXP — check out the recent work on the problem by Jeronimo and Wu.

  43. Scott Says:

    Sean Feeney #37: No conspiracy. 🙂 Trapped ions really are one of the frontier hardware platforms in terms of what you can do right now, along with neutral atoms and arguably superconducting.

  44. Scott Says:

    fred #38: No, you’re mistaken. Each qubit only needs to interact with one other qubit at any given time. If QC doesn’t scale, it’s not for some trivial reason like you say. Please read my undergrad lecture notes for example.

  45. Scott Says:

    Breck Yunits #39: No, if there’s no cat, then that itself will be a revolutionary discovery about physics, way more exciting than if there is a cat (as our current understanding of QM says there must be).

  46. fred Says:

    Scott #44

    Ah ok, so it wouldn’t be that hard to take two QCs with each N qubits and create a single QC with 2N qubits out of them? And, if not, then the reason is very subtle and it isn’t because it’s simply way harder to keep 2N qubits entangled rather than keeping N qubits entangled?

  47. fred Says:

    Scott #44
    “Each qubit only needs to interact with one other qubit at any given time”

    I meant that all the qubits can be entangled at once, to form an arbitrary state, and if a single of them decoheres, then the entire “house of cards” effectively collapses.
    Also, regarding the fact that it’s sufficient to have just one qubit interact (get entangled) with another one at any given time – doesn’t that require to be able to bring any qubit in somewhat close physical proximity with every other qubit (as opposed to a classical computer where the state of bits can be easily ‘cloned’ and propagated through electric potential on wires)? That would seem very hard to achieve in practice as the number of qubits increases… unless there are ways to make a qubit interact with another one that’s a foot away, without some kind of close contact?

  48. physicist Says:

    I think the 48 logical qubit experiment was performed in Lukin’s lab at Harvard? Perhaps it might be more appropriate to refer to it as the Harvard/QuEra experiment like some recent papers did, or Harvard-QuEra-MIT-UMD to include all coauthors?

  49. Prasanna Says:

    “30 years since Shor’s algorithm” – this is quite ominous for state of quantum computing, already it is looking like it will join the ranks of fusion power, room temp superconductor, curing cancer…and lo behold, Singularity. History of Humanity is to make progress incrementally, with the notable exception during the first 50 years of 20th century when starting with relativity, GR, QM, Godel, Turing, and the most impactful breakthrough in biology – antibiotics, not to mention the merely engineering breakthroughs like airplanes ….

    Scott, given that you are in the enviable position of trying to deflate two enormous hype bubbles of QC and AI, do you think humanity has reached the limits of its potential for major breakthroughs and settle at incremental small steps, though the benefits from that is by no means trivial to society as a whole

  50. Ted Says:

    Fred #46 and #47: It depends on what you mean by “very hard”. At the level of fundamental physics and “blackboard” algorithms, it’s pretty straightforward – it basically just amounts to (repeatedly) quantum teleporting single qubits between the QCs via the exchange of entangled photons (or any other qubits). As an engineering challenge, it’s extremely difficult, although progress is being made. Last month, IonQ put out a nice blog post explaining quite clearly how they plan to eventually do it: https://ionq.com/posts/enabling-networked-quantum-computing-with-ion-photon-entanglement.

  51. Scott Says:

    Prasanna #49:

    If your baseline is (say) the ~120 years that it took to go from Charles Babbage to the building of the first useful classical electronic computers, then I actually think the distance we’ve managed to come in the past 30 years of quantum computing research holds up really well.

    – I think that even today, even after the hype there’s been these past few years, many more people still underestimate than overestimate the effect that AI is going to have on their lives! Or they abstractly agree that the effect could be bigger than anything our ancestors experienced, but they haven’t yet internalized it (I’m not sure that I’ve yet internalized it).

    – With quantum computing the story is different — when people wildly overestimate its likely effects on the world (as they do), it tends to be because of misunderstandings that are child’s-play for any expert to debunk. With AI, by contrast, even the experts have no real idea where ceiling is, more than the rest of us do.

    – Progress really does seem to me like it became much more incremental starting in the 1970s … in the “world of atoms.l But in the “world of bits” (whether in silicon or in DNA), the progress seems like it’s remained pretty revolutionary. I certainly mourn the slowdown in progress on transportation, energy, and so on (whatever all the causes were). But I do submit that one of the causes was simply that so many inventors and discoverers moved from the world of atoms to the world of bits, where there seemed to be so much fruit lower on the tree. Certainly the timing works out right.

  52. Scott Says:

    physicist #48: Thanks, you’re right. Changed it to “Harvard/QuEra,” consistently with how most people described it.

  53. Scott Says:

    fred #46, #47:

    – Yes, that’s right: once you have fault-tolerant qubits, there is absolutely nothing in known physics that should make it “more than twice as hard” to entangle 1000 of those qubits than it is to entangle 500. If there is such a thing, then something is wildly wrong about our current understanding of QM. But more than that — the Harvard/QuEra experiment (to take a recent example) has already successfully demonstrated entanglement of 48 logical qubits.

    – Yes, you may have to move the ions all the way across the trap, and yes, that takes time. And yet the Quantinuum people (among others) have not only already successfully shown that they can do it, they’ve shown they can do it in a way where it’s not even close to the dominant error source (that would be the 2-qubit gates).

    – Notice that, while QC still has a long ways to go, it’s now firmly into the stage of its development where the skeptical arguments in blog comment sections can now regularly be answered by experiments.

  54. Ashley Says:

    Scott,

    This is off-topic (but promise this will not get aggressive and that I am not a troll 🙂 ):

    You said you are an ADHD sufferer. How do you then do mathematical research, which would require tremendous concentration powers? Many scientists, apparently including Einstein, are said to have suffered from ADHD.

    Is that you use pen and paper to help you concentrate?

  55. Julian Says:

    Jon Lennox#29: Sure, but the challenge is not with IETF, NIST, etc. Most vendors providing cryptographic products will enable their solutions in time with proper and strong cryptography. Instead, the challenge is with legacy systems, complex crypto ecosystems with many stakeholders and dependencies, etc. Every migration in the past took many years, and this will be no different.

  56. Julian Says:

    Scott #34 Exactly. As protein folding can be seen as a complex optimisation problem (instead of a quantum chemistry problem), it’s not surprising that classical AI performs well. For actual quantum chemistry problems, for example, those seen in batteries or drugs, classical AI has had very minor successes over the past decades, and we can still hope for a genuine quantum advantage.

  57. gentzen Says:

    Back in December 2023 I read at
    https://quantumfrontiers.com/2023/12/09/crossing-the-quantum-chasm-from-nisq-to-fault-tolerance/

    Erasure conversion can also be arranged in superconducting devices, by using a so-called dual-rail encoding of the qubit in a pair of transmons or a pair of microwave resonators. With two resonators, for example, we can encode a qubit by placing a single photon in one resonator or the other. The dominant error is loss of the photon, causing either the 01 state or the 10 state to decay to 00. One can check whether the state is 00, detecting whether the error occurred, without disturbing a coherent superposition of 01 and 10.

    Erasure detection has been successfully demonstrated in recent months, for both atomic (here and here) and superconducting (here https://arxiv.org/abs/2307.03169 and here https://arxiv.org/abs/2307.08737) qubit encodings.

    Without dual-rail encoding, it seems to me that scaling up superconducting qubits either has to scale down temperature, or scale up classical control speed. I thought there were good reasons why people didn‘t want to use dual-rail for superconducting qubits. At least I never worked out how scaling could fail in that case.

  58. Gil Kalai Says:

    The post and the discussion are nice and useful.

    A question which, in my opinion, is important for understanding where we stand *today* is about the ability of IBM quantum computers to demonstrate random quantum circuits even in the 10-20 qubit range with a similar quality to (or even somewhat worse quality than) the claims of the Google 2019 Sycamore experiment.

    There is a paper by IBM researchers about random circuit sampling with 6 qubits and I am aware (private communication with Seth Merkel from IBM) of some experiments even with 7 qubits, but not with more qubits. (We ran ourselves experiments with 5 qubits.) I will be grateful for further information about IBM capabilities.

    The claims about the Sycamore quantum computer is that it achieves, for 12-53 qubits, XEB-fidelity which is very close to the prediction by an a priori estimate based on qubit- and gate- fidelities (Formula (77)), and which is also close to the predictions of the Google Weber-QVM simulator. As far as I know, this is not the case for IBM quantum computers even for 12 qubits, namely, researchers working with IBM quantum computers did not present (and perhaps cannot present) experimental samples that matches (or come close to) the fidelity of the IBM simulators. (We asked about it but did not get a direct response from IBM researchers.)

    Part of the disagreement between Scott and I in assessing the situation *at present* regarding quantum computers is Scott’s belief all along that IBM quantum computers are capable of achieving comparable performance to the Sycamore 2019 experiment. But I don’t know If Scott’s assertions regarding IBM are just off-hand assertions or if they are based on some sources or serious reasons. Scott, please explain.

  59. fred Says:

    To clarify, my skepticism is only at the engineering level.
    We see plenty of way more trivial technologies that are struggling to take-off beyond the experimental stage and become actual products.
    Having enough investment is a necessary step but not sufficient.
    For example the production of energy from fusion, space elevators, curing cancer, …

    Engineering progress is often very iterative, over decades, but also relies on unexpected breakthroughs, so, at this point, trying to guess how QC is gonna scale from a few dozens qubits to millions is an almost impossible guessing game.
    It’s as if I was given the blue print of the LHC and asked to use it to come up with a budget and plan to build a collider that deals with energies that are 100 times higher.

  60. Mark Spinelli Says:

    Gil Kalai,

    (Not really responsive to your post #58, sorry). But, are you skeptical that even Wiesner’s quantum money scheme would eventually be achieved with logical qubits? In particular, are you doubtful that Alice could commit to and prepare even a single logical qubit, unknown to Bob, into one of the BB84 states of her choosing, and then use active error correction with, say, Shor’s 9-qubit code to keep her qubit idle for an ~arbitrarily long time until Bob asks her to break the commitment and measure?

    I might be significantly misunderstanding error correction but it seems to me that holding idle a single logical qubit that is logically unentangled with other logical qubits seems much easier than fully fault-tolerant error correction with logically entangling two-qubit gates.

  61. Scott Says:

    Gil Kalai #58: I don’t understand why you’re asking me about the current status of IBM’s superconducting devices — I didn’t mention them in the post and I confess I haven’t looked into them closely for a while (keep in mind, I’ve been on leave at OpenAI for the past couple years!). Wouldn’t your question be better directed to IBM?

  62. Scott Says:

    fred #59: At this point, I feel like skepticism of fault-tolerant QC is a little bit like skepticism of manned moon missions in the mid-1960s. That is, it’s not that I know for sure that the skepticism is wrong; maybe there will be unexpected obstacles. It’s just that the skepticism isn’t interesting if it doesn’t deeply engage with the actual huge effort that’s now underway to do the thing being theorized about — an effort that’s recently produced programmable systems with hundreds of qubits and 99.9% fidelity 2-qubit gates, enabling circuits with tens of thousands of gates, and that’s also now started to demonstrate quantum error-correction that produces a net gain.

  63. fred Says:

    Scott #62

    Well, my skepticism is about the scaling, and while there seems to be steady gains, I just can’t tell if there’s a clear path at the moment.
    So, you definitely make a good point about the skepticism only being interesting if it’s grounded in the actual details of what’s currently done, but one huge difference between classical electronics and QC is that it only takes a year or two to train well an electrical engineer, teach him everything from the details of how a transistor work (many realizations, in the analog or digital realm), logical gates, putting it together to come up with a CPU and RAM, including lots of stuff on the software side (writing a compiler, etc). E.g. back in the early 90s, two of my graduating friends in college were able to entirely reverse engineer a popular processor within a year so it could be re-manufactured independently.
    But all the tech mentioned here seems only comprehensible to the few people who are actually working on it, mostly fresh phd/post-docs who are laser focused on a particular realization of qubits… so, yea, that doesn’t make it any easier for anyone else to get a sense of what’s achievable (as opposed to the theory of writing an actual QC algorithm, which is obviously independent of the hardware details, ideally).

    Even you, as a top expert in the field, wrote
    “This is still at the stage where you need to be super-careful in how you phrase every such sentence—experts should chime in if I’ve already fallen short; I take responsibility for any failures to error-correct this post.”

    About
    “I feel like skepticism of fault-tolerant QC is a little bit like skepticism of manned moon missions in the mid-1960s.”

    It’s interesting you use this as an example given how it took 10 years to build the original moon program, and how much everyone is currently struggling to go back to the moon by re-inventing technology that was done 60 years ago, using material science and computer resources that are order of magnitudes better… a startling example how engineering progress can take a step back when all the know-how and discipline is lost… showing that engineering challenges are rarely trivial.
    The original stuff makes the current SpaceX effort (Musk totally over-promising and missing the mark by years) look like a bunch of total amateurs, wasting billions of dollars of tax payer money on each Spaceship launch attempt (sorry, but it’s true):

  64. Scott Says:

    fred #63: Absolutely no one claims that the engineering problem is “trivial.” It’s clearly among the harder engineering problems that humans have ever tackled.

    But I tell you this: if scalable fault-tolerance is finally achieved, after thousands of people labored toward the goal for decades, and then the very same people who confidently said it would never be done turn around and declare that of course it was doable, so what was the point of going to all this effort to demonstrate it … I will fly into a rage the likes of which even regular Shtetl-Optimized readers have never seen. 😀

  65. Filip Dimitrovski Says:

    As someone who doesn’t understand fancy words like “Level 2 Resilient quantum computing”…

    Let’s say I don’t care about the classical part of Shor’s algorithm and just want to try out the quantum circuit in practice. Does this mean there’s a huge chance I will be able to calculate 5 * 7 = 35 very soon?

  66. fred Says:

    “Meanwhile, on paper, it looks like known methods for quantum fault-tolerance, for example using the surface code, should start to become practical once you have 2-qubit fidelities around ~99.99%—i.e., one more “9” from where we are now. “

    Btw, where’s this 99.99% number coming from?
    It’s some heuristic or actually backed by a precise calculation?
    If heuristic, is it because of our limited understanding/modelling for all the possible sources of noise?

  67. fred Says:

    Scott #64

    “declare that of course it was doable, so what was the point of going to all this effort to demonstrate it”

    I think one problem the field is facing is very similar to fusion research – we keep hearing lots of hype about fundamental breakthroughs, but when you look at the details we’re still very far from using the tech in a way that could achieve actual net production of energy.

    In the case of QC the goal is to actually run some QC algorithm on some minimum set of qubits. I’m not talking about proving “supremacy” (this is useless/pointless at the moment imo), but simply showing the various gadgets can indeed implement enough ‘perfect’ qubits to show it all works well in a some OBVIOUS WAY, without all this dancing around about pre-selection, post-selection, and whatnot.

    If it’s already considered done, then great, I’ll never bother posting about skepticism ever again.

    The vibe I get is that Scott is projecting his own evolution about being a skeptic on AI a few years ago to now being totally convinced onto the folks who have been skeptical about QC… but it’s the case that it’s very plain/obvious to convince oneself that LLMs give amazing practical results that noone expected (of course some people are still being dishonest about it)… but when it comes to QC, there are so many fine prints coming with every new result that only a handful of people can interpret them (and they still seem to argue with one another about what it all means).
    Finally, it seems that AGI will happen way before humans figure how to scale up qubits to the level needed… at which point it’s probable that the AGI will massively boost progress in QC anyway (as in many other fields) 😛

  68. dennis Says:

    Regarding (potentially unexpected) progress, do you have an opinion on the recent paper by King et al, “Computational supremacy in quantum simulation”? https://arxiv.org/abs/2403.00910

  69. fred Says:

    I’m actually super impressed by the reported results (hundreds of gates, dozens of qubits), in a very similar way that ChatGPT is super impressive. And going from 99.5% or 99.9% to 99.99% clearly seems totally achievable.

    I’m actually more skeptical about going from ChatGPT to AGI than going from a dozen qubits to thousands of qubits…
    the main reason is that AI progress doesn’t seem backed by actual theory, progress is all lots of smart software tweaks and throwing tons of hardware at the problem, and we have no idea if there’s even a path to AGI or let alone even define what AGI is/would be… results are not well defined (unlike “factoring a number on a QC”), we just hope to recognize it when we see it… just like how we realized that the Turing test turned out to not be that useful once LLMs started to “work”.

  70. QMA(2) Says:

    Scott #42 It seems the paper tried to show QMA_log^+(2) contains NP and by amplifying get QMA^+(2) contains NEXP. Correct?

    Here in remark 3.2 (https://www.cs.cmu.edu/~odonnell/quantum15/lecture25.pdf) it says it is unlikely QMA_log(2) contains NP or else SAT is in quantum 2^o(n) time which would be a surprise.

    So the same technique is unlikely to show QMA(2) contains NEXP. Correct?

    Is the QMA^+(2) inherently too strong of a definition and that QMA(2)\subsetneq QMA^+(2) is more plausible than QMA(2)=QMA^+(2)?

  71. Christopher Says:

    What about combining quantum computers and AI for drug discovery? Do you think they would synergize? Or perhaps you could have quantum AI for drug discovery?

    Like, if you had a quantum computer and also control of OAI and DeepMind, what would you do?

    (I’m guessing you’ve covered this before, but just wondering what the answer looks like now that AlphaFold exists.)

  72. Scott Says:

    Filip Dimitrovski #65: One can already factor 35 into 5×7 (and probably even bigger numbers) using Shor’s algorithm, you’ll be happy to learn. But at this point, it’s not particularly interesting unless you can do it with fault-tolerant qubits.

  73. Scott Says:

    QMA(2) #70: I was confused because I knew of no implication like that—but then I looked at Remark 3.2 in your link, and it specifies QMAlog(2) with c=1 and s=0, which is way more restrictive than the usual c=2/3 and s=1/3. Indeed, my confusion is that s=0 is so restrictive that it seems to contain nothing whatsoever—why can’t a dishonest prover always get a nonzero acceptance probability by sending the maximally mixed state? I should email Ryan O’Donnell for clarification on this.

  74. Sharmake Farah Says:

    Re quantum computers and their uses, the main use cases I can think of right now are an on-ramp to reversible computation, and (potentially!) time travel/causality violations, assuming the size of a quantum computation is closer to billions of qubits or even a quantum computer that covers an entire city or country or megastructure, rather than the entire observable universe. Also another good use case, assuming the time travel use case turns out to be possible in the future, would be quantum key distribution, as this basically breaks all of cryptography by allowing algorithms for PSPACE-complete problems to be solved in AC0, which is smaller than polynomial time. It has far more impacts than that, like solving circuit optimization completely, or solving SAT and Tautology problems efficiently for all propositional logic formulas, but this comment is already getting way too long to talk about it.

    The main use case, and the more plausible use case, is to use it as an on-ramp to full-blown reversible computing. One of the big problems with current classical computation is since they rely on irreversible computations for basically everything, they are subject to Landuaer’s Principle, which sets a minimum limit on how much energy you need to dissipate every time a bit is erased, and we are getting uncomfortably close to the limit today, and this kind of implies Moore’s law has to break down soon-ish.

    It’s actually a little worse than this, as the pure Landauer Limit is only achievable with either 50% error rates or infinite transition time, and the practical limit is around 1e-19 joules per bit erased.

    I got this information from Jacob Cannell’s post, but you only need to focus on the thermodynamics part.

    https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know#Thermodynamics

    This is where the hope for quantum computing gets in, as pretty much all of a quantum computer’s operation has to be reversible, except for the measurement part, and thus is far less reliant on having to erase bits everywhere to complete a computation, which means you probably save lots of energy.

    You do have another limit to contend with, which is the Margolus-Levitin theorem, but that’s much better as a bound here, since it’s 6×10^33 operations per second per joule, and this allows for some crazy stuff, like far-future Matrioskha brains of 10^34 people in a pocket quantum computer, as Isaac Arthur discussed below on Civilizations at the End of Time: Iron Stars:

    https://youtu.be/Pld8wTa16Jk?t=1627

    Even more limited applications of this sort of shrinkage would do a lot to help computation in general, as you could make more intelligent AI in very small computers, making preventing/controlling superhuman AI from being given to the masses completely hellish at best and impossible at worst, meaning AI would inevitably diffuse like emulators or cryptography tech. Note that if we assume quantum memory is there, then we can avoid the bound, but that strengthens the point here.

    The most realistic and general use case for quantum computers is for energy efficiency and keeping alive Moore’s law for longer, and while it’s nothing in complexity-theoretic terms, it means a lot in practice.

    Re the time travel use case, the biggest hint, albeit a weak and very faint hint comes from John Preskill who said that in certain circumstances, you can actually violate causality and travel faster than light speed allows you to via quantum computation, and if this is either relying on known results, just not linked in the Quanta Magazine podcast with Janna, or physicists have strong reasons to believe the claims are correct, this could quite literally be the most important use case and quantum computers could catapult from a very useful technology to the most important technology in history, no contest. I will provide both a link and the parts where John Preskill said this, as well as some context.

    I do dearly wish he gave us actual numbers, and/or someone else analyzes the claim further to be more specific on what John Preskill is claiming that this use case is impractical, as well as actual numbers on how large a quantum computation would need to be done. I do not need this to be answered, but it would be really helpful to answer that question.

    (To respond to Scott Aaronson’s previous objection that solving NP-complete/hard problems with a computer algorithm possible in the physical universe would foundationally be contradictory with the world that we are living in, or at least that it would be very surprising if NP-complete/hard problems turned out to be easy on at least one type of computer possible to build in real life, the response would be that even if the claim that John Preskill’s claim about violating causality with a quantum computer was correct, and it was possible to exploit this for solving PSPACE-complete problems in AC0 time/polynomial time, and we have a known path, or at least strong reasons to believe we could scale quantum computers to a point where we could simply rip apart space with quantum computers to violate causality, and a quantum computer closer to building or laptop sizes than entire universe sizes, and the number of qubits is closer to hundreds of millions of qubits, rather than the amount of qubits in the entire observable universe it would take quite a bit of time measured in several decades at least, because right now we simply don’t have the error-corrected quantum computers necessary to realize this use case, we’d have to engineer and actually build them, which takes time, and we’d have to write up the time travel results in papers to convince others, etc, so there’s no contradiction between not being able to solve NP-complete/hard problems today and the world looking like this and say being able to solve NP-complete/hard problems in say 30-50 years, so the NP-complete/hard solver making the world today different paradox is solved.)

    Here’s the podcast link with Janna Levin and John Preskill below, as well as the specific quote where he said that quantum computation might be able to violate causality sometimes:

    https://www.quantamagazine.org/what-is-quantum-teleportation-20240314/

    PRESKILL: “No surprise there.

    Actually, I think what I just described does give us insight into the process by which information escapes from black holes, which we believe that it does. The laws of physics don’t allow information to be destroyed, even when it falls into black holes and the black holes evaporate. It just gets scrambled up into some form that’s exceedingly hard to read. There’s some kind of violation of locality. This is the most, or one of the most, fundamental principles in physics. We referred to it earlier — that information can’t travel faster than the speed of light.

    But, in some sense, to get out of a black hole, by definition information is traveling faster than light. Light is trapped inside, information gets out. And what that indicates is that the notion of causality — the way we usually think about it, that there’s a speed limit on how fast information can travel — is not rigorously true under all circumstances. That principle can be violated.

    And space-time itself may not really be a fundamental notion. Rather, it’s an emergent property of some complex quantum system in which things are highly entangled.

    So how is it that we think, under normal circumstances, that this notion of causality seems to be so rigorously satisfied? Well, I think we have an answer for that, and it’s quite interesting that it connects with quantum computing.

    We think it’s possible to violate causality, to send information faster than light. But in order to do so requires a quantum computation of the sort that you might do on a quantum computer, which is so complex and so powerful that we’ll never be able to do it in practice.

    So, we should be able to rip apart the space in between me in California and you, Janna, in New York. In principle, we can. In practice, it’s so exceedingly hard to do it, it would require such a powerful computation, that no one will ever succeed.”

    (I would have asked immediately for whether this is a known result, and if so where this was written up, and/or something that physicists have strong reasons to believe it happens, and I would have asked him to clarify or to be more specific on how powerful a computation needs to be in order to violate causality by ripping space apart, preferably with numbers attached to a metric, because right now the claim of impractically doesn’t distinguish between impractical now, impractical to do ever, and impractical to do this soon, say within 20 years or 30-50 years.)

    Finally, there’s a new article by Quanta Magazine that talks about atom-based quantum computing that has seen success error-correcting them, so this will be useful to add to the discussion, and the link will be below:

    https://www.quantamagazine.org/the-best-qubits-for-quantum-computing-might-just-be-atoms-20240325/

  75. Scott Says:

    Ok, Krysta Svore has authorized me to share the following reply from the Microsoft folks to the objection of Craig Gidney #22.

      We appreciate both the compliments about the paper, and the feedback!

      We agree that, in the context of a (error corrected) quantum algorithm, the result that matters most is the error rate per round of error correction. It’s the slope of the lines in Figure 7 that is most important. More specifically, the figure of merit is the error rate per logical clock. We were very pleased to see that our results were consistent with an error rate per logical gate that is better than the corresponding physical rates.

      We agree that using post-selection has a cost in terms of the number of computational steps that can be executed. The vast majority of rejections in our demonstrations are due to pre-rejection (rejection prior to interaction with computational data) and pre-selection of state preparation is standard practice going back to early quantum fault tolerance proposals (also, it is scalable with the use of state preparation factories). While our current post-rejection rates don’t support thousands of computational operations, they do support tens to hundreds of such operations. Using an even-distance code with a mixture of error correction and post-selection as we do can lead to worthwhile tradeoffs (see, for example, https://arxiv.org/abs/2112.03785). Furthermore, there are ways to use post-selection in a scalable system. For example, in a concatenated scheme, post-rejection at one level will largely be converted into pre-rejection at the next level.

      As a point of clarification, we would like to make a distinction between rounds of “syndrome extraction” as in the surface code, and rounds of “error correction” as in our result. In Figure 7 of our paper, each repetition corresponds to a full error correction gadget and as many as two logical CNOT gates. This is different from a single round of syndrome extraction for the surface code, which is some fraction of a full error correction gadget or logical gate. So, while we agree that there are boundary effects—and again, the slope is what matters most—comparison to a single round of surface code syndrome extraction isn’t appropriate.

      Regarding the linked plot, we achieve a post-rejection rate of ~5% for rounds of error correction, and much lower for the Bell state experiments (less than 0.5%). The linked example in which there is found to be 100x improvement by rejecting 60% of syndromes in the surface code experiments is not representative of our experiments.

  76. Michael Says:

    Sharmake Farah #74: i ain’t reading all that. im happy for you tho, or sorry that happened.

    But also, this reversible computing thing won’t work. Unless someone manufactures an intrinsically self-correcting quantum memory, and then somehow figures out how to compute with it, each protected quantum bit will have a bunch of classical compute running behind it to keep it correct. And this is totally ignoring the large amount of control electronics. You can’t build a quantum computer without a bunch of classical computers running behind it.

  77. QMA(2) Says:

    Scott #73 Liked ” I should email Ryan O’Donnell for clarification on this.” Please enlighten us here on his feedback and your perspective. With bounded c and s he also says QMA_log(2) cannot contain NP which can be amplified to NEXP. So either way QMA_log(2) seems unlikely to contain NP.

  78. Scott Says:

    QMA(2) #77: No, I was trying to explain to you that with c=2/3 and s=1/3, there is really no known obstruction to QMAlog(2) containing NP and QMA(2) containing NEXP. Furthermore, Jeronimo and Wu have now shown that the nonnegative-amplitude versions of these classes do contain NP and NEXP respectively.

  79. Scott Says:

    Christopher #71: Well, I suppose one natural idea for how to “synergize” quantum computing and AI for drug discovery, would be to use a generative model to generate lots of drug candidates and quantum simulation to evaluate each given candidate, with the results of the simulation fed back into the AI.

  80. Scott Says:

    dennis #68: That D-Wave paper is actualy interesting! I’m glad D-Wave has moved on from the sort of stuff I took them to task for back around 2007-2013, to substantially more serious experiments.

    Having said that, there’s a crucial drawback of their new result, considered as a demonstration of “quantum advantage.” This is that

    (1) to the extent they’re able to verify the output of their simulation, it’s by comparing certain correlation functions against predictions that are easily calculated on a classical computer, whereas

    (2) to the extent there’s something in their experiment that’s intractable to simulate classically, we also don’t know how to verify that thing.

    (This isn’t even the first such case! The USTC BosonSampling experiments, to take an example, had a similar issue.)

    With quantum simulations, the ideal would be use a QC to calculate something that we can neither calculate nor verify on a classical computer in any reasonable time, but that can nevertheless be verified by comparing it against actual experimental reality. But I don’t think either D-Wave or anyone else has really done that yet.

  81. QMA(2) Says:

    Scott #78 Please see his next remark 3.3 https://www.cs.cmu.edu/~odonnell/quantum15/lecture25.pdf. He clearly states QMA_log(2) containing NP is unlikely with c=2/3 and s=1/3. So perhaps QMA_log^+(2) has too much power.

  82. Craig Gidney Says:

    Microsoft #74:

    > We agree that […] the result that matters most is the error rate per round of error correction.

    Good. This was the major thing that mattered. Everything else is minor disagreements.

    > The vast majority of rejections in our demonstrations are due to pre-rejection

    Right. This wasn’t clear from my comment, but I am totally on board with pre-rejections. As you note, retry-until-success is key to constructing efficient magic state injection, magic state distillation, and entanglement purification gadgets. As long retrying is fast, and the retry rate doesn’t stray into the high 90s, I say go nuts with it.

    (That said, this probably exacerbated the difference between first-round and per-round error rates in this particular experiment. But don’t avoid using a good technique just because it juices the number that matters less than the number that doesn’t.)

    > in a concatenated scheme, post-rejection at one level will largely be converted into pre-rejection at the next level.

    I think this needs to be backed up by a concrete construction. I agree that it’s possible for concatenation to turn detection into correction. For example, a recent paper I co-authored https://arxiv.org/abs/2312.04522 cut the expected size of surface code qubits by 2x, under the exact same hardware requirements as normal surface codes, by concatenating the simplest possible error detecting code over the surface code patches. But that relied crucially on the richness of soft information provided by the underlying surface codes, allowing us to pick the most suspicious one as the likely correction when the overlying code flagged a problem. That particular idea would break if I replaced the surface codes with a tiny code like a carbon code. So details are very much needed in order to estimate the detection-to-correction-ness of a concatenated system.

    > comparison to a single round of surface code syndrome extraction isn’t appropriate

    The “size of an operation” in the surface code is actually a very complicated issue; with different regimes based on the connectivity of your system, the number of magic state factories you have, and the reaction time of your control system. But I agree that under lattice surgery it is d rounds of surface code that is the most obvious choice of unit, and so 1 round is an unfair uncomparison.

    > we achieve a post-rejection rate of ~5% for rounds of error correction [..] The linked example [rejected] 60% of syndromes in the surface code experiments is not representative

    Yes, the postselection rate in your experiment is impressively low, whereas my analysis used a very high rejection rate. Only the improvement factor of 100x was similar.

  83. Scott Says:

    QMA(2) #81: He says it’s “unlikely,” but not because of any crazy consequence known to follow like 3SAT in subexponential time.

    What I keep trying to tell you is that opinions have changed recently, in large part because of the work by Jeronimo and Wu. It now seems like QMA(2)=NEXP might turn out like IP=PSPACE, or MIP=NEXP, or MIP*=RE: that is, “unlikely” and yet true!

  84. Scott McKuen Says:

    Anon #20: I saw a talk by Dr. Kathrin Spendier about the Quantinuum racetrack structure, and I *think* I kind of understood what was going on. Very roughly, there’s a groove that cuts across the racetrack – the laser is aimed through that groove. Apparently the ion speeds around the track can be controlled very finely, so if you need to entangle #6 with #17, you manipulate the fields to force those two ions to be at opposite sides of the track, each passing by one end of the laser-groove simultaneously. The other ions don’t change their order, they just get bunched up or spread out at different parts of the track enough to allow this arrangement to happen. When the two ions are in place, the laser fires at the right moment, hits both, and entangles them. So by manipulating the individual ion travel times, they can get any-to-any entanglement topology in their racetrack without changing the physical adjacency. I think they’ve already moved on to a second-generation form of this idea, which Scott alluded to above with the city-grid description.

  85. Wyrd Smythe Says:

    Just to chime in agreeing with what Dan Staley (#12) wrote, one study I read about a while back seemed to fine nothing detrimental about internet use in general (exploring Wikipedia can be quite educational, for instance, not to mention all the educational videos on YouTube), but what does seem to create issues is social media. Twitter, as I think you’ve found, is the enemy of peace and serenity.

    Obviously, playing outside with kids their own age is preferable, but that’s sadly not the world we live in anymore. Minecraft, from what I understand, is a bit like the various kinds of physical construction sets us old farts grew up with. They do foster imagination and the satisfaction of bringing you own designs to life.

  86. Prasanna Says:

    Scott #51

    The divergence in uncertainties about the future of AI and QC is due to following imho
    1. QC is theoretically well grounded both in QM and CS, which makes those outlandish claims refutable and even rule out in principle some of them.
    2. AI is so far exclusively empirical, so even experts diverge wildly on where or if a ceiling exists. The error bars are so high, it makes experts’ prognosis indistinguishable from that of the novice.

  87. Scott Says:

    Prasanna #86: Basically, yes.

    The line I’ve been using lately is, “for 20 years my job was to tell people, I’m sorry, but quantum computing won’t revolutionize your field the way you think it will, and if you give me an hour I can explain exactly why. But now I have a new job, which is to tell people, yes, generative AI probably will revolutionize your field, and I can’t explain why!”

  88. Sharmake Farah Says:

    Michael #76: The question now becomes, is it possible to make the quantum computer’s operations more self-contained such that error correction is done either by purely reversible computers or quantum computers themselves, and is it possible to make the entire process of how a quantum computer does it’s computing quantum mechanical so it doesn’t need any classical or irreversible components to work in theory?

    And if it can be possible to do this, at least in theory, is there a known path or at least strong reasons to assume that purely quantum computers without relying on any classical components for either it’s control electronics or error correction can scale well like how physicists believe that quantum computers can scale well in to the millions of qubits regime?

  89. Gil Kalai Says:

    1) Mark Spinelli (#60) This is a good question. I don’t know to what extent the safety of BB84 depends on quantum fault tolerance. Some years ago Or Sattath asked me this and related questions and we had some nice discussion also shortly with John Smolin. (John pointed out that in a world without quantum fault-tolerance also the attacker might have less power.) In any case, this is a very interesting problem even if quantum fault tolerance is around the corner.

    2) Scott (#61) my question regarding IBM was in connection to your question:

    “What is the state of experimental quantum computing, 42 years after Feynman’s lecture, 30 years after Shor’s algorithm, 25 years after I entered the field, 5 years after Google’s supremacy experiment?”

    I think that the situation with IBM quantum computers is quite important in understanding the state of experimental quantum computing today. (In several aspects IBM is ahead in experimental quantum computing.) Of course, my question is *mainly* directed to people who work with IBM quantum computers; either within IBM or outside IBM, and I hope that some such people who read your blog can give some information.

    The situation regarding IBM is important in its own right and it is also relevant to my study of the Google 2019 supremacy experiment. (To the best of my memory you also raised the IBM capabilities in the context of my concerns regarding the Google 2019 experiment, which is why I also directed the question to you.)

    3) Scott (#62)
    “It’s just that the skepticism isn’t interesting if it doesn’t deeply engage with the actual huge effort that’s now underway to do the thing being theorized about — an effort that’s recently produced programmable systems with hundreds of qubits and 99.9% fidelity 2-qubit gates, enabling circuits with tens of thousands of gates, and that’s also now started to demonstrate quantum error-correction that produces a net gain.”

    Let me make three points:

    a) Skepticism of quantum fault tolerance could be potentially interesting to draw conclusions for quantum physics even in a reality of “small island of quantum fault tolerance”. (And certainly when the reality will not allow such islands.)

    b) Personally, I am deeply engaged with the actual huge experimental efforts. I spent a lot of effort studying the Google 2019 quantum supremacy experiment. With Yosi Rinott and Tomer Shoham we have a project to study various statistical aspects of the Google 2019 experiment and we developed tools that could be relevant to other NISQ experiments. (Testing these tools for the recent Harvard/MIT/QuEra experiment gave a nice twist to our most recent paper and we look forward to more data from this experiment.)

    c) My overall judgement is still that quantum supremacy and quantum fault tolerance are not possible. (That said, I can enjoy theoretical and experimental advances in quantum computation, including those that go against my overall view and require taking my skeptical hat off.)

    4) Scott (#63) ” if scalable fault-tolerance is finally achieved, after thousands of people labored toward the goal for decades, and then the very same people who confidently said it would never be done turn around and declare that of course it was doable, so what was the point of going to all this effort to demonstrate it … I will fly into a rage the likes of which even regular Shtetl-Optimized readers have never seen. ”

    Talking about myself, when Google made their supremacy announcement I admitted that if true it will refute my argument. In the meanwhile the Google supremacy claims were largely refuted and there are concerns regarding other claims in this 2019 paper. So far, the various claims regarding quantum error-correction are not strong enough to refute my argument but they are interesting and they should be discussed (and for this Shtetl Optimized plays an important role) and scrutinized. Based on this trajectory I don’t think that I will “declare that of course it was doable”. (Also for me “confidently said” should probably be replaced with “cautiously said”.)

    Come to think of it, among the quantum computers skeptics I am aware of: Robert Alicki, Michel Dyakonov, Leonard Levin, Oded Goldreich, John Baez, Moshe Vardi, Robert Laughlin, Serge Haroche, Xavier Waintal, and quite a few others, I don’t think anyone will say “of course it was doable” and if somebody will, in those future moments of triumph and glory, why won’t you let him get away with it, Scott?

    5) Scott (main post) “Meanwhile, on paper, it looks like known methods for quantum fault-tolerance, for example using the surface code, should start to become practical once you have 2-qubit fidelities around ~99.99%—i.e., one more “9” from where we are now. And then there should “merely” be the practical difficulty of maintaining that 99.99% fidelity while you scale up to millions or hundreds of millions of physical qubits!”

    I agree with Scott that the crucial issue is if gate errors could be improved to 99.99% (or so; the precise value may depend on details of the noise behavior) to enable quantum fault tolerance. Scott (and many experts) think that the required threshold can be achieved and I proposed an argument for why it cannot be achieved.

    6) Scott (main post) “But, you know, maybe fault-tolerant quantum computing will not only work, but matter—and its use to design better batteries and drugs and photovoltaic cells and so on…”

    Actually, here I also agree with Scott. I agree that it will be pity if the efforts to reach quantum fault-tolerance will not be pursued; if quantum computers that can give huge computational advantage for some problems be built, it is reasonable to expect that there will be some computational advantage for many other purposes, not necessarily coming from algorithms that admit full mathematical analysis but rather from trial and error heuristics; just like for classical algorithms. And it is reasonable to expect applications for applied physics and chemistry.

  90. QMA(2) Says:

    Scott #83 QMA(2)=NEXP implies Q\sigma_2=Q\Pi_2 (https://complexityzoo.net/Complexity_Zoo:Q#qma2). Isn’t that crazy?

  91. Scott Says:

    QMA (2) #90: Do you have a reference for that implication, which I’d never seen before?

  92. QMA(2) Says:

    Scott #91 Zoo in https://complexityzoo.net/Complexity_Zoo:Q#qma2 provides this reference https://arxiv.org/abs/1805.11139

  93. Scott Says:

    QMA(2) #92: Ah, thanks for reminding me about a paper that I now vaguely remember! One way to interpret the paper would be to underscore just how little we understand the QPH (quantum polynomial hierarchy). But if for some reason, you had a strong intuition that the QPH should not suffer any drastic collapses, then yes, it looks like you then have to believe that QMA(2) is strictly contained in NEXP.

  94. QMA(2) Says:

    Scott #93 Thank you. I don’t think it is known QMA(2) even contains EXP. Would the techniques in the Jeronimo and Wu paper help establish this? Then it seems all the more plausible QMA(2) contains NEXP.

  95. Edan Maor Says:

    Scott #8:

    I have similar feelings about my kids and their iPad usage. One problem we’ve discovered is that our youngest one started being exposed to screens much younger than his siblings, because he saw them – which definitely feels sub-optimal.

    You write:

    > I more and more feel like the real solution was to have lived in a neighborhood full of kids playing outside, exploring, creating, etc. with no adult supervision, so our son could just join them and that would be even better than the iPad.

    I imagine not, but have you ever considered moving to Israel? There are lots of downsides to living here, but one upside is that there are more walkable-neighborhoods with lots of time to play with friends outside. (Though to be fair my kids still have not-insignificant screen time.)

    And totally off-topic to that – do you have anything recent about things you’ve learned at OpenAI, especially with regards to AI safety?

  96. Scott Says:

    Edan Maor #95:

    – Yes, we considered living in Israel. We even spent a sabbatical year there (2017-2018), where we got to experience the glories of a Tel Aviv neighborhood designed around kids, the way few urban neighborhoods in the US are any longer. One thing we didn’t find was a reasonable solution to our two-body academic jobs problem. Anyway, we plan to continue visiting every year (even hopefully this year) — two of the kids’ grandparents are there!

    – My most recent AI-related post was this one. More hopefully coming soon.

  97. Scott Says:

    QMA(2) #94: I can’t think of any Jeronimo-Wu-related approach that would put EXP into QMA(2) without putting all of NEXP into it, but conceivably!

  98. Vanessa Kosoy Says:

    Scott #96

    > One thing we didn’t find was a reasonable solution to our two-body academic jobs problem.

    Well, if you want a job doing AI safety research based in Israel then HMU, I might get one for you 🙂

  99. gentzen Says:

    fred #63:

    But all the tech mentioned here seems only comprehensible to the few people who are actually working on it, mostly fresh phd/post-docs who are laser focused on a particular realization of qubits… so, yea, that doesn’t make it any easier for anyone else to get a sense of what’s achievable (as opposed to the theory of writing an actual QC algorithm, which is obviously independent of the hardware details, ideally).

    I don’t think that the tech is so hard to comprehend. But of course, you have to invest the time to read some of the papers. Reading/understanding a 20-page paper takes more than a day in my experience, so you do need some reasons why you should invest that time. While Scott is on leave at OpenAI, investing that amount of time just to keep up-to-date with an adjacent field simply feels hard to justify.

    But even if you had invested the time to get a good understanding of what is currently achievable, how comfortable would you be to summarize the current state or discuss next steps? It is an adjacent field after all, so you are bound to make a mistake or two, or use language slightly different than the experts. For example, I personally was uncomfortable even asking some of my questions that came up after I read the following keynote address (a keynote is typically intended to be understood by a wider audience compared to technical presentations)
    https://quantumfrontiers.com/2023/12/09/crossing-the-quantum-chasm-from-nisq-to-fault-tolerance/

    But quantum error correction is not practical yet. To make use of it, we’ll need better two-qubit gate fidelities, many more physical qubits, robust systems to control those qubits, as well as the ability to perform fast and reliable mid-circuit measurements and qubit resets; all these are technically demanding goals.

    Do we need fast and reliable mid-circuit “measurements and qubit resets” or merely fast and reliable mid-circuit “measurements or qubit resets”?

    On the one hand, “qubit reset” by itself seems to be enough, as “suggested” for example in N. David Mermin “Quantum Computer Science – An Introduction”:
    https://www.physicsforums.com/threads/quantum-computation-and-entropy.1017112/post-6656488

    Note that both figures contain two ancilla qubits, which are both initialized to |0>. This known state is what allows them to absorb entropy from the main qubits. With measurements, the state of the ancilla qubits is known once again after they absorbed the entropy, and hence the ancillas are “ready” to absorb entropy once again. Without measurement, the ancillas still aborbed the entropy, but they cannot be reused yet.

    On the other hand, “intermediate mid-circuit measurement” by itself also seems to be enough, as “suggested” for example by this discussion between Scott and Craig Gidney:
    https://scottaaronson.blog/?p=7209#comment-1949025
    Craig Gidney #11

    Scott #5:

    I agree that the decomposition of a reset into an adaptive flip is still a clean qubit insertion. But I want to emphasize that I then removed the adaptive flips.

    I would describe the transformation as replacing low-entropy qubits (“clean qubits”) with high-mutual-information-vs-control-system qubits (“known qubits”).

    Scott #5:

    Craig Gidney #1: I would say that adaptive measurement in itself effectively amounts to the insertion of clean qubits, the way it’s used in QECC. But you’re right, the question was unclear as stated, and I wouldn’t use it again without changing it.

    Now you might say that my way of quoting is simply confusing, and that this justifies ignoring my questions (TLDR). (Or that I didn’t address the “recipient” of my question directly enough.) Or you might say that the difference between “and” and “or” is not so important anyway, and that I know the answer to my question already, so I only ask it because I am nit-picking. But what I actually don’t know is what is “common-knowledge,” what is “relevant,” and maybe even where I might have overlooked something or made some stupid mistake. But all this has nothing to do with the tech itsef being hard to comprehend.

  100. QMA(2) Says:

    Scott #97 QMA(2)=NEXP will be a nice result. A pretty good one for the past 5-10 years. Hope we can see this result soon.

  101. Concerned Says:

    The history of quantum computing experiment is one of the same milestones being crossed again and again, with a little less cheating each time. This is the “training wheels” model of surviving in the funding ecosystem. 🙂

  102. Scott Says:

    Concerned #10: If there’s a little less cheating each time, that sounds like progress, no? 🙂

  103. Russell Stutz Says:

    Anon #20, Scott #32 and McKuen #84: I wanted to quickly address some of the trapped ion QC scaling questions brought up here.

    McKuen, we actually do re-order the ions and place the two ions we want to perform an entangling gate on into the same potential well. This is done so that the shared motional modes can be used during the entangling operation. This is described in more detail in Pino et al, Nature 592, 209-213 (2021).

    Scaling the ion sorting as you increase the number of ions is one of the challenges, and for this you want to move beyond the linear sorting we are currently doing with ion chain rotations or swaps with something we call junction transport. Our team was able to demonstrate junction transports in test devices that were not only relatively fast but with very little motional excitation of the ions and even more impressively with not just single ions but with pairs of Yb-Ba ions. This is very enabling for future quantum computers. Of course, it takes some time to take things from demonstrations to commercially available quantum computers, but rest assured we are hard at work on doing just this. You will see Quantinuum continue to scale to larger qubit counts while working very hard to maintain and even improve on the qubit quality that we have been very focused on since our creation.

    For details on the junction transport work you can look at WC Burton, B Estey, IM Hoffman, AR Perry, C Volin, G Price, Physical Review Letters 130 (17), 173202 (2023).

    Huge thanks to Scott for the blog post and everyone for their feedback!

  104. Nick Drozd Says:

    Hey Scott, did you catch the eclipse? I was extremely fortunate to be able witness totality from my porch, and the sky was perfectly clear.

    Obviously the total eclipse is a pretty spectacular sight, but I thought that the partial eclipse was cooler than I had been led to believe. People say stuff like, even 90% coverage is barely noticeable. Maybe that’s the case for people who are standing outside to watch the whole thing. Their eyes adjust or whatever, plus they’re in a public park or some place where they wouldn’t normally be, and are therefore unfamiliar with the normal course of sunlight through the day. I was just in my house going about my usual daily routine, and starting about a half hour before totality I noticed it started to get irritatingly dark. For a second I thought, WTF, why is it so dark in here? Then of course I remembered what was happening.

    Partial eclipse is quite distinct from two other darkenings.

    1. Cloud cover. Things go dark when the sun is obscured by clouds. But when that happens, there are no shadows. With the partial eclipse, it’s dark outside and yet there are still shadows. I found this very weird.

    2. Sunset. Things go dark when the sun goes down, and eclipses are often described as a fast sunset. But when the sun sets, the sun is at the horizon, and long shadows are cast, and the light takes on a different quality. Whereas with the eclipse, it’s dark, but the shadows are the usual midday shadows, and the light is the usual midday light. It’s just darker, and so the world takes on a strange color profile.

    My favorite part of the whole event was the transition out of totality. I was looking at the grass and the trees and enjoying the darkness, and then in a split second things got just a little bit more colorful. I thought hey, it looks like sunrise, and that meant the un-eclipsing had begun.

  105. Areader Says:

    Gil Kalai (#89) “I don’t think anyone will say “of course it was doable” and if somebody will, in those future moments of triumph and glory, why won’t you let him get away with it, Scott?”

    Oh how I dislike this type of thinking. It’s one thing to acknowledge and realize ones mistake and say something along the lines of “I realize now it was of course doable and was obvious all along but I couldn’t see it back then” and it’s a whole different type of thing to be the insufferable type who completely reverses years of opinions and suddenly claims “I knew it all along!”. We shouldn’t be going out of our way shaming people with wrong opinions saying “I told you so” in their faces but we should absolutely not let them get away with that kind of behavior either.

    At minimum people need to own their beliefs, right or wrong, and have the decency to admit it if proven wrong. I care much less about someone making a wrong prediction, we all have at some point in time and we definitely all will again. But people who never admit to being wrong by virtue of altering their past claims a posteriori lose all credibility in my eyes and their behavior should be called out. Scott is right on this one.

  106. Scott Says:

    Nick Drozd #104: Absolutely yes, we did! We saw it from Buchanan Dam, in Texas hill country and smack in the center of totality, where friends invited us to their beautiful lake house for an eclipse party. Right up until the moment, the clouds threatened to block the sun, but luckily they cleared up enough to give us a spectacular 4-minute view of the main event. While I’d seen the 2017 eclipse, this was the first total eclipse that Dana and the kids and my parents had ever seen, and I’m so glad they got to experience it.

  107. QMA(2) Says:

    Will this news (https://blogs.microsoft.com/blog/2024/04/03/advancing-science-microsoft-and-quantinuum-demonstrate-the-most-reliable-logical-qubits-on-record-with-an-error-rate-800x-better-than-physical-qubits/) of better error correction help beat the 433 qubit QC (https://www.technologyreview.com/2023/05/25/1073606/ibm-wants-to-build-a-100000-qubit-quantum-computer/) in near future? How exactly does this breakthrough help in larger view of things?

  108. Johnny D Says:

    Does it matter that the gates in the Microsoft Quantinuum paper only include Clifford gates? Can you explain or point me to a source that explains how to perform a non-Clifford gate on a logical qubit?

  109. Scott Says:

    Johnny D #108: Yes, it matters. I believe they’re using a code where Clifford gates like CNOT can be done transversally (someone correct me if I’m wrong). Doing non-Clifford gates involves either doing a non-transversal operation and swallowing the hit, or (probably more realistically) using magic state factories. See eg the seminal paper by Bravyi and Kitaev from 20 years ago or my QIS II lecture notes.

  110. Sabee Says:

    Comment #93: Hi Scott and QMA(2). Take a look at Corollary 1.9 in the paper linked by QMA(2) (https://arxiv.org/pdf/1805.11139.pdf). The second levels of these hierarchies are already known to be equal. (A decent bit is known about QPH by now, which Justin and I tried to summarize in our recent paper (see, e.g., Figure 1 in https://arxiv.org/abs/2401.01453)).

  111. fred Says:

    gentzen #99

    thanks.

    as an example of my point, Scott linked in his post to this article
    https://arxiv.org/pdf/2312.17570.pdf

    and Scott wrote about this article:
    “[…] mashing together the real reasons why it’s difficult with basic misunderstandings of the fault-tolerance theorem”

    Then I looked up the author, who has 30 years of experience in related fields:

    “I am a condensed matter theoretical physicist with research interests in quantum nanoelectronics, spintronics and strongly correlated systems. I work at CEA Grenoble, France.
    I graduated from Polytechnique school, Paris area, in 1995 (X92), did a master of theoretical physics at Ecole Normale Superieure Paris, and moved on to the group of J-L Pichard in Saclay for my PhD (1999). I spent two years as a postdoc in P. Brouwer’s group in Cornell University, USA and was hired as a permanent researcher in CEA Saclay in 2002. In 2009, I moved to INAC, CEA Grenoble where I have remained since.”

    Okay, so either the topic is so dense than even ‘true’ experts can’t agree… begging the question – how big is the field of actual “experts” anyway, if the field is tiny, it could be because the subject matter is indeed dense and difficult… or the topic is so dense that it attracts a bunch of imposters (who may look competent from their resumes, but really are totally confused, or professional trolls whose reason to be in the field is to annoy Scott), who claim skepticism in a way that no-one really can outright negate with clear arguments, besides “somehow, progress is being made”.
    Either way, as in any field, it’s really up to the experts to put more effort to clarify things and put order in their own house, which is probably being done but takes time, because one needs experimental results to refine error-correction theory (correct modelization of noise, etc).

  112. Concerned Says:

    Scott #102

    Undoubtedly! I just wish that kind of impedance matching coupling between funders and scientists wasn’t necessary. Sometimes, it feels like almost all basic research is being done under the cover of “failed” direct-to-industry programs, the only type of proposal that can survive funding competition with other “doomed” (but secretly basic research) direct-to-industry programs in an industry-focused environment. That is why matching expectations with reality is so important, something you have contributed a lot to – it is the only way out of this nash equilibrium.

  113. QMA(2) Says:

    Another quantum breakthrough? https://eprint.iacr.org/2024/555.pdf

  114. Scott Says:

    QMA(2) #113: Yeah, someone sent me that earlier today. I asked two experts, but neither had any idea yet whether it’s correct. Give it time!

  115. QMA(2) Says:

    Just a comment. The paper says NIST standard LWE candidates are not broken because they have small (constant size) error modulus. I know Arora-Ge algorithm breaks LWE at constant size modulus (https://martinralbrecht.wordpress.com/2011/05/07/algorithms-for-lwe-and-the-approximate-gcd-problem-over-the-integers/). I am confused. Refer chapter 21 here https://people.csail.mit.edu/vinodv/CS294/lecturenotes.pdf. May be number of samples (to break proposed NIST LWE schemes) being polynomial is OK for NIST?

  116. gentzen Says:

    fred #111

    thanks.

    I have now read the Xavier Waintal 9-page paper (or rather 6 double column pages). Let me give my own opinion, and then compare it to Scott’s words. The section ‘The fundamental law of analog machines’ starts with

    At the root of this analysis is the simple observation that a quantum computer is an analog machine [3], i.e. its internal state is described by a large set of complex numbers that can vary continuously. This is in contrast to digital machines, like our desktop computers, whose internal states are described by bits that can take two (macroscopically distinct) values: zero and one.

    Citing “[3] M. Dyakonov, Prospects for quantum computing: extremely doubtful, International conference on Hot Topics in Physical Informatics , Arxiv:1401.3629 (2013)” is problematic, because Waintal is normally more reasonable in his scepticism than Dyakonov. (Which is probably also the reason why Scott linked to his paper at all. You see, even mixed publicity is publicity. And any publicity is better than none!) The claim that a quantum computer would be an analog machine is also problematic all by itself, but let’s first see how Waintal spins his “analog machine” claim in section ‘The “salvation is beyond the threshold” myth’ against the “common defense”:

    QEC is often summarized with statements of the form “We can anticipate that analog quantum simulators will eventually become obsolete. Because they are hard to control, they will be surpassed some day by digital quantum simulators, which can be firmly controlled using quantum error correction.”[23] or “so, once the desired threshold is attainable, decoherence will not be an obstacle to scalable quantum computation.”[24] In short, QEC would turn an analog machine into an essentially digital one. However, QEC is itself entailed with considerable difficulties that I now detail.

    My problem with this spin is that it mashes together the existence of a “discrete set of universal operations” (or rather the Solovay–Kitaev theorem that it is actually quite efficient) with the challenges of QEC.

    How does my opinion compares with Scott’s judgment of “basic misunderstandings of the fault-tolerance theorem”? OK, if it is “basic”, then “Mike and Ike” (ref [22] in Waintal’s paper) should already clarify it. And indeed, they do:

    10.6.2 Fault-tolerant quantum logic
    A key technique in the construction of fault-tolerant quantum circuits is the method of constructing fault-tolerant operations to do logic on encoded states. In Section 4.5.3 of Chapter 4 we learned that the Hadamard, phase, controlled-CNOT and π/8 gates form a universal set in terms of which any quantum computation may be expressed. We now explain how each of these gates can be implemented fault-tolerantly.

    So I agree with Scott that basics of fault-tolerance have been presented in a very dubious way. And trying to argue the question of whether quantum computers should be considered as analog or digital machines by analysing details of the operation of surface codes feels misguided to me. Describing it as “mashing together” unrelated stuff is basically correct, if you ask me.

    Should Scott, or anybody else (for example me) try to clarify with Waintal those objectionable points, either in private, or in a public blog-post, or in a blog-comment? If somebody feels like it, sure, he can go for it. Especially if he hopes to also “gain” something for himself by the invested time. If not, this is also fine. The issue of whether “even ‘true’ experts can’t agree” won’t significantly be affected by it.

  117. gentzen Says:

    I forgot one more quote from section ‘The “salvation is beyond the threshold” myth’, which illustrates even better how Waintal mashes together the Solovay–Kitaev theorem with details of the operation of surface codes:

    Performing this simple rotation in what some have called a “digital way” (that is with an arbitrary high control on its precision), one decomposes it into several nested levels of operations. At the first level of the Russian doll, …

  118. Viktor Dukhovni Says:

    QMA(2) #115: The authors state:

    Let us remark that the modulus-noise ratio achieved by our quantum algorithm is still too large to break the public-key encryption schemes based on (Ring)LWE used in practice. In particular, we have not broken the NIST PQC standardization candidates. For example, for CRYSTALS-Kyber [BDK+18], the error term is chosen from a small constant range, the modulus is q = 3329, the dimension is n = 256 · k, where k ∈ {3, 4, 5}, so we can think of q as being almost linear in n. For our algorithm, if we set αq ∈ O(1), then our algorithm applies when q ∈ ˜Ω(n^2), so we are not able to break CRYSTALS-Kyber yet. We leave the task of improving the approximation factor of our quantum algorithm to future work.

  119. fred Says:

    gentzen #116

    https://en.wikipedia.org/wiki/Quantum_logic_gate#Universal_quantum_gates

    “To solve this problem, we only require that any quantum operation can be approximated by a sequence of gates from this finite set.”

    The keyword is “approximated”.
    I guess this type of error is more fundamental than what QEC is dealing with (which is decoherence, just similar to what is done in classical electronics to deal with various channel noises), and the solution would be to just add more and more qubits and gates until the “desired” approximation is achieved… it’s a bit like a classical digital computer, where you also can’t do true continuous operations (i.e. to represent a real always entails some loss of precision), and you can just throw more bits and compute time to achieve better precision (except that in the case of the QC the problem of continuous -> discrete is more fundamental, at the level of the entire wave function/state)…
    but, with a QC, as soon as you say “just add more qubits and gates”, you’re now relying even more heavily on scaling, which relies on effective QEC…
    so, yea, the author is indeed mashing different difficulties together, but it’s not clear either whether all those challenges are cleanly separable without a full proper analysis.
    Just to use a basic analogy, it’s like the rocketry issue where to put more weight in orbit one could say “just add more fuel”, but you can’t say that because the weight of the fuel is itself part of the payload, and one can’t build an intuition whether it’s even possible to put a mass M into orbit assuming limitation of fuel chemistry and a certain escape velocity (it just happens to work on earth, but maybe classical rocketry isn’t achievable at all on a world with much higher gravity). Of course in this case everything can be correctly worked out using simple equations.
    Here, that’s like asking:
    has anyone ever done some computation of how many physical qubits/gates (at a certain fidelity) would be required to actually implement Shor’s algo (for a N bit number) in a way that takes into account all the approximations and errors? Is the computation always actually converging to a finite number of qubits, for all N and fidelity?

  120. fred Says:

    Even before rocketry, it’s not obvious a-priori that heavier than air powered flight was possible.
    It just happens that our universe and our earth are such that gravity, atmosphere characteristic (density, etc), combustion engine weight/power, fuel chemistry, material science (frame, its weight/strength) all come together to make it possible.
    And one could imagine that if a few of these were different, heavier than air powered flight would be impossible in practice even if it could work on paper…

    And clearly the Wright brothers didn’t work all the theory in detail to prove it was all feasible, they just looked at state of the art gliders and optimistically started to make it powered by tackling the various engineering challenges one by one, even though everything was linked (aerodynamics, chemistry, metallurgy,…)

  121. QMA(2) Says:

    Viktor Dukhovni #118 In ” for CRYSTALS-Kyber [BDK+18], the error term is chosen from a small constant range, the modulus is q = 3329, the dimension is n = 256 · k, where k ∈ {3, 4, 5}, so we can think of q as being almost linear in n”, it states “..the error term is chosen from a small constant range..”.. this means Arora-Ge attack should work there.. so the question of polynomial number of samples based attack existence is OK for NIST is still relevant?

    Also it is confusing to me because they state both “modulus-noise ratio is too high” to make attacks against NIST standard feasible and “error term is constant” in NIST standard … error term constant implies modulus-noise ratio is going to be high… so why their attack does not work against NIST standard? May be I am associating “noise” with “error” wrongly.

  122. QMA(2) Says:

    Usually if a problem is just a hard nut to tackle by just some further analysis without any new ideas or view point, people would have done it already. The new insight the author claims he is using is “Gaussian functions with complex variances”. Are the foundations and theory behind it much different from using just real variables or is it just a matter of doubling the real dimensions and working with it?

  123. Viktor Dukhovni Says:

    QMA(2) #121: This looks like a language barrier issue, the authors should have said that their algorithm *requires* (rather than achieves) a large-enough modulus-noise ratio. For the NIST algorithms the ratio is too small (because the modulus is too small, and the errors are O(1)).

    So, if the algorithm is actually sound, and if there’s room for improving it further, there’s some cause for concern, but as yet no immediate break.

  124. gentzen Says:

    fred #119: “so, yea, the author is indeed mashing different difficulties together, but it’s not clear either whether all those challenges are cleanly separable without a full proper analysis.”
    In my head, a quantum computer is a digital device. Therefore for me, this sort of analysis belong to the quantum algorithm itself, not to the analysis of the physical implementation of quantum computers. You see, Shor’s algorithm itself is digital too. It receives digital classical input, and produces digital classical output.

    To see how deeply this “a quantum computer is a digital device” picture is ingrained in my head, see for example my question

    I really liked the idea of a quantum computer directly reading out a quantum sensor without forcing a classical measurement first. But then I started to wonder whether it is really that simple, even on a conceptual level. A quantum computer is still a “digital” device, in a certain sense (also with respect to its time evolution). On the other hand, the raw quantum output of a quantum sensor probably still lives in continuous time, and also might still be “analog”. Discretizing the “analog” part should be no problem on a conceptual level, but I struggle to find the right abstraction level for the conversion from continuous time (because I want to avoid forcing a classical measurement).

    and compare it to Craig Gidney’s science fiction version of a similar idea

    To the best of our knowledge, our world is quantum mechanical. Objects have properties (position, charge, mass, spin, everything) that, at a fundamental level, are described using quantum information. Which is different from normal information, and can’t be handled by normal computers. In fact, in the entire history of humanity, we’ve never had a way to reliably store quantum information, or transmit quantum information, or manipulate quantum information. We can glimpse it, poke it, predict it, but we’ve never truly held it.

    Quantum computers fix that. They can store quantum information. Transmit it. Manipulate it. Truly hold it.

    … When I picture a world steeped in quantum computation, what I see is…

    Telescopes applying perfect software mirrors to transduced light, because it’s cheaper than polishing glass. Quants pumping Bell inequalities out of the financial system. An Einstein-certified /dev/random. Elitzur-Vaidman microscopes counterfactually probing at sensitive samples. …

  125. fred Says:

    Maybe a more relevant question is whether there’s enough progress that someone like Scott will be able to see a true QC within his lifetime.
    It’s my intuition that the current evangelists and skeptics will all be long dead before it happens.
    Once technology development spans multiple generations, there are fewer and fewer skeptics – because every technology always ends up being taken for granted, eventually.
    Like, we don’t still walk around telling one another
    “Shit, man, electricity!…”
    “I know! Amazing!”.
    And no one still claims that ‘640KB is all the memory anybody would ever need on a computer.’
    On the other hand, my grandma, who was born in 1910, always found the idea that we actually landed on the moon to be totally preposterous (not that she didn’t believe it was real, but she just couldn’t get over it).

  126. Scott Says:

    Well, on one side we’ve got the thousands of people in Santa Barbara and Yorktown Heights and Boston and Toronto and Broomfield in their labs that I’ve toured, with their dilution refrigerators and their ion traps and lasers, and their detailed multi-billion-dollar plans to start making this in a reality over the next decade … and on the other side we’ve got the intuitions of Fred in my blog comment section … and hopefully we’ll all at least live long enough to find out which one was right! 😀

  127. fred Says:

    Scott #126

    So, when do you and all those people think we’ll get to thousands of logical quibts and then a million logical qubits?

    Btw, just because there’s been a recent influx of investment, mainly from ill-placed hype (premature claims of demonstration of quantum supremacy or Wall Street totally miss-understanding what the tech can actually achieve) doesn’t mean that it’s gonna stay this way.
    By having one foot in QC and one foot in AI, you really got all the bases covered… maybe you should also do a stint in nuclear fusion (which is always a mere 30 years away) 😀

  128. Scott Says:

    Fred #127: Estimates vary wildly depending on who you ask (and, of course, on exactly how you specify the goal … a thousand logical qubits are pretty easy if you don’t need to compute anything with them! 🙂 ). I tend to be on the conservative end, with no financial stake whatsoever in any aggressive prediction. But “we’ll all be long dead before it happens” strikes me as more pessimistic than almost anyone in the field would go, given that 30 years sufficed to take us from nothing to hundreds of qubits with 99.9%-fidelity programmable 2-qubit gates and the beginnings of quantum error-correction, which seems pretty clearly like “more than halfway there.”

  129. fred Says:

    Btw, I’ve seen in my lifetime countless technologies (way way more trivial than QC and way better understood than QC) being hyped as the next big thing and never amounted to anything, even when people in the field were really excited and lots of phd thesis were being written… because it’s always cheaper to incrementally upgrade older/existing stuff than entirely ditch things for an entirely new concept. And all hype bubbles do burst at some point and in the end it’s always a question of return on investment. But at least that created interesting careers for a few thousand people in a given generation. That’s how progress works after all, we have to try things.

  130. Scott Says:

    fred #129: Whether something will be “the next big thing” is a totally different question from whether it’s possible. Often things that are possible are simply better done a different way, or are judged to be too dangerous or whatever. QC is somewhat special in that, if your goal is to factor huge numbers or simulate arbitrary quantum systems efficiently, then so far as we know QC is the only way to do it consistent with the laws of physics.

    Do you have examples of technologies “way more trivial than QC” that ended up being impossible in our lifetimes, despite the lack of any other way to achieve the same goal?

  131. fred Says:

    Scott #130

    my point was about technologies that were all “possible” and even demonstrated.
    They just didn’t got adopted or survived in the end because of the economics, politics, new thing being hyped as replacements, etc.

    On the top of my head, some examples of technologies that were possible (but somewhat still challenging), happened, had no equivalent replacements, and still died:

    – The Concorde. There’s just no other way to travel from London to NYC in under 3 hours (unless you join the air force).

    – Nuclear power in Germany (and in other Western European countries, to a less spectacular degree). Instead Germany went back to coal (it’s hard to argue that it’s a good replacement for nuclear energy) and Russian gas, and things are way more complicated now with the Ukraine war (electricity prices have spiked in Europe).
    https://www.dw.com/en/the-battle-for-villages-and-forests-in-germanys-coal-country/a-39964913

    – The NASA moon program, and the space shuttle. One only happened because of a pissing contest with the USSR, the other touted to make space access cheap and repeatable. There’s no replacement for going into space other than actually going into space, so it’s bound to eventually happen again, but maybe with a focus on unmanned methods. One problem here is that the market for space is tiny but crucial – there’s a demand to be able to put satellites into orbit, but beyond that there’s no market for going to the Moon or Mars, etc. So it’s kind of a sexy/inspiring solution looking for problems to solve, which is not dissimilar to QC.

  132. Scott Says:

    fred #131: Since, by your own admission, the Concorde, nuclear energy, and humans in space all did turn out to be possible, and have struggled entirely for other reasons (too expensive, too loud, not useful enough, government bureaucracy, justified or unjustified safety fears…), I fail to see any relevance to a discussion of whether or not scalable QC will be possible in our lifetimes.

  133. fred Says:

    Scott #132

    it’s *you* you brought up the fact that *because* thousands of people currently work on QC, with billions invested (mostly by internet companies which basically are printing money, and can turn on their research focus on a dime, e.g. Google buying Boston Dynamics and then ditching it, etc) that clearly must be a sign that it’s all gonna become a reality in the next decade.
    I’m just pointing out that the same has been true of far more trivial engineering feats, that were deemed just as important, with no replacement, actually completed… and many of these just died out, as a matter of economics, return on investment, politics, etc.

    Then take fusion energy project, ITER (the main fusion project, but many other smaller ones exist).
    Started in 1978. Total cost is 65B$.

    According to the wiki, two main scientists are behind the birth of the project:

    “the ITER project was gaining momentum in political circles due to the quiet work being done by two physicists, the American scientist Alvin Trivelpiece who served as Director of the Office of Energy Research in the 1980s and the Russian scientist Evgeny Velikhov”

    Alvin Trivelpiece died in 2022, and Velikhov is 89 years old.
    45 years later, still no completion, and that’s still just a prototype.

    So your argument that thousands currently working on QC is clear evidence for completion (millions of logical qubits) within your lifetime is weak… that’s all I’m saying.

  134. gentzen Says:

    fred #125: “Maybe a more relevant question is whether there’s enough progress that someone like Scott will be able to see a true QC within his lifetime.
    It’s my intuition that the current evangelists and skeptics will all be long dead before it happens.”

    I believe that “Scott will be able to see a true QC within his lifetime”. However, I believe this will happen because Scott is well aware of the importance to define what “true QC” means for him ahead of time. I believe that there won’t be enough progress to break RSA, but enough progress to see QCs doing useful stuff that cannot be done in any way without them:

    In addition, he is a firm believer in the impact of quantum computers and claims that “they are the future and will be fully integrated on a large scale in 20-30 years.”

    My personal belief is that Craig Gidney is spot-on that the “The mundane uses of quantum computers” will turn out to be the important part:
    https://algassert.com/post/2300

    My personal belief is also that quantum computers won’t crack RSA, not even in 20-30 years. Or if they indeed crack RSA, then there is also a polynomial or quasi-polynomial classical algorithm doing the same.

    Scott #128: “I tend to be on the conservative end, with no financial stake whatsoever in any aggressive prediction.”

    It is more complicated for me (but maybe for you too). I work in semiconductor manufacturing, but got feed-up with technologies like 157 nm lithography, nanoimprint lithography, REBL (Reflective Electron Beam Lithography), 5keV ebeam direct write, DSA (directed self assembly) being developed (with huge effort), only to be killed exactly in the moment when they are ready for mass production prime time. My guess it that it mappens, because that is the moment where economics enters the equation. (157 nm lithography getting killed is what actually enabled EUV to succeed, because ASML and Zeiss had learned their lesson, and requested investments from Intel, TSMC, Samsung and other end-customers already a long time before the technology was ready for prime time.) So I switched to something adjacent, where research institutes and machine manufactures are the main customers. Accidentally, we enormously benefited from QC research, because that increased the demand for electron beam writers dramatically.

    So I have interest for there to be really useful applications of QC. But I have no interest at all in there being a possibility that QC might break RSA, because that would hurt us in in terms of export restrictions (and bad consciousness). Actually, those export restrictions already happened, but not in an open or transparent way, or even in a consistently strict way. Imagine that you decided to remove all those contracts from your books that take forever to get approved or rejected by export control. But then suddenly one such contract gets approved. Should you now have a bad consciousness, if you decide to go forward with that contract? How to react if you did go forward, and some Japanese competitor launches articles in German/European newspapers attacking you and your government?

  135. fred Says:

    One more thing QC and Fusion share is that they can’t produce a return on investment until they’re pretty much totally completed.
    Net energy production for fusion, and sufficient numbers of qubits for QC to implement the few useful quantum algorithm and beat classical machines ( > 1million logical qubits to break RSA, as an example).
    That’s a sharp contrast to classical computers which have been practical all along (e.g. the WW2 Colossus code breaking computer), i.e. any extra investment directly translates into immediate extra return, creating a tight loop to sustain geometric scaling for decades (Moore’s law).

    QC has the extra challenge that, at the moment, there aren’t very well defined applications for it (besides breaking RSA, which is the only clear application that’d showcase a true colossal “win” of the technology, in terms that even Wall Street can digest).

    Evidence of this is Google offering 5M$ to whoever comes up with application ideas:

    https://qz.com/google-competition-millions-quantum-computing-1851310464

    “While there are many reasons to be optimistic about the potential of quantum computing, we’re still somewhat in the dark about the full scope of how, when, and for which real-world problems this technology will prove most transformative. We hope launching this prize will help to shed light on these questions.”

    it must(?) be the case that even 100 perfect logical qubits would be super interesting to model some quantum systems.

  136. QMA(2) Says:

    If the algorithm works, it is in the range of NP\cap coNP problems. There seems no ‘complexity barrier’ to bring it to breaking the range needed for deployable LWE. Is there one?

  137. Quantump P Says:

    The advantage of quantum computing may well emerge in its long-term cost-effectiveness. Even if quantum computers offer similar computational capabilities as classical computers or machine learning models, they could ultimately be less expensive to operate and more energy efficient when used to solve certain problems or simulate various physical phenomena.

    That’d be an interesting outcome.

  138. fred Says:

    Scott #62

    “At this point, I feel like skepticism of fault-tolerant QC is a little bit like skepticism of manned moon missions in the mid-1960s. “

    Why turn back to the 60s when you can actually look at the current state of Artemis? 😀
    Going to the moon should be a slam dunk compared to building 1 million qubits, and, yet, rocket science continues to regress…

    Nasa Artemis program:
    US$93+ billion (2012–2025), $53 billion in 2021-2025, many of which sunk into each failing attempt of SpaceX’s Spaceship launch test
    Total number of contributors: 30,000

  139. Scott Says:

    fred #138: Something that’s indisputably possible—indeed, so possible that it’s already been done—can nevertheless flail around for years because it’s extremely expensive, and the government is involved, and there’s no compelling reason to do it in the first place other than the coolness of doing it (and the coolness is a lot less if it’s already been done).

    Nevertheless, for anyone paying attention, the cost of launching a kilogram into space has come down massively since the Apollo era, and that really does change the economics. Regardless of its wisdom, I’ll give better-than-even odds that Artemis does eventually succeed at landing a diverse crew of humans on the moon, and at lower cost in constant dollars than Apollo (for whatever that’s worth).

  140. Oh Lordy Can't use my Real First Name Says:

    Sorry if this has already been covered (didn’t find this in the scrollback), but what is the sense of whether this result claiming to knock over IBM’s quantum advantage claim on the trotterized transverse Ising model is legit?

    https://twitter.com/dulwichquantum/status/1779615161822720199?s=46&t=b0bihnLgTQJV7D3UelIhKw

    Asking for a friend!

  141. fred Says:

    Scott #139

    “the cost of launching a kilogram into space has come down massively since the Apollo era”

    https://futuretimeline.net/data-trends/6.htm

    (I note they put in there “space elevator”, which was very hyped 20 years ago, with an influx of investment/research, but never came to fruition… this tech is more similar to QC in terms of challenging engineering)

    Clearly the space shuttle cost per kg was 2 orders of magnitude higher, but it’s not surprising considering that it was always taking along 6 astronauts (with all the life support systems), which was totally overkill most of the time except when its flexibility was needed for something like the fixing the Hubble telescope.
    As highlited in the video, the real problem here is the actual goals of going back to the moon.
    To be fair, even though Apollo was a success, it’s not like the system could be used to actually build a permanent moon base (because it was designed to be as reliable as possible and the the design was based on lowest weight lander).
    That said, it’s no excuse for giving SpaceX a carte blanche, and just base the timeline on whatever Musk is claiming, without really working out the basics (engine reliability, actual payload, etc), with an over-emphasis on design by error (especially when each trial costs 4B$).

  142. Scott Says:

    fred #141: Here is a more serious chart. Cost per kg was $5400 for the Saturn V, now down to $1500 for the Falcon Heavy. There’s no Moore’s Law trend like we’ve come to expect for computing, communications, and genomics, but there’s certainly been a decrease.

  143. fred Says:

    Scott #142
    “Cost per kg was $5400 for the Saturn V, now down to $1500 for the Falcon Heavy.”

    Thanks for the graph, interesting.
    Besides obvious progress in material science and fabrication, another thing is that Saturn V was specifically designed to be part of an entire system focusing on putting a couple of humans on the surface of the moon, and cost per kg was irrelevant.
    While Falcon Heavy was designed to be as cheap as possible (reusable, more versatile).
    So a cut by 4.15x isn’t super dramatic.

    Falcon Heavy was never used beyond low orbit, Spaceship is supposed to replace it.
    SpaceX/Musk claimed that they will reduce the cost of Spaceship by so much (orders of magnitude) that the system will replace airliners for long point-to-point flights on earth, with multiple flights a day per rocket… which seems totally preposterous.
    It takes long hours just to fuel up Spaceship and airfare price has hardly changed when taking inflation into account, showing that it’s all limited by fuel cost and savings on fuel (airliners do this by flying very efficiently at 30,000 feet).
    “In the 1960s, the average domestic ticket from the Dallas area could cost around $48 – a small price for a trip across the US. Adjusted for inflation, $48 in the early 1960s totals to about $470 today, roughly in line with the fares offered by many domestic airlines. Again, a domestic flight from Chicago O’Hare International Airport (ORD) in 1963 cost $43 on average. That figure is about $416 in 2022. In 2015, the average domestic flight price out of ORD was $360.”
    Anyway, the point is that Musk has been making so many outrageous claims that one can no longer keep up with what’s genuine vs what’s total BS, and NASA seems to also be victim of this.

  144. Mikhail Dyakonov Says:

    It is exactly 30 years since the invention of the famous Shor’s factoring algorithm, which has triggered the whole QC business. However, factoring the number 15=3×5 by Shor is still not possible. Doesn’t this tell us something about the prospects of quantum computing? Will factoring 15 by a quantum computer become possible during the next 30 years?

  145. Scott Says:

    Mikhail Dyakonov #144: No, you’re mistaken. Factoring 15=3×5 and 21=3×7 with Shor have been possible for a long time (and were done). Factoring larger numbers with Shor is possible too now that we have 99.9%-fidelity 2-qubit gates, but not especially interesting, because of the obvious limits on how far you can scale without fault-tolerance. But of course, this year also saw the first demonstrations of various kinds of net gain from fault-tolerance. Things have progressed enough that I’m willing to bet $500, if you’d like, on a fault-tolerant Shor’s algorithm being used to factor an integer of at least 32 bits within the next decade.

  146. Mikhail Dyakonov Says:

    Scott #145
    A bit of reality on Factoring 15

    The first experimental factoring of 15 by Shor was reported by Vandersypen et al (2001) using the liquid NMR technique. The obtained NMR spectra corresponded very well to the predictions of Shor’s procedure.
    Lanyon et al. (2007) performed the same task in an optical experiment using the polarization of 4 photons, while Lucero et al. (2012) used Josephson qubits: “…we run a three-qubit compiled version of Shor’s algorithm to factor the number 15, and successfully find the prime factors 48% of the time”.
    BUT: in those experiments the so-called COMPILED version of the Shor’s algorithm was used. As it was shown by Beckman et al (1996), the full algorithm can factor a k-bit number using 72k^3 elementary quantum gates; thus honest factoring 15 by Shor REQUIRES 4608 GATES OPERATING ON 21 QUBITS.

    Recognizing that those requirements are well beyond the existing (then, but also RIGHT NOW) experimental possibilities, Beckman et al. introduced a compiling technique which exploits properties of the number to be factored, allowing exploration of Shor’s algorithm with a vastly reduced number of resources.

    One might say that this is a sort of (innocent?) cheating: knowing in advance that 15 = 3 × 5, we can take some shortcuts, which would not be possible if the result were not known beforehand. All the existing experimental testing of Shor’s algorithm use this simplified approach.
    Honest factoring 15 by using the full Shor’s algorithm is still well beyond the reach of experimental possibilities (as I have predicted in 2001).

    Thus it seems that for the foreseeable future we need not worry about the security of cryptography codes based on the difficulty in factoring very large numbers, which are products of hundred-digit primes.

    See (get free copy at) : https://www.researchgate.net/publication/340940709_Will_We_Ever_Have_a_Quantum_Computer

  147. gentzen Says:

    #Mikhail Dyakonov #146: Let me quote from your book:

    The first experiment reporting factoring 15 by Shor was reported by Vandersypen et al. [57] using the liquid nuclear magnetic resonance (NMR) technique. All the gates were implemented by microwave pulses applied within about 1 s, which is less than the nuclear decoherence time. The obtained NMR spectra corresponded very well to the predictions of Shor’s procedure.

    Lanyon et al. [58] performed the same task in an optical experiment using the polarization of 4 photons, while Lucero et al. [59] used Josephson qubits: “…we run a three-qubit compiled version of Shor’s algorithm to factor the number 15, and successfully find the prime factors 48% of the time”.
    […]
    A similar problem was put forward by Levin [9]:
    QC proponents often say they win either way, by making a working QC or by finding a correction to Quantum Mechanics. … Consider, however, this scenario. With few q-bits, QC is eventually made to work. The progress stops, though, long before QC factoring starts competing with pencils. The QC people then demand some noble prize for the correction to the Quantum Mechanics.

    I see now that you quote the same start from your book, only that you replaced [57] by (2001), [58] by (2007), and [59] by (2012). But I want to highlight something else, namely: After I read (in depth) chapter 7 “Quantum computers: physical realization” from Nielsen and Chuan (https://scottaaronson.blog/?p=4182#comment-1811586), it was obvious to me that liquid NMR cannot allow scalable QC. The chapter actually explicitly admitted that all examples it introduced are obviously unable to allow scalable quantum computing. I cannot personally judge the viability or scalability of “optical experiments”, but I later learned that there had been a controversy about liquid NMR constituted “real QC”. Of course, progress of liquid NMR QC has long stopped, but nobody cared or was able to resolve that controversy. But that should have been a “doable” task, much easier than to explain why scalable QC might run into trouble, and still doing it rigorous might have given us valuable insights. For me, this undermines the “QC proponents often say they win either way” argument in a practical sense in our current academic reality.

    It looks to me as if QC has already factored many numbers, with various amouts of cheating:
    https://quantumcomputing.stackexchange.com/questions/2111/what-integers-have-been-factored-with-shors-algorithm
    https://crypto.stackexchange.com/questions/59795/largest-integer-factored-by-shors-algorithm
    https://quantumcomputing.stackexchange.com/questions/37782/practical-implementation-of-shor-and-other-factoring-algorithms
    https://quantumcomputing.stackexchange.com/questions/14340/what-is-a-maximal-number-factored-by-shors-algorithm-so-far
    https://quantumcomputing.stackexchange.com/questions/9204/the-algorithm-of-the-new-quantum-factoring-record-1-099-551-473-989

  148. Scott Says:

    Mikhail Dyakonov #146: It’s true that there’s a philosophical question of what it means to factor 15 “as if you didn’t already know the answer,” when of course you can’t not already know the answer.

    That said, 4608 gates operating on 21 qubits can absolutely be done now, if Quantinuum or Quera care enough. Within the past year they’ve demonstrated all-to-all 2-qubit gates with 99.9% fidelity, which suffices for that.

    Anyway, I’m sufficiently optimistic about where things are going that I don’t feel the need to argue with skeptics that I once did. I’ve put some money on the table, which you haven’t, and other than that I’m content to let the experimentalists make the argument for me.

Leave a Reply

You can use rich HTML in comments! You can also use basic TeX, by enclosing it within $$ $$ for displayed equations or \( \) for inline equations.

Comment Policies:

  1. All comments are placed in moderation and reviewed prior to appearing.
  2. You'll also be sent a verification email to the email address you provided.
    YOU MUST CLICK THE LINK IN YOUR VERIFICATION EMAIL BEFORE YOUR COMMENT CAN APPEAR. WHY IS THIS BOLD, UNDERLINED, ALL-CAPS, AND IN RED? BECAUSE PEOPLE ARE STILL FORGETTING TO DO IT.
  3. This comment section is not a free speech zone. It's my, Scott Aaronson's, virtual living room. Commenters are expected not to say anything they wouldn't say in my actual living room. This means: No trolling. No ad-hominems against me or others. No presumptuous requests (e.g. to respond to a long paper or article). No conspiracy theories. No patronizing me. Comments violating these policies may be left in moderation with no explanation or apology.
  4. Whenever I'm in doubt, I'll forward comments to Shtetl-Optimized Committee of Guardians, and respect SOCG's judgments on whether those comments should appear.
  5. I sometimes accidentally miss perfectly reasonable comments in the moderation queue, or they get caught in the spam filter. If you feel this may have been the case with your comment, shoot me an email.