When the Physicist and the Computer Scientist Walk Into a Quantum Bar

 

SAS Innovate Pre-conference session

A Technocrat’s Discernment, originally drafted in part on Quantum Day 2026. Then I attended the SAS Innovate 2026 pre-conference workshop and ended up reframing the article around two quotes I saw there.

I sat in a dimly lit conference room at SAS Innovate this week, half awake from the keynote coffee, when a slide appeared on the screen that quietly forced me to put my notebook down. Two quotes, side by side, separated by a thin red line and about thirty-seven years of intellectual history.

David Deutsch, in 2011, insisting that the theory of computation has been mistakenly treated as a topic in pure mathematics, that computers are physical objects, and that what they can or cannot compute is determined by the laws of physics alone, not by mathematics.

And then below it, Donald Knuth in 1974, claiming the opposite with an almost mischievous calm. Computer science, he said, is somewhat different from the other sciences because it deals with artificial laws which can be proved, not natural laws which are never known with certainty.

So which one is right. The physicist who built much of the conceptual scaffolding for quantum computing, or the computer scientist who wrote the book most of us never finished but proudly own all volumes of.

I want to argue, in this post-Quantum Day 2026 reflection, that they are both right, and that the apparent contradiction is exactly where the practical question of what quantum computing can do for P and NP problems lives.

The disagreement is not what it looks like

At first glance Knuth seems to be saying that computer science is a kind of formal game, like chess, where the rules are stipulated and theorems are proved within those rules. Deutsch seems to be saying that this is naive, that the rules themselves come from physics and we don’t get to choose them.

But Knuth was writing in 1974, when the dominant model of computation was the Turing machine, and the Turing machine was assumed to be universal in a strong sense. Anything physically computable, the thinking went, could be reduced to a Turing machine. So if you proved something within the formalism, you had proved it about reality, full stop. The artificial laws were a clean abstraction over the natural ones.

Deutsch came along and said, hold on. The Church–Turing thesis is a physical statement, not a mathematical one. You cannot prove it from inside the formalism, because the formalism itself is a description of a physical process happening in a physical machine. And once you take that seriously, you realize that nature might offer computational resources that classical Turing machines cannot simulate efficiently.

That last word matters. Efficiently. Not in principle, but in practice. And this is where the rubber meets the road for those of us who actually have to sell GPU capacity to customers who want to know whether their drug discovery pipeline will finish before the patent expires.

P, NP, and the great misunderstanding

There is a popular fantasy, repeated in many otherwise serious articles, that quantum computers will solve NP-complete problems. They will not. At least not in the sense that most people mean.

Let me say this plainly, because the confusion is everywhere. Shor’s algorithm factors integers in polynomial time on a quantum computer, but integer factorization is not known to be NP-complete. It sits in a strange middle territory called NP-intermediate, suspected to be neither in P nor NP-complete. Grover’s algorithm gives a quadratic speedup on unstructured search, which means a problem that takes 2^n classical operations takes roughly 2^(n/2) quantum operations. That is impressive, but it does not collapse exponential into polynomial.
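To make that concrete, here is a back-of-the-envelope sketch in Python. The machine speed is an illustrative assumption (one billion operations per second for both the classical and the quantum machine, which is generous to the quantum side), not a benchmark; the point is only that a square root of an exponential is still an exponential.

```python
# Back-of-the-envelope: Grover's quadratic speedup does not turn
# exponential search into polynomial search.
# Assumption (illustrative only): 10^9 operations per second
# on both the classical and the quantum machine.

OPS_PER_SECOND = 1e9
SECONDS_PER_YEAR = 3.15e7

def years(ops: float) -> float:
    """Wall-clock years needed to perform `ops` operations."""
    return ops / OPS_PER_SECOND / SECONDS_PER_YEAR

for n in (40, 80, 160):
    classical = 2.0 ** n        # brute-force search: 2^n operations
    grover = 2.0 ** (n / 2)     # Grover: roughly 2^(n/2) operations
    print(f"n={n:4d}  classical={years(classical):10.3e} yr  "
          f"grover={years(grover):10.3e} yr")
```

At n = 160, the Grover side still needs tens of millions of years under these assumptions. Quadratic helps; it does not rescue you from an exponential.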

The actual complexity class that quantum computers efficiently solve is called BQP, bounded error quantum polynomial time. It contains P. It is suspected to contain some problems outside P. It is not known to contain NP, and most complexity theorists doubt that it does.

So if your customer is hoping to use quantum computers to solve traveling salesman exactly for ten thousand cities, they will be waiting a long time. Possibly forever.

Where Deutsch wins

And yet Deutsch is right that physics matters. The set of problems we can solve efficiently does depend on the physical substrate of our computer. BQP is widely believed to be strictly larger than P, even if not by as much as the science fiction crowd imagines. The quadratic speedup of Grover, applied to enough relevant subroutines, can be the difference between a calculation that finishes overnight and one that finishes after the heat death of the universe.

For specific structured problems, the speedups are dramatic. Simulating quantum chemistry, for instance, is in BQP and probably not in P. This is the actual commercial promise of quantum computing, and it is large enough to justify the billions being spent on it. Materials science, catalyst design, certain kinds of optimization with structure that quantum algorithms can exploit. These are the wins.

The error correction problem is real, and we are still pushing through it. I spent most of last quarter benchmarking quantum error correction thresholds with surface codes on classical GPUs, ironically enough, simulating the very thing that is supposed to make classical GPUs obsolete for certain workloads. The threshold is around one percent physical error rate for the surface code. We are getting there. Slowly, expensively, but we are getting there.
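The threshold behavior mentioned above can be sketched with the standard phenomenological scaling law for the surface code, in which the logical error rate goes roughly as (p / p_th) raised to the power (d + 1)/2 for code distance d. This is a sketch, not a benchmark: the prefactor A = 0.1 is a ballpark constant for illustration, and p_th = 1% matches the rough threshold cited above.

```python
# Phenomenological scaling of the surface-code logical error rate:
#   p_L ~ A * (p / p_th) ** ((d + 1) / 2)
# where p is the physical error rate and d the code distance.
# A = 0.1 is an illustrative prefactor, not a measured constant.

A = 0.1
P_TH = 0.01  # ~1% threshold for the surface code

def logical_error_rate(p: float, d: int) -> float:
    """Estimated logical error rate for physical rate p, distance d."""
    return A * (p / P_TH) ** ((d + 1) // 2)

# Below threshold, growing the code suppresses errors; above it,
# bigger codes make things worse.
for p in (0.005, 0.02):
    rates = [logical_error_rate(p, d) for d in (3, 5, 7, 9)]
    trend = "improves" if rates[-1] < rates[0] else "worsens"
    print(f"p={p}: d=3..9 -> {rates}  ({trend} with distance)")
```

That sign flip at the threshold is the entire economic argument for driving physical error rates below one percent before scaling up qubit counts.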

Where Knuth wins

Knuth’s point survives the quantum revolution surprisingly well, because the artificial laws of computer science are not the laws of physics; they are the laws we have chosen to impose on top of physics in order to reason about it. Complexity classes are defined in terms of abstract machines. Reductions between problems are mathematical objects. The proof that 3-SAT is NP-complete does not change when you switch from silicon to superconducting qubits to whatever Microsoft is doing with topological states this month.
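The hardware-independence of that mathematics is easy to see in miniature. A 3-SAT certificate can be verified in polynomial time, and the verifier below is the same mathematical object whether it runs on silicon or qubits; what is believed to be hard is finding the certificate, not checking it. The encoding here (a clause as a list of signed literals) is my own illustrative choice.

```python
# NP in one picture: verifying a 3-SAT certificate is cheap
# (linear in the formula size), even though finding one is
# believed to require exponential time in the worst case.
from typing import Dict, List, Tuple

# A literal is (variable_name, is_positive); a clause is three literals.
Clause = List[Tuple[str, bool]]

def verify_3sat(clauses: List[Clause], assignment: Dict[str, bool]) -> bool:
    """Check a candidate assignment against every clause: O(len(clauses))."""
    return all(
        any(assignment[var] == positive for var, positive in clause)
        for clause in clauses
    )

# (x or y or not z) and (not x or y or z)
formula = [
    [("x", True), ("y", True), ("z", False)],
    [("x", False), ("y", True), ("z", True)],
]
print(verify_3sat(formula, {"x": True, "y": True, "z": False}))
```

Nothing in that function references a machine model at all, which is exactly Knuth’s point.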

What changes is which abstract machine model best describes nature. Knuth assumed the Turing machine. Deutsch said no, you also need the quantum Turing machine. Someone in the future may say you also need topological quantum computing, or analog quantum computing, or something we haven’t imagined yet.

But within each model, the artificial laws hold with mathematical certainty. P, NP, BQP, PSPACE: these are real structures, provable structures, and they tell us exactly what each kind of physical computer can and cannot do. Knuth’s framework absorbs Deutsch’s revolution. It just adds another machine model to the catalog.

The practical answer for the rest of us

For those of us building infrastructure rather than writing complexity theory papers, the question is never really “will quantum solve P versus NP.” The question is always “what specific workload, on what specific hardware, with what specific error rate, in what specific timeframe, gets to what specific business outcome.”

P versus NP itself is almost certainly not a quantum question. Most complexity theorists believe P is not NP, and that quantum computers will not change that picture. The interesting question is the one inside BQP, and there the answers are coming in slowly, one structured problem at a time.

So when you see a vendor pitch deck that promises quantum will solve all NP problems, you can confidently file it next to the deck that promised blockchain would solve world hunger. And when you see a careful paper showing a polynomial speedup on a chemistry simulation that matters for a real industrial process, pay attention. That is where the story actually is.

Deutsch and Knuth are not contradicting each other. They are working at different levels of the same stack. Knuth is describing the abstract architecture, Deutsch is describing the silicon. Or the qubit. You need both to actually build anything.

Closing thought

There is something appropriate about encountering those two quotes in the week of Quantum Day 2026. The physicist insisting that mathematics is downstream of nature, the computer scientist insisting that proofs about artificial systems hold regardless of nature. They are both correct. The trick, as always, is knowing which lens to use for which problem.

I sometimes think the best computational scientists are the ones who can switch between the two without noticing they are doing it. Knuth proves the theorem, Deutsch builds the machine, and somewhere in between, the rest of us figure out whether the customer actually needs eight H100s or whether they would be better off waiting two more years for the quantum advantage to mature.

Most of the time, by the way, they need the H100s.

Happy Belated Quantum Day.
