The Rosetta Week
How four fields, in ten days, learned to read the unreadable
| Copyright: Sanjay Basu |
Last week a knot stopped hiding from its mathematicians, a bacterium stopped hiding from its biophysicists, and a large language model stopped hiding from the people who built it. Three different sciences, three different decades of frustration, and one quiet ten-day stretch in April when each of them finally produced something they had been failing to produce for a very long time. A new kind of alphabet, in each case. Not a discovery so much as a way of writing what was already there.
It looks, at first, like coincidence. Quanta ran a piece about a new knot invariant. MIT Technology Review ran one about mechanistic interpretability. Math, Inc. quietly announced that an AI agent had formalized a Fields Medal proof in five days. A 2026 paper on the bacterial flagellar motor was sitting in the same browser tab as all of them. If you only read one, you would shrug. If you read all four in a single sitting, the way I did, the air in the room changes a little.
Because here is the thing none of these stories quite says out loud. We are not, in any of these cases, finding new objects. The knot was already there. The motor has been spinning for three billion years. The model was sitting on the same servers it has been sitting on all year. What changed, in the span of a single news cycle, is that four scientific communities finished inventing four different ways to read what they had previously been guessing at. That is either coincidence, or the early hours of something. I do not know which. But it seems worth taking the question seriously.
PART ONE
The knot that finally sat still
Knots are one of those mathematical objects that sound niche right up until you notice them everywhere. DNA is a knot, in a strict sense. Polymer melts are knots. Topological quantum computing is knots. Every time a string you cannot see passes over or under itself, somebody has had to invent a notation for it.
The ancient question in knot theory is the one a child would ask. You have two tangles of string. Are they the same knot, or are they different knots dressed up to look the same? It is harder than it sounds. The trefoil and the figure-eight are obviously different. Move up to a knot with twenty crossings and you are not so sure. Move up to a hundred crossings, the kind of thing that occurs in actual molecular biology, and you have left the realm of human eyeballs entirely.
Mathematicians have been chipping at this since the 1800s by inventing what they call invariants. An invariant is a kind of fingerprint. You feed the knot in, you get a number or a polynomial or some other small piece of data out, and if two knots produce different fingerprints they are definitely different. The catch, all along, was the converse. Until recently every invariant we had also gave the same fingerprint to knots that were not the same. Useful, but porous.
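The one-way logic of invariants is easy to sketch in code. Below is a minimal illustration using the classical knot determinant, a much older and weaker invariant than the new hexagonal one; the determinant values are standard (unknot 1, trefoil 3, figure-eight 5, cinquefoil 5), and the function names are mine, chosen for clarity.

```python
# Sketch of how any knot invariant gets used, illustrated with the
# classical knot determinant (NOT the new hexagonal invariant).
# Standard values: unknot = 1, trefoil = 3, figure-eight = 5.

DETERMINANT = {
    "unknot": 1,
    "trefoil": 3,
    "figure_eight": 5,
    "cinquefoil": 5,  # the 5_1 torus knot also has determinant 5
}

def compare(knot_a: str, knot_b: str) -> str:
    """Invariants give one-way evidence: different fingerprints prove
    the knots differ; equal fingerprints prove nothing."""
    if DETERMINANT[knot_a] != DETERMINANT[knot_b]:
        return "definitely different"
    return "inconclusive"  # same fingerprint: same knot, or a collision

print(compare("trefoil", "figure_eight"))    # definitely different
print(compare("figure_eight", "cinquefoil"))  # inconclusive
```

The figure-eight and the cinquefoil are genuinely different knots that share a determinant of 5; that collision in the last line is exactly the porousness the new invariant was built to close.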
Bar-Natan and van der Veen did not write down a faster algorithm. They wrote down an alphabet.
On April 22, Quanta reported a new invariant from Dror Bar-Natan in Toronto and Roland van der Veen in Groningen, and the fingerprint it produces is, of all things, a hexagonal snowflake. A little QR code, basically, that you can hold up next to another little QR code and read off the answer. Their construction scales to knots with 300 or more crossings, well past where every previous method choked. Gil Kalai, who is not given to overstatement, called it a new kind of telescope. The phrase he actually used was that it offers sharper resolution over familiar ranges and extends our reach by a factor of ten.
What is striking is not just that the new invariant is more powerful. It is that it is more readable. A polynomial is hard to look at. A hexagonal QR code is something a person can scan with their eyes, the way a chemist scans a periodic table. The structure of the code carries geometric information about the knot itself. We did not just get a better lookup. We got, for the first time, a typography for tangles.
| The same knot, twice. On the left, an object you can hold in your hands. On the right, a code a computer can search |
PART TWO
The motor nobody could see turning
If you have ever rinsed something off a piece of fruit and then thought, briefly, about how the bacteria on it knew which way to swim, congratulations, you have considered the bacterial flagellar motor. It is a real rotating machine. Thirty-four protein subunits arranged in a ring, a stator, a shaft, the works. It spins at several hundred revolutions per second, which is faster than the flywheel of a Formula 1 engine, and it runs entirely on the proton gradient across the cell membrane. People have been studying it since the 1970s. They have known it can switch directions for almost as long. Spinning counter-clockwise pushes the bacterium forward. Spinning clockwise tumbles it, so it can pick a new heading.
What nobody could see, for fifty years, was the actual switch. We knew the motor flipped. We did not know how it flipped. There were models. There were guesses. There was a very large literature of biophysicists arguing about which protein was doing what to which other protein. The thing was, you cannot put an ammeter on a bacterium. The whole apparatus is fifty nanometres across and it is alive.
On April 20, Quanta ran a piece tying together a wave of cryo-EM and single-molecule work that has been arriving since 2020 and culminated, more or less, in a 2026 result from Aravi Samuel and his collaborators at Harvard. The picture they assembled goes like this. A signaling protein called CheY gets phosphorylated, which makes it sticky. The sticky CheY binds to a particular protein on the inner ring of the motor. And then, within milliseconds, the entire ring of thirty-four subunits snaps. Like a hair clip. The subunits were all tilted one way, and now they are all tilted the other way, and the motor is rotating in the opposite direction.
We have not learned how the motor works so much as we have learned how to draw it.
The thing that astonished me, reading it, was the sensitivity. The motor responds to single signaling molecules. One CheY binds, the ring flips. That is a digital switch at the molecular scale, and it has been operating in your gut for as long as your gut has existed.
This is, I think, the most underrated kind of scientific result. Nothing was discovered, in the sense that bacteria have been doing this since the Archaean. What was discovered is the diagram. We finally have the state-transition picture of a machine that has been running unread on the inside of every living thing for three billion years. The shift is not from ignorance to knowledge. It is from approximate knowledge to a notation crisp enough that you could, in principle, reverse engineer it.
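The state-transition picture is simple enough to write down as a toy two-state machine. This is a deliberately schematic sketch, not the biophysics: the class and method names are mine, and the real switch is probabilistic and cooperative rather than a clean deterministic toggle.

```python
from enum import Enum

class Spin(Enum):
    CCW = "run"     # counter-clockwise: bacterium swims forward
    CW = "tumble"   # clockwise: bacterium tumbles, picks a new heading

class FlagellarMotor:
    """Toy two-state model of the switch: the ring of 34 subunits
    snaps between two tilts when a phosphorylated CheY binds."""

    def __init__(self):
        self.spin = Spin.CCW  # default direction: forward swimming

    def bind_chey_p(self):
        # A single CheY-P binding event flips the entire ring at once.
        self.spin = Spin.CW if self.spin is Spin.CCW else Spin.CCW
        return self.spin

motor = FlagellarMotor()
print(motor.spin.value)           # run
print(motor.bind_chey_p().value)  # tumble
```

The point of the toy is the granularity: one binding event, one global state change, no intermediate positions, which is what makes the real motor a digital switch at the molecular scale.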
| Two stable conformations, separated by a millisecond and the binding of a single phosphorylated protein |
PART THREE
The alien autopsy on your laptop
Of all the stories in this cluster, the one that feels strangest is the one about ourselves. Mechanistic interpretability, the project of opening up large neural networks and understanding what their parts are doing, was named one of MIT Technology Review’s 10 Breakthrough Technologies of 2026. The accompanying essay, by Will Douglas Heaven, called the researchers behind it the new biologists. The framing was not metaphorical. The people doing this work describe their day-to-day, in interviews, as very much a biological type of analysis.
Which, when you stop to think about it, is bizarre. Biologists work on systems they did not design. We designed neural networks. We wrote the code. The model weights came out of a training run that we set up. And yet the resulting object is so foreign to us, so internally tangled, that the only profession we have for studying it is the one we use for organisms.
The reason, in a sentence, is superposition. A neural network does not assign one neuron to one concept the way our pop-science intuitions suggest. It crams thousands of concepts into the same neurons, the way a hash table crams many keys into the same bucket. From the outside, this looks like noise. To pull it apart you need a kind of secondary model, a sparse autoencoder, that looks at the original network’s activations and identifies the actual features in there. Anthropic has been writing this up under the name circuit tracing. The autoencoder is the alphabet. Once you have it, you can watch individual concepts light up.
We built it. We trained it. And the only way to find out what we built is the same way we figure out what an octopus is.
The findings, this year, have been wild. Models appear to plan rhymes before they write the lines that contain them. They reason in a shared conceptual space that is not tied to any particular language, then translate at the last step. OpenAI reported, this spring, that they had identified about ten internal personas inside their model that it had absorbed from training data, and that they could surgically suppress the bad ones. There is no other word for that than psychiatry. It is psychiatry on a system that did not exist three years ago.
Dario Amodei, in his essay The Urgency of Interpretability, frames the stakes the obvious way. If we cannot read our own creations, we cannot responsibly deploy them. I think he is right, but I also think the reading itself is doing something stranger than the safety argument captures. We built the creature. The manual did not come with the blueprint. The autopsy is not for finding the cause of death, it is for finding out what we shipped.
| The mirror model of a language model. Where a forward pass looks like noise, the autoencoder resolves discrete features |
PART FOUR
The proof no human wrote, but every human trusts
On April 13, Quanta ran a piece by Kevin Hartnett with the slightly alarming title The AI Revolution in Math Has Arrived. It opened with the First Proof challenge, run in February of this year, in which AI models were given a week to attempt ten research-level math problems unlikely to have appeared in any training data. They solved more than half. That alone would have been the story. Except it was not even the headline.
The headline was Math, Inc.’s agent, named Gauss, formalizing Maryna Viazovska’s eight-dimensional sphere-packing proof in five days. The human team had estimated six more months. The twenty-four-dimensional proof followed in another two weeks. By the end of it the verified Lean codebase had grown from roughly seventy thousand to roughly two hundred thousand lines. Along the way Gauss caught a typo in the original paper, a fact that I find both charming and, on reflection, slightly alarming.
The thing to understand about formal verification, if you have not bumped into it before, is that it is mathematics taken to its dental hygienist. Lean is a proof assistant. You write your argument in a language so precise that a small computer kernel can check every step against a list of axioms. The kernel is dumb. It does not know what a sphere is. It only knows whether each line follows from the line before. If your proof passes, it has passed in the only sense that any proof has ever needed to pass. The kernel is the priest, and the priest does not care who handed it the manuscript.
The kernel is the priest. The priest does not care who handed it the manuscript.
What Gauss did is generate the manuscript. The human mathematicians shaped the high-level structure. The AI filled in the millions of trivial-but-not-trivial steps that human mathematicians normally wave through with the word obviously. The kernel verified each one. Terence Tao, who has spent the last year arguing for exactly this division of labor, posted an arXiv preprint in March that contains an inequality he credits, in the acknowledgments, to ChatGPT.
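To see how little the kernel knows, it helps to look at the smallest possible example. This is a trivial Lean 4 proof, mine rather than anything from the sphere-packing codebase; the kernel checking it has no idea what commutativity means, only that the term on the right has the type stated on the left.

```lean
-- A minimal Lean 4 proof the kernel can check.
-- The kernel does not understand "commutativity"; it only verifies
-- that the term `Nat.add_comm a b` has the stated type.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

What Gauss produced is two hundred thousand lines of exactly this kind of statement, each one reduced to a type-check the kernel can perform without understanding any of it.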
The piece quotes a question that I cannot stop thinking about. If a proof becomes something only a computer can comprehend, does mathematics remain a human endeavor, or does it evolve into something else entirely? The article does not answer it. I do not think it can be answered yet. But it does have a useful reference class. After Kasparov lost to Deep Blue, the next generation of chess players grew up studying lines that they could not have invented but could absolutely understand once they saw them. The result was that human chess got better, not worse. Mathematics may be about to do the same thing. Or it may not. The split in the community is real. Some mathematicians feel liberated from drudge work. Others feel that the drudge work was the craft, and that automating it hollows the discipline out. Both groups have a point.
PART FIVE
What four stories share, and why that should make you a little uneasy
Lay the four stories side by side and the same shape keeps showing up. In each case, a scientific community produced a new representation. A QR code for a knot. A state-transition diagram for a motor. A circuit trace for a model. A formalized script for a proof.
None of these representations are the thing itself. A QR code is not the knot. A diagram is not the motor. A circuit trace is not, whatever the model turns out to be, the model. A Lean script is not the intuition that produced the proof in the first place. What they are is maps. Better maps than we used to have. Sharper, more searchable, more amenable to being passed around between researchers and, increasingly, between machines. But maps. And maps have a long history of being mistaken for the territory.
Thomas Nagel’s old essay, the one about what it is like to be a bat, has been sitting in the corner of every conversation about consciousness for half a century. The point of that essay, the one that does not get enough airtime, is not that bats are mysterious. It is that no map of bat echolocation, however detailed, is the same as the experience of being a bat.
We are getting much better at the maps. It keeps not settling what we actually wanted to know.
Two essays from this same week sharpen the point. An Aeon piece from April 17 argues that consciousness is not a theoretical puzzle to be decoded but a fact you already inhabit. You live in soul land, the title says. You do not need a map of it. And in a slightly weird footnote, the Daily Nous reported the same week that Anthropic’s new Claude Mythos preview model lists Mark Fisher and Thomas Nagel as its favorite philosophers. Make of that what you will.
The cautionary note comes from Carissa Véliz, whose new book Prophecy came out April 21. Her argument is that algorithmic forecasts are quietly becoming a new kind of authority, one that launders responsibility. The same pattern shows up in our four stories. Once the code issues a verdict, the humans defer. The risk, Véliz says, is that our new alphabets become new oracles. Read it next to the quantum jamming piece in Quanta, also from April 17, in which physicists are designing cryptographic schemes that work even if quantum mechanics turns out to be wrong. Every alphabet we have ever built was eventually superseded. There is no reason to think ours will not be.
CODA
Remembering what we wanted to say
Every era gets the tools it deserves. The astrolabe told medieval sailors things their eyes could not. The microscope turned a drop of pond water into a civilization. The calculus let Newton and Leibniz speak to motion in motion’s own language. Each new alphabet rewires what understanding means for the generation that inherits it.
This month we picked up four. One for the topology of tangles. One for the switching logic of life. One for the hidden concepts inside our own machines. One for the structure of proof. Individually, each is a triumph. Together, they raise the question I cannot stop turning over. When the code does the reading, what is left for us?
One answer, the one I find most honest, is that we are doing what we have always done. We are inventing signs that let us point at the world more precisely, and then we are arguing about what we meant by them. The tools get strange. The argument does not. The real work of the next decade is not in building sharper alphabets. It is in remembering what we wanted to say.
Sources & further reading
PRIMARY
Erica Klarreich, A Powerful New ‘QR Code’ Untangles Math’s Knottiest Knots · Quanta Magazine, April 22, 2026
What Physical ‘Life Force’ Turns Biology’s Wheels? · Quanta Magazine, April 20, 2026
Kevin Hartnett, The AI Revolution in Math Has Arrived · Quanta Magazine, April 13, 2026
Will Douglas Heaven, The New Biologists Treating LLMs Like an Alien Autopsy · MIT Technology Review, January 12, 2026
Mechanistic Interpretability: 10 Breakthrough Technologies 2026 · MIT Technology Review, 2026
Math, Inc., Completing the formal proof of higher-dimensional sphere packing · Math, Inc., 2026
Terence Tao, Mathematical methods and human thought in the age of AI · What’s New blog, March 29, 2026
Anthropic, Mapping the Mind of a Language Model · Anthropic Research
Dario Amodei, The Urgency of Interpretability · darioamodei.com
Quantum ‘Jamming’ Explores the Truly Fundamental Principles of Nature · Quanta Magazine, April 17, 2026
SUPPORTING / ATMOSPHERE
You know what consciousness is: you live in soul land · Aeon, April 17, 2026
Justin Weinberg, New AI Model Has a Taste for Philosophy · Daily Nous, April 14, 2026
Carissa Véliz, Prophecy (book release coverage) · Oxford AI Ethics, 2026
AI Proof Verification: Gauss Tackles 24D · IEEE Spectrum, 2026
BACKGROUND
Thomas Nagel, What Is It Like to Be a Bat? · The Philosophical Review, 1974
Edward Witten, Topological Quantum Field Theory · Communications in Mathematical Physics, 1988
Monumental Proof Settles Geometric Langlands Conjecture · Quanta Magazine, 2024 (background context)




