The Ghosts We Can’t Find
*Image copyright: Sanjay Basu*
Phantom particles, phantom minds, and the narrowing of scientific imagination. What happens when science’s most powerful tools make it harder to see?
For twenty years, physicists chased a ghost. The sterile neutrino was a hypothetical particle that interacted with nothing, registered on no detector, and left behind exactly zero evidence of its physical existence. It was, by almost every empirical measure, not there. Physicists loved it anyway. They loved it the way you love a theory that is too elegant to be wrong, too mathematically tidy, too perfectly shaped to fill the exact holes in your understanding. This month, the last experiments that could have saved it reported back with devastating news. The sterile neutrino is dead. But its ghost, and the questions it raises about what we can and can’t find, is very much alive.
Because here’s the thing. The sterile neutrino is not the only phantom haunting science right now. At the same moment particle physicists are closing the casket on their favorite invisible particle, philosophers are wrestling with a question that sounds almost absurd until you try to answer it: Where is the AI? Not “where is the server” or “where is the data center,” but where is the thing itself? The entity? The system? Where does it begin and where does it end? And a massive new study in Nature has found that AI, supposedly the most powerful research tool ever created, is paradoxically making science less creative. More papers, more citations, fewer ideas.
We are, it seems, surrounded by things we can’t locate.
· · ·
Part I
The Particle That Explained Everything (Until It Didn’t)
The Standard Model of particle physics is one of humanity’s greatest intellectual achievements and also one of its most annoying. It describes the behavior of every known particle with extraordinary precision. It predicted the Higgs boson decades before anyone detected it. And it stubbornly refuses to account for approximately 95% of the universe’s mass-energy content. This is a bit like building a map that flawlessly covers every street in a city and then discovering the city is, in fact, an archipelago and you’ve only mapped one island.
Neutrinos were supposed to be the bridge to the other islands. We know three types exist: electron, muon, and tau. They barely interact with matter. Billions of them pass through your body every second without touching a single atom, which, if you think about it for too long, starts to feel personally insulting. But beginning in the 1990s, a series of experiments started producing results that didn’t quite add up. The LSND experiment at Los Alamos saw neutrinos oscillating between types in ways the three-flavor model couldn’t explain. Then the reactor antineutrino anomaly showed up, a mysterious 6% shortfall in the expected antineutrino flux from nuclear reactors. Then the gallium anomaly. Each one, independently, hinted at the same tantalizing possibility. A fourth neutrino. A sterile one, meaning it interacted only through gravity. A ghost’s ghost.
*Timeline of the sterile neutrino hypothesis, from the first anomalous results at Los Alamos to the decisive null results at Fermilab and KATRIN.*
The appeal was enormous. Sterile neutrinos didn’t just explain the anomalies. They were also a plausible dark matter candidate. Finding one would have cracked open the Standard Model like an egg and revealed whatever comes next. Entire careers were built around them. Experiments costing hundreds of millions of dollars were designed, funded, and run specifically to hunt for a particle that, to borrow a memorable description, was hiding behind the only couch cushion in the room.
Two experiments have now, with extraordinary precision, looked behind that cushion and found nothing. Fermilab’s MicroBooNE detector, a 170-ton chamber of liquid argon that tracked neutrino interactions with millimeter precision, saw exactly what the Standard Model predicted. No anomalous electron neutrinos. No hidden oscillation pattern. No ghost. And KATRIN, a massive spectrometer in Karlsruhe, Germany, measured 36 million electrons from tritium beta decay over 259 days and found zero deviation from the expected energy spectrum. Not a whisper. Not a hint.
> This is the death knell for sterile neutrinos.
>
> — Peter Ross-Lonergan, Columbia University
Here is the analogy I keep coming back to. Imagine you keep losing your keys. Every single time, you are convinced they must have fallen behind the same couch cushion, because it’s the only explanation that makes geometric sense. So you design increasingly elaborate tools to look behind that cushion. Tiny cameras. Fiber-optic scopes. You even, at one point, commission a custom-built robotic arm specifically rated for sub-cushion extraction. After twenty years and a truly embarrassing amount of money, you accept the truth. The keys were never there. They were somewhere else entirely. Or maybe they were never keys at all, and what you were hearing was the cat knocking something off the counter in the next room.
The LSND anomaly, the reactor anomaly, the gallium anomaly. They’re all real. Something caused them. But the something wasn’t a new particle. We don’t yet know what it was, and honestly, that might be more interesting than if we had found the sterile neutrino. A solved mystery is gratifying. An unsolved one is generative.
And here’s what makes this story more than just a footnote in particle physics. The twenty years spent chasing sterile neutrinos were not wasted. The precision technologies developed for MicroBooNE and KATRIN will serve a dozen other experiments. The null result itself is enormously informative, because it tells theorists exactly where not to look, which is almost as valuable as telling them where to look. Science’s dead ends have load-bearing walls. You can build on them.
But this story only really gets interesting when you realize that physicists aren’t the only ones chasing phantoms right now.
· · ·
Part II
The Philosopher’s Impossible Question
The journal Inquiry recently issued a special call for papers with a deadline of April 25, 2026, and the title alone is worth the price of admission: “Where Is the AI? Metaphysics, Individuation, and the Unity of Artificial Systems.” The question is not rhetorical. They genuinely want to know. And the longer you sit with it, the worse it gets.
Start simple. You say “the AI.” You probably mean something like ChatGPT or Claude or Gemini. But what are you actually pointing at? The trained weights? Those are just a matrix of numbers sitting on a server, inert until activated. The running instance? There might be ten thousand of those simultaneously, each with a different conversation context, each generating different outputs. Are those ten thousand instances the same AI or different AIs? If I ask it a question and you ask it the same question and we get different answers, are we talking to the same entity? The model was trained by one company, fine-tuned by another, deployed on hardware owned by a third, integrated with tools maintained by a fourth, and is being used by millions of people whose inputs actively shape its outputs. Which of those is “the AI”?
If you think this sounds like a parlor game for overemployed philosophers, consider the practical stakes. If you want to audit an AI system, you need to know where it begins and ends. If you want to regulate it, same problem. If you want to hold it accountable for a decision it made, you need to point at the thing that made the decision. But the thing keeps dissolving the moment you try to point.
*The AI individuation problem: the “entity” we call an AI is distributed across multiple layers, each with a plausible claim to being the real thing.*
The Inquiry call draws an explicit parallel to the philosophy of mind, which has been trying to answer the question “where is the mind?” for several centuries now with, it must be said, only moderate success. Is the mind the brain? The brain-plus-body? The body-plus-environment? The extended mind thesis says your smartphone is part of your cognitive system, which sounds provocative until you try to remember a phone number without it and realize it might just be true.
AI inherits all of that complexity and adds new wrinkles that would make Descartes weep into his meditation journal. Biological minds can’t be copy-pasted. You can’t fork a human, run two copies in parallel, merge them back together, and call it a day. (Well, you can try. I don’t recommend it.) AI systems can be and routinely are. This breaks every philosophical framework built for individuating biological entities. A second Inquiry call asks about “AI Agents: Choice, Autonomy, and the Concept of Agency,” and a third asks whether our concept of consciousness is even the right concept to be deploying in this context, or whether we need to engineer entirely new concepts from scratch.
Here is the connection to the sterile neutrino, and it is not a metaphor. Both are cases where we posit an entity to explain observed phenomena and then struggle to pin it down when we go looking. The physicists said “there must be a fourth neutrino” because the data had holes in it. We say “there is an AI” because something is clearly producing these outputs, making these decisions, passing these bar exams. But in both cases, the entity itself keeps slipping through the detector. The neutrino turned out not to exist. The AI exists, probably, but we can’t agree on what or where it is.
It’s like asking “where is a corporation?” You can point to the headquarters. The employees. The legal charter. The brand. The stock price. The corporate culture. The corporation isn’t any single one of those things. It’s somehow all of them and none of them at the same time. Now imagine the corporation could be duplicated overnight, run simultaneously on ten thousand computers, merged with another corporation before lunch, and then fine-tuned on your personal data after dinner. That’s the AI individuation problem. And good luck regulating it without solving it first.
· · ·
Part III
The Telescope That Shrinks the Sky
This is where things get properly weird, and also properly worrying. In January 2026, a team led by James Evans published a study in Nature that analyzed 41.3 million research papers. Let me say that number again because it deserves to land. Forty-one point three million papers. The question they were asking was simple. What happens to science when scientists adopt AI?
The headline results sound like an advertisement for AI adoption. Scientists using AI tools published roughly three times more papers. They received nearly five times more citations. They became project leaders 1.37 years earlier in their careers. If you’re a university administrator reading those numbers, you are already drafting the memo about mandatory AI training for all faculty.
But here’s the catch, and it’s a big one.
Collectively, AI adoption shrank the total volume of distinct scientific topics being studied by 4.63%. It also decreased scientist-to-scientist engagement by 22%. More papers, fewer ideas. More output, less interaction. The machine makes you faster but the species less creative.
The mechanism isn’t mysterious once you think about it. AI works best where data is abundant and benchmarks are well-defined. If you’re working on a problem with massive datasets and clear performance metrics, AI gives you a rocket ship. If you’re working in a data-poor, benchmark-scarce area, something that is messy and underexplored and maybe doesn’t even have an agreed-upon methodology yet, AI gives you approximately nothing. So scientists, being rational actors in a competitive publish-or-perish environment, migrate toward the zones where AI amplifies their productivity. The rich data fields get richer. The poor ones get abandoned.
A companion paper in Communications Psychology by Traberg, Roozenbeek, and van der Linden puts it more bluntly. AI is creating a “scientific monoculture.” A feedback loop of topical and methodological convergence that flattens scientific imagination into a smooth, efficient, profoundly predictable surface. We are becoming very good at answering the questions we’ve already thought to ask and increasingly bad at formulating new ones.
The analogy I keep reaching for is the metal detector on a beach. Someone gives you a metal detector so sensitive it can find gold nuggets buried ten feet underground. Incredible technology. Transformative. You would naturally gravitate toward the beaches and riverbeds where gold has been found before, because that’s where the detector will actually help. Meanwhile, nobody is exploring the mountains anymore, even though the textbook-rewriting discoveries, the ones that overturn paradigms and open entirely new fields, have historically come from the mountains. From the places where nobody thought to look because there wasn’t an obvious method for looking there.
Connect this back to the sterile neutrino. The twenty-year hunt for a particle that didn’t exist is exactly the kind of science the monoculture threatens to eliminate. It was data-poor for most of its history. There were no clear benchmarks. The methodology was improvised and iterative. An AI optimization system, tasked with maximizing research productivity, would have flagged it as a low-probability investment and redirected resources toward well-benchmarked topics years ago. And it would have been, in a narrow sense, correct. The sterile neutrino was a dead end. But the technologies, insights, and methodological innovations born from chasing that dead end are already generating new science that nobody anticipated. Dead ends in science are not dead. They’re dormant.
We are getting better at answering questions but worse at asking new ones.
There’s an additional dimension here that the Evans paper only begins to touch on. AI is not just shifting what scientists study. It’s shifting how they think about what’s worth studying. When your instrument is a hammer, everything looks like a nail. When your instrument is a transformer model, everything looks like a sequence prediction problem. The epistemological consequences of this might not show up in any analysis of 41.3 million papers, but that doesn’t mean they’re not real. They’re the kind of slow, structural changes that you only notice when it’s too late to reverse them.
· · ·
Part IV
Quantum Batteries, Spinning Plasma, and the Surprises That Remain
The temptation at this point is despair, or at least a kind of comfortable pessimism about the future of scientific creativity. I want to resist that temptation, because the evidence against it is genuinely delightful.
In March 2026, a team from CSIRO, RMIT University, and the University of Melbourne built the world’s first proof-of-concept quantum battery. That alone would be interesting. What makes it extraordinary is how it behaves. This battery charges faster as it gets bigger. Not incrementally faster. Not “a little bit more efficient at scale.” Fundamentally, qualitatively, bizarrely faster. The more quantum cells you add, the faster the whole thing charges. This is the exact opposite of every battery, every queue, every resource-constrained system you have ever encountered in your life. It is like a restaurant where the more people show up for dinner, the faster everyone gets their food. It makes no sense. Until you understand quantum mechanics, and then it makes perfect sense, which is somehow worse.
The effect is called “superabsorption.” Individual quantum systems, when coupled together in the right way, collectively absorb photons more efficiently than any of them could alone. The battery charges in femtoseconds. It retains energy for nanoseconds. Those are not typos. We are talking about timescales so short that light itself has barely moved. The practical applications are, for now, limited. Nobody is putting a femtosecond quantum battery in your phone. But the discovery demonstrates something important: nature is still doing things that surprise us, and the surprises tend to come from domains where our everyday intuitions go to die.
Meanwhile, in fusion research, a decades-old mystery just got solved. For years, tokamak simulations couldn’t explain why escaping plasma hit one side of the exhaust divertor harder than the other. The asymmetry was consistent, reproducible, and completely unexplained by existing models. Researchers at multiple labs had tried to account for it and failed. The answer, when it finally came in April 2026, turned out to be almost embarrassingly physical. The plasma was rotating. At 88.4 kilometers per second. A real-world factor that the modelers had simply… left out. Not because they didn’t know plasma could rotate, but because the rotation hadn’t seemed important enough to include in the simulations.
And in China, the EAST tokamak just pushed past the long-standing Greenwald density limit, a theoretical ceiling on how dense a tokamak plasma can get before it becomes unstable. They didn’t just nudge past it. They entered what they’re calling a “density-free regime,” which is the kind of name that makes physicists’ eyebrows do interesting things. The implications for future fusion reactors are significant.
These discoveries share a common DNA. They came from chasing anomalies. Unexplained mismatches between theory and experiment. Things that didn’t fit. Things that bugged people. Things that, in a research environment fully optimized by AI for maximum productivity, might have been classified as noise and deprioritized. The quantum battery came from following a theoretical prediction into the lab and seeing whether nature would actually cooperate. (It did, grudgingly.) The plasma rotation discovery came from someone refusing to accept a known discrepancy as “just one of those things.” The Greenwald limit breakthrough came from engineers who kept pushing a machine past the point where the textbooks said it should fail.
This is what anomaly-driven science looks like. It’s messy, data-poor (at first), benchmark-free (by definition), and deeply, wonderfully inefficient. It is also where the most important discoveries tend to live. Not behind the couch cushion you’ve checked a hundred times, but in the room you hadn’t thought to enter.
· · ·
Part V
Protecting the Margins of Science
Let me try to pull these threads into something useful.
The sterile neutrino saga teaches us that dead ends in science are load-bearing structures. The experiments built to find a particle that doesn’t exist have given us extraordinary instruments, datasets, and analytical techniques. MicroBooNE’s liquid argon time projection chamber is now being repurposed for the Deep Underground Neutrino Experiment, which will study neutrino oscillations with unprecedented precision. KATRIN’s spectrometer will constrain the absolute mass of neutrinos, one of the most consequential open questions in physics. The two decades weren’t wasted. The infrastructure of the search outlived the search itself, and will serve science long after the sterile neutrino has been forgotten as a historical curiosity.
The AI individuation problem teaches us something different but equally important. We have built the most powerful cognitive technology in human history and we cannot, in a philosophically rigorous sense, say what it is. Not “we don’t understand how it works” (although that’s also true). We can’t even agree on what “it” refers to. We can’t locate it. We can’t draw its boundaries. We can’t say when one AI ends and another begins. And yet we are attempting to regulate it, audit it, deploy it in high-stakes decision-making, and integrate it into every sector of human activity. We are building the airplane while flying it, which is an old cliché, but in this case we are also unable to agree on what counts as an airplane.
And the monoculture research points toward specific, actionable interventions. Fund data-poor fields explicitly and generously. Incentivize exploratory AI use rather than confirmatory AI use. Value negative results. Create institutional structures that reward the kind of anomaly-chasing, dead-end-exploring, benchmark-free research that AI optimization naturally penalizes. The quantum battery, the plasma rotation discovery, the Greenwald limit breakthrough: these are existence proofs that counterintuitive, anomaly-driven science still produces the most exciting results, if we let it.
The deeper message, the one that connects all three stories, is about the relationship between power and imagination. Our tools have never been more powerful. Our output has never been higher. And the range of questions we dare to ask may never have been narrower. This is not a technology problem. It is a courage problem. The sterile neutrino was chased for twenty years because physicists had the courage to follow an anomaly into a dead end. The KATRIN detector, a 200-ton spectrometer that spent five years hunting a particle that doesn’t exist, is a monument to that courage. It is a machine built to be wrong, and it was worth every dollar and every hour.
Will we still build machines like that? Will we still fund research programs that might lead nowhere? Will we still tolerate the glorious, expensive, scientifically essential inefficiency of chasing ghosts?
The ghosts we chase define us more than the things we find. That’s always been true. The question is whether it will remain true in an era that has optimized, with terrifying effectiveness, for finding things.
· · ·
The anomalies remain. The reactor antineutrino deficit is real. The gallium results are real. Something caused them, and we don’t know what. Somewhere out there, in a room nobody has entered, the keys are waiting. The question for the next decade isn’t whether AI can find them. It’s whether we’ll still have the courage to look in rooms where the metal detector doesn’t work.