Posts

Featured Post

When the Physicist and the Computer Scientist Walk Into a Quantum Bar

SAS Innovate pre-conference session "A Technocrat's Discernment", originally partially written on Quantum Day 2026. I was then in the SAS Innovate 2026 pre-conference workshop and ended up reframing the article around these two quotes. I sat in a dimly lit conference room at SAS Innovate this week, half awake from the keynote coffee, when a slide appeared on the screen that quietly forced me to put my notebook down. Two quotes, side by side, separated by a thin red line and about thirty-seven years of intellectual history. David Deutsch, in 2011, insisting that the theory of computation has been mistakenly treated as a topic in pure mathematics, that computers are physical objects, and that what they can or cannot compute is determined by the laws of physics alone, not by mathematics. And then below it, Donald Knuth in 1974, claiming the opposite with an almost mischievous calm. Computer science, he said, is somewhat different from the other sciences because it deals with artifici...

Engram Memory for Agents on OCI - Part 1. The Why and the What

Copyright: Sanjay Basu

1. Why Memory, and Why Now. Long context is not memory. This is the sentence I keep wanting to staple to people's foreheads at conferences. Yes, frontier models will happily accept two million tokens of input. No, that does not mean stuffing every prior conversation into the prompt is a good idea, even if you can afford the bill, which most companies cannot. The empirical case against the long-context-as-memory pattern is by now embarrassingly well documented. The original Lost in the Middle paper from Stanford showed that retrieval accuracy collapses when relevant facts are buried in the middle of a long context window. Every follow-up study since (NoLiMa, Michelangelo, RULER, the whole genre) has confirmed the same shape. Effective context length is much smaller than nominal context length. The model's attention is not democratic. It cares about the beginning, it cares about the end, and the middle goes to the same place socks go in the dryer. Then yo...
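The positional-retrieval setup those studies use can be sketched in a few lines. This is a toy probe under stated assumptions, not any paper's actual harness: the filler text, needle string, and chunk counts here are invented for illustration. The idea is simply to bury one known fact at a chosen depth in a long distractor context, then (in a real harness) ask the model to retrieve it and plot accuracy against depth.

```python
# Hypothetical "lost in the middle" style probe. FILLER and NEEDLE are
# made-up strings, not from any benchmark.
FILLER = "The sky was a uniform grey that afternoon. "  # neutral distractor
NEEDLE = "The access code for the vault is 4172. "      # the fact to retrieve

def build_probe(depth: float, n_filler: int = 200) -> str:
    """Return a context with the needle inserted at `depth` in [0, 1]."""
    assert 0.0 <= depth <= 1.0
    chunks = [FILLER] * n_filler
    # Position the needle proportionally: 0.0 = start, 1.0 = end.
    chunks.insert(int(depth * n_filler), NEEDLE)
    return "".join(chunks)

# One probe per depth; a full harness would ask the model "What is the
# access code?" against each context and record accuracy per depth.
probes = {d: build_probe(d) for d in (0.0, 0.25, 0.5, 0.75, 1.0)}
```

Sweeping depth like this is what produces the characteristic U-shaped accuracy curve: high at the edges, sagging in the middle.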

The Chip Designs Itself Now

Autonomous AI Agents Have Quietly Started Rewriting the Tools That Rewrite Silicon

This is the kind of news that arrives without a press release. No keynote, no crowd in a dim auditorium in San Jose, no breathless CNBC segment. Just a PDF quietly parked on arXiv, the kind of paper that looks unremarkable until you read the abstract twice and realize what it actually says. Then, of course, the fireside chat that Anirudh had with Jensen. The paper in question is from NVIDIA Research and the University of Maryland, and its claim is modest in tone and enormous in implication. A team of large language model agents, they report, was pointed at ABC, the million-plus-line open-source logic synthesis system that has been the de facto academic and industrial backbone of chip design research for two decades. The agents were not asked to use ABC. They were asked to evolve it. To rewrite its C code. To improve the tool itself. After thirty-some cycles of automated comp...