The Agent Mirage and What Comes After
Copyright: Sanjay Basu
Why Orchestration Layers Will Not Drive Enterprise AI Adoption
Shenggang Li recently argued that most AI agents are decoration. He is right. But he stopped short of the more interesting question. If agents are not the answer, what is?
I have spent three years scaling AI infrastructure at Oracle Cloud. Before that, two decades building enterprise systems at AWS, EMC, and Dell. I have watched technology hype cycles come and go. Agents feel familiar. They have the same structural weakness as middleware in the 2000s. Useful. Necessary even. But not defensible.
The enterprise AI revolution will not be won by whoever builds the best wrapper.
The Mathematical Trap of Orchestration
Li frames agents as wrappers around base model intelligence. Let me extend this.
Consider the value function of an enterprise AI system.
V(system) = f(M, D, C, T)
Where M is model capability, D is domain data, C is organizational context, and T is the orchestration layer.
The partial derivative ∂V/∂M dominates all others. When Anthropic or OpenAI ships a better model, the value contribution of T approaches zero. This is not speculation. We saw it happen in 2024 when GPT-4 Turbo obsoleted half the agent frameworks built for GPT-3.5.
Agents optimize for T. But T is the wrong variable.
The winning strategy optimizes for D and C. Domain data and organizational context. These are the variables that compound. These are the variables competitors cannot replicate.
Why Tool Calling Changed Everything
Here is what most people miss about frontier LLMs.
The shift from prompt engineering to native tool calling represents a phase transition. Not an incremental improvement. A categorical change in what these systems are.
Claude 3.5 and GPT-4o do not just respond to prompts. They reason about action spaces. They maintain state across function calls. They handle error recovery.
This means the orchestration logic that agents provided is migrating into the model itself.
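To make that migration concrete, here is a minimal sketch of the dispatch loop that agent frameworks used to provide. The model here is a stub, and the tool names and message shapes are hypothetical, not Anthropic's or OpenAI's actual API. The point is that this loop is exactly the logic frontier models now handle natively.

```python
# Sketch of a tool-calling loop. The "model" is a stub; real frontier models
# return structured tool-call requests and manage this loop themselves.

TOOLS = {
    # Hypothetical enterprise tool: look up what a customer owes.
    "get_invoice_total": lambda customer: {"customer": customer, "total": 1200.0},
}

def stub_model(messages):
    """Stand-in for a frontier model: first requests a tool, then answers."""
    last = messages[-1]
    if last["role"] == "user":
        return {"type": "tool_call", "name": "get_invoice_total",
                "arguments": {"customer": "Acme"}}
    # A tool result is in context, so produce the final answer.
    return {"type": "text", "content": f"Total owed: {last['content']['total']}"}

def run(user_query):
    messages = [{"role": "user", "content": user_query}]
    while True:
        reply = stub_model(messages)
        if reply["type"] == "text":
            return reply["content"]
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[reply["name"]](**reply["arguments"])
        messages.append({"role": "tool", "content": result})

print(run("How much does Acme owe us?"))  # -> Total owed: 1200.0
```

Every line of `run` is orchestration. When the model performs this loop internally, the wrapper has nothing left to do.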
Let me formalize this.
Agent_value(t) = Orchestration_complexity(t) × (1 - Model_capability(t))
As Model_capability(t) → 1, Agent_value(t) → 0
As model capability rises, the second factor shrinks toward zero and takes agent value with it. Every capability you add to your agent framework is a capability that will ship natively in the next model release.
I run a company that deploys agents. I am telling you agents are a transitional form. This is not pessimism. It is pattern recognition.
The RAG Illusion
Retrieval-Augmented Generation seemed like a breakthrough. Inject external memory into a model with frozen weights. Extend knowledge without retraining.
But RAG has a fundamental information-theoretic limit.
I(output; world) ≤ I(retrieved_context; world) + I(model_weights; world)
You cannot extract more signal than your retrieval system captures. And retrieval systems are notoriously lossy. Embedding similarity is a crude proxy for semantic relevance. Top-k retrieval throws away long-tail information.
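The lossiness is easy to demonstrate. Below is a toy sketch of top-k cosine retrieval; the vectors and document names are illustrative, not from a real embedding model. A document just below the cutoff is discarded entirely, however much signal it carries.

```python
# Why top-k retrieval is lossy: everything below the cutoff is invisible
# to the model. Vectors here are made up for illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [1.0, 0.2, 0.0]
docs = {
    "pricing_faq":         [0.9, 0.1, 0.1],
    "contract_terms":      [0.8, 0.3, 0.2],
    "incident_postmortem": [0.5, 0.5, 0.5],  # long-tail but genuinely relevant
}

k = 2
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
retrieved, dropped = ranked[:k], ranked[k:]
print(retrieved)  # the only documents the model will ever see
print(dropped)    # long-tail information, silently lost
```

The postmortem ranks third on embedding similarity, so it never reaches the model, even if it explains exactly why the current process looks the way it does.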
The real problem is deeper. RAG assumes that intelligence is about having the right documents at the right time. It is not.
Intelligence is about synthesis. About seeing patterns across domains. About knowing what questions to ask before you know the answer exists.
RAG solves the wrong problem very efficiently.
What Enterprises Actually Need
I talk to Fortune 500 CIOs every week. They are not asking for better agents. They are asking for something else entirely.
They want AI that understands their business.
Not their documents. Their business. The tacit knowledge. The organizational dynamics. The competitive context. The regulatory constraints. The history of decisions that shaped current processes.
This is not retrievable from a vector database. It is not capturable in a system prompt. It requires something we have not built yet.
The Emergence of Cognitive Infrastructure
Here is my thesis. The next phase of enterprise AI is not about agents or orchestration. It is about cognitive infrastructure.
Let me define this precisely.
Cognitive infrastructure is the persistent substrate that encodes organizational intelligence in a form that AI systems can reason over.
Think of it as the difference between giving someone a library card and giving them a university education. RAG gives you the library card. Cognitive infrastructure gives you the education.
The mathematical framing looks like this.
Traditional AI deployment maximizes P(correct_output | query, retrieved_docs)
Cognitive infrastructure maximizes P(valuable_action | organizational_state, strategic_context)
These are fundamentally different objective functions. The first is about retrieval accuracy. The second is about decision quality.
Three Components of Cognitive Infrastructure
First. Organizational knowledge graphs. Not document embeddings. Actual structured representations of entities, relationships, processes, and constraints. These persist across sessions. They update incrementally. They support reasoning that document retrieval cannot.
Second. Decision context layers. Every enterprise decision happens in a web of constraints. Budget cycles. Reporting hierarchies. Regulatory requirements. Competitive dynamics. An AI system that ignores this context is useless for real work. An AI system that encodes this context becomes indispensable.
Third. Outcome feedback loops. Most enterprise AI deployments are open-loop. The system makes recommendations. Humans act. Nobody tracks what happened. Cognitive infrastructure closes this loop. It learns from outcomes. It updates its models of what works in this specific organization.
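The three components can be sketched together in a few dozen lines. This is a toy, not a product design; the entity names, constraint, and outcomes are hypothetical. What matters is the shape: structured relations instead of embeddings, explicit decision constraints, and a loop that records what actually happened.

```python
# Minimal sketch of cognitive infrastructure: a structured knowledge graph,
# a decision-context check, and an outcome feedback loop. All data hypothetical.
from collections import defaultdict

class CognitiveSubstrate:
    def __init__(self):
        self.edges = defaultdict(list)       # entity -> [(relation, entity)]
        self.constraints = []                # named decision-context rules
        self.outcomes = defaultdict(list)    # action -> observed successes

    def relate(self, subject, relation, obj):
        """Knowledge graph: persist a structured relation, not an embedding."""
        self.edges[subject].append((relation, obj))

    def add_constraint(self, name, predicate):
        """Decision context: budget cycles, hierarchies, regulation."""
        self.constraints.append((name, predicate))

    def violated(self, proposal):
        """Return the names of constraints a proposed action breaks."""
        return [name for name, pred in self.constraints if not pred(proposal)]

    def record_outcome(self, action, success):
        """Feedback loop: close the loop on what was actually done."""
        self.outcomes[action].append(success)

    def success_rate(self, action):
        results = self.outcomes[action]
        return sum(results) / len(results) if results else None

kg = CognitiveSubstrate()
kg.relate("EMEA_sales", "reports_to", "CRO")
kg.relate("EMEA_sales", "constrained_by", "GDPR")
kg.add_constraint("budget_cycle", lambda p: p["quarter"] != "Q4_freeze")

proposal = {"action": "launch_campaign", "quarter": "Q4_freeze"}
print(kg.violated(proposal))               # this proposal breaks a constraint

kg.record_outcome("launch_campaign", True)
kg.record_outcome("launch_campaign", False)
print(kg.success_rate("launch_campaign"))  # what works in THIS organization
```

Each call compounds: relations persist across sessions, constraints accumulate, and outcome history becomes a model of the organization that no competitor can copy.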
Why Frontier Models Enable This Shift
Here is the counterintuitive part.
Better base models make cognitive infrastructure more valuable, not less.
The reasoning goes like this. Frontier models like Claude Opus and GPT-4o have sufficient capability to reason over complex organizational contexts. Previous models could not. They would hallucinate or lose coherence over long context windows.
But capability without context produces noise. A very smart system that knows nothing about your organization will generate very sophisticated garbage.
The equation is multiplicative.
Business_value = Model_capability × Context_depth × Action_authority
Frontier models have pushed the first term near its ceiling for many enterprise use cases. The binding constraint is now the second term. Whoever solves context depth wins.
The Hardware Dimension Nobody Discusses
I spend my days thinking about GPU allocation and inference optimization. Let me tell you what I see.
The cost structure of AI is shifting. Training costs are concentrating in a few frontier labs. Inference costs are democratizing. A company can now run capable open-weight models on modest hardware.
This changes the economics of enterprise AI deployment.
When inference was expensive, centralized API access made sense. When inference becomes cheap, on-premise deployment becomes attractive. Not for performance. For context.
An AI system running inside your firewall can access your ERP. Your CRM. Your internal communications. Your proprietary datasets. This context is worth more than a marginal improvement in model capability.
The winning architecture is not the best model behind an API. It is a good-enough model with deep organizational integration.
What Replaces the Agent Framework
I am not arguing that orchestration disappears. I am arguing that it becomes infrastructure rather than product.
Think about what happened to web servers. In 1998, building a web server was a product category. By 2008, it was a commodity. Apache and Nginx became invisible infrastructure that everyone used and nobody paid for.
Agent frameworks are on the same trajectory. LangChain and LlamaIndex will become invisible plumbing. Necessary but not differentiating.
The differentiation moves up the stack. To cognitive infrastructure. To organizational context. To the persistent knowledge substrates that encode what makes each enterprise unique.
The Talent Misallocation Problem
Li touches on this but I want to amplify it.
We are training a generation of engineers to build agent wrappers. This is a dead end.
The skills that matter for enterprise AI are different. Understanding how organizations actually work. Modeling decision processes. Designing feedback systems. Building knowledge representations that persist and compound.
These are not prompt engineering skills. They are systems thinking skills combined with domain expertise.
The engineer who understands healthcare operations and can model them formally will create more value than the engineer who can chain seventeen API calls together.
A Prediction
Within three years, the agent framework category will consolidate to two or three players. They will be cheap or free. They will be table stakes.
The value capture will shift to cognitive infrastructure providers. Companies that help enterprises encode their organizational knowledge in AI-native formats. Companies that build the persistent context layers that make frontier models actually useful for real work.
This is where I am placing my bets. Not on better wrappers. On deeper context.
The Uncomfortable Truth for Agent Builders
If you are building an agent company, ask yourself this question.
What do you have that OpenAI and Anthropic cannot ship as a feature?
If the answer involves orchestration logic, tool calling, or memory management, you are in trouble. These are features, not products. They will be absorbed into the base models or commoditized into open-source frameworks.
If the answer involves deep integration with a specific domain, proprietary data relationships, or organizational context that takes years to build, you might have something.
The test is simple. Can a well-funded competitor replicate your value proposition in six months? If yes, you do not have a business. You have a demo.
What I Am Building
We started with agent deployment optimization. Making agents run efficiently across different hardware. NVIDIA. AMD. Positron. TPUs. Inferentia. And many more.
But I am increasingly convinced that optimization of the wrong architecture is still the wrong architecture optimized.
The next phase of our work focuses on cognitive infrastructure primitives. The building blocks that enterprises need to encode organizational context in AI-native formats. The persistent substrates that make models useful rather than just impressive.
This is harder than building agents. It requires understanding both the AI systems and the organizations they serve. It requires patience. It requires accepting that the real value compounds slowly.
But it is the right problem. And the right problem matters more than the convenient one.
The Path Forward
Enterprise AI adoption will not be driven by agents. It will be driven by integration depth.
The companies that win will not have the best wrappers. They will have the deepest context. They will understand that AI capability is now abundant. What remains scarce is organizational knowledge in computable form.
The agent era was a necessary experiment. It taught us what does not work at scale. It revealed the limitations of orchestration without context. It showed us that wrappers depreciate while knowledge compounds.
Now we build what comes next.
Not smarter agents. Smarter infrastructure.
Not better prompts. Better context.
Not more tools. More understanding.
The future of enterprise AI is not about chaining capabilities together. It is about encoding organizational intelligence in forms that compound over time.
Agents will fade. Context will compound. And the enterprises that understand this distinction will capture the value that the agent builders leave on the table.
Thoughts?
