Developing for the Virtual Agentic Computer
Published: 1/16/2026
By: Mark Fisher
Most examples of AI today are being built within Applications. It starts with a chatbot in the corner of an existing web UI, or with swapping a database for a language model in a backend service. It becomes an “Agent” when that backend service is given access to Tools, so that in addition to generating text it can perform actions suggested by the generated text. And lots of very impressive new experiences are being created with that approach!
But a whole new world of experience-building opens up once you see an Agentic System as a new form of virtual computer upon which a new form of Application runs.
The New Agentic Stack
A conceptual mapping of the new stack looks like this: the Model is the processor, with natural language as its instruction set; Tools are the peripherals; the Agent Loop is the operating system; Skills are the Applications; and Agent-to-Agent protocols are the networking layer.
The layers and their boundaries are being formalized through emerging standards. Those standards have their roots in well-respected organizations and are driving rapid adoption across the ecosystem.
The Model Context Protocol, originally developed by Anthropic, has enabled an extraordinary proliferation of Tools that plug and play across Agents. Anthropic subsequently released Agent Skills as an open standard, born from recognizing common patterns in how Agents work across sets of Tools, and it too is seeing widespread adoption. Additional standards such as A2A, originally developed by Google, even address the Agent-to-Agent “networking” layer of the stack.
Having these well-defined interfaces codified through widely adopted standards provides the structural support for the metaphor.
Skills as the New Applications
Thinking this way shifts where we focus as developers. Instead of building an Agent within an Application using a framework, the Agent is a manifestation of composed Skills upon the agentic stack. We curate and customize Tools to support those Skills as they consume or produce media types, generate UI widgets, and orchestrate any combination of Tool calls.
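To make the relationship concrete, here is a minimal sketch of the idea in Python. All names here (`TOOL_REGISTRY`, `Skill`, `fetch_invoice`, `send_summary`) are hypothetical illustrations, not any real SDK's API: Tools are registered callables, and a Skill is a declarative unit that names the Tools it relies on and carries guidance for the model.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical registry: Tools are callables discoverable by name.
TOOL_REGISTRY: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a function as a Tool under the given name."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        TOOL_REGISTRY[name] = fn
        return fn
    return register

@tool("fetch_invoice")
def fetch_invoice(invoice_id: str) -> str:
    # Stand-in for a call into a real backend service.
    return f"invoice {invoice_id}: $120.00"

@tool("send_summary")
def send_summary(text: str) -> str:
    return f"sent: {text}"

@dataclass
class Skill:
    """A Skill names its Tools and carries instructions for the model."""
    name: str
    instructions: str
    tools: list[str] = field(default_factory=list)

billing = Skill(
    name="billing-summary",
    instructions="Fetch the invoice, then send a one-line summary.",
    tools=["fetch_invoice", "send_summary"],
)

# The Agent Loop (simulated here) resolves the Skill's Tools and composes them.
result = TOOL_REGISTRY["send_summary"](TOOL_REGISTRY["fetch_invoice"]("A-17"))
print(result)  # sent: invoice A-17: $120.00
```

The point of the sketch: the Skill itself contains no Application logic, only a composition over Tools that the Agent Loop can enact.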
When some need is not met by what exists in shared Tool registries, we create new Tools, just as we created new Libraries when no existing Library met our needs while building Applications. Often those Tools will map to our existing Applications, or to refactored versions of them. That is where engineering effort can focus on domain-specific differentiation; the Agent Loop itself is undifferentiated.
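A sketch of that mapping, under assumed names (`legacy_order_lookup` stands in for an existing Application's logic, and `order_status_tool` is a hypothetical adapter): the Application code is left unchanged, and the Tool is a thin boundary that accepts the JSON arguments an Agent would pass and returns text for the model.

```python
import json

def legacy_order_lookup(order_id: str) -> dict:
    """Existing Application logic, unchanged by the agentic stack."""
    return {"order_id": order_id, "status": "shipped"}

def order_status_tool(arguments: str) -> str:
    """Tool adapter: parse the Agent's JSON arguments, delegate to the
    existing Application, and render a result the model can use."""
    args = json.loads(arguments)
    record = legacy_order_lookup(args["order_id"])
    return f"Order {record['order_id']} is {record['status']}."

print(order_status_tool('{"order_id": "B-42"}'))  # Order B-42 is shipped.
```

The differentiation lives in `legacy_order_lookup`; the adapter is deliberately boring.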
When we need something customized within the stack, we should be able to build upon well-defined extension points, but that is the agentic equivalent of systems engineering. Those extension points should have their own units of delivery where implementation artifacts are provided as isolated components, not all distributed together in a monolith.
That isolation is architecturally enforceable when supported by a zero-trust execution environment and least-privilege capability model. Such enforcement boundaries become even more important in the midst of agentic unpredictability. Less deterministic behavior means interception points for observability, security, and durability are even more essential.
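An interception point for observability can be sketched as a generic wrapper around Tool calls; this is an illustrative pattern, not any particular framework's API. The nondeterministic Agent Loop then leaves a deterministic audit trail regardless of which Tools the model chooses to invoke.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.tools")

def observed(fn):
    """Interception point: record every Tool call with its timing,
    even when the call raises, so behavior stays auditable."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("tool=%s elapsed_ms=%.1f", fn.__name__, elapsed_ms)
    return wrapper

@observed
def search_docs(query: str) -> str:
    # Stand-in for a real Tool implementation.
    return f"results for {query!r}"

print(search_docs("quarterly report"))
```

The same shape serves security (argument inspection before the call) and durability (checkpointing after it).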
The Composable User Experience
The earliest computers had to be wired for fixed tasks, and the seeds of “software” engineering were planted when those tasks could instead be defined as programs and stored as pluggable content. The historical progression of software development leaves a trail of increasingly decoupled systems: software from hardware, virtual from physical, cloud services from managed infrastructure, and now the agent-driven experience from the full-stack application.
Building a full stack Agent per experience is the fixed-task phase for agentic systems. We should be able to skip straight to the dynamic experience end of the historical line. But that will not happen if we think building an Agent means building an Application that interacts with a language model instead of a database. These systems should be modular from the start, because we have history as a guide. The agentic shift is on the level of the fixed-task to software shift. Agents should be dynamic compositions, each contextual execution a distinct “performance”.
Having history as a guide should also mean avoiding mistakes made in the adoption of microservices. The key difference is that the Agent Loop can rely on component boundaries that are static and generic at the infrastructure level, like peripheral standards for a physical computer. Domain-specific negotiation happens within the loop, as instructed by Skills in the language of that domain. That enables integration without requiring the bespoke connectivity of domain-specific API contracts. Tools will be more effective if they accept and produce strong types, but that doesn’t need to create tight coupling between components, since their interactions are mediated through a processor whose “instruction set” is natural language.
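A minimal sketch of that mediation, with invented names throughout (`convert_currency`, `SCHEMA`, `invoke`, and a fixed illustrative exchange rate): each Tool declares a typed parameter schema, and a single generic `invoke` step validates model-generated JSON arguments before calling the strongly typed Tool. No Tool ever needs to know another Tool's contract.

```python
import json

def convert_currency(amount: float, currency: str) -> str:
    """A strongly typed Tool."""
    ASSUMED_RATE = 0.92  # illustrative fixed rate, not a real service
    return f"{amount} USD = {amount * ASSUMED_RATE:.2f} {currency}"

# The Tool's declared parameter types, published generically.
SCHEMA = {"amount": float, "currency": str}

def invoke(tool, schema, raw_args: str) -> str:
    """Generic mediation: parse the model's JSON arguments, check each
    field against the declared type, then call the typed Tool."""
    args = json.loads(raw_args)
    for name, expected in schema.items():
        if not isinstance(args.get(name), expected):
            raise TypeError(f"{name} must be {expected.__name__}")
    return tool(**args)

print(invoke(convert_currency, SCHEMA, '{"amount": 10.0, "currency": "EUR"}'))
```

The coupling lives entirely in the generic validation step; the strong types stay local to each Tool.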
The Agentic Field is Green
Rather than refactoring Applications to include Agents, the relationship is inverted: existing Applications fit into the new stack as Tools. Agents are composed, and the new user experiences are defined through collaborations across Agents that provide a diversity of Skills. The existing Applications, with all their determinism and durability, support those Skills as they express higher level usage patterns across Tools.
In case this sounds like an architectural fever dream, keep in mind: the standards exist, the ecosystem exists, the enforceable isolation model exists, and the composable foundation exists. The mindset shift is all that remains.
When composed as described above, Agents are by definition manifesting at a new layer, thus inherently greenfield. And nothing inspires developers more than trekking into a vast expanse of new territory, especially when they still have all of their previously built tools available in their pack.