A recent Washington Post article rightly highlighted the risks of agentic AI creating “silent errors” in consumer applications: hallucinations in healthcare advice, mistakes in legal drafts, a wrongly booked flight. These are valid concerns. But focusing solely on consumer applications misses the far more acute version of this problem, the one unfolding right now inside the enterprise.
The danger is that we are attempting to manage the new digital workforce with governance infrastructure designed for humans, not agents. And unlike humans, these agents are scaling faster than our ability to supervise them.
While there is debate about whether AI should be allowed to book a vacation, major global companies are quietly deploying agents that update Salesforce records, modify financial systems, and access production environments. We are rapidly approaching what I call the “100,000 Agent Problem”. Consider a mid-sized enterprise with 20,000 employees. If each employee uses just five AI agents during the workday (one for scheduling, one for CRM, one for coding, and so on), that organization is suddenly managing 100,000 autonomous entities with access to internal systems.
Yet for the past year, most corporate AI has been stuck in “Advisor Mode,” dutifully summarizing meetings, rewriting emails, and generating slide decks. This is safe, but it isn’t transformative. Summaries don’t move the needle on revenue. The shift we are seeing now is toward “Action Mode,” where AI stops suggesting what to do and starts actually doing it.
When you move from chat to action, the risk profile changes fundamentally. AI agents behave differently from traditional software. They are probabilistic, not deterministic. I often describe them as “enthusiastic interns”. Like a new intern, an AI agent is incredibly eager to help, moves very fast, and wants to clear its task list. But also like an intern, it lacks the institutional context to understand the collateral damage of its actions.
If you ask a human employee to “clean up the customer database,” they know that means fixing typos and merging duplicates. If you ask an “enthusiastic intern” agent to do the same, it might delete ten years of historical sales data because it viewed those inactive records as “clutter”. It did exactly what you asked, with zero malice, and caused a catastrophe.
This probabilistic behavior exposes a fatal flaw in our current security stack. For decades, we have relied on Identity and Access Management (IAM) to keep us safe. These systems answer one question: who are you? IAM works for humans because humans have judgment. If a Sales Director has permission to delete a deal in Salesforce, we trust them not to delete a million-dollar opportunity on a whim. But an AI agent inherits those same permissions without inheriting the judgment. If that Sales Director’s agent decides to “help” by deleting a record, the IAM system sees a valid user with valid credentials making a valid call. It waves the agent through the front door.
Traditional security is necessary but not sufficient for the agentic era. We need a new layer of infrastructure that governs behavior, not just identity. We need the ability to enforce granular, conditional permissions, allowing an agent to create a new opportunity but explicitly blocking it from deleting or editing an existing one. Until we have controls that can distinguish between a safe read action and a destructive write action, we are flying blind.
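To make that concrete, here is a minimal sketch in Python of what a behavior-level gate might look like. The tool names, policy table, and `ToolCall` shape are illustrative assumptions, not any particular product’s API; the point is simply that the check keys on the action the agent is attempting, not on whose credentials it carries.

```python
# Sketch of a behavior-level policy gate for agent tool calls.
# Tool names, the policy table, and ToolCall are hypothetical examples.

from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str        # e.g. "salesforce.create_opportunity"
    action: str      # "read", "create", "update", or "delete"
    principal: str   # the human whose credentials the agent inherited
    args: dict

# Policy: what the *agent* may do, independent of what the human may do.
AGENT_POLICY = {
    "salesforce": {"read": True, "create": True, "update": False, "delete": False},
}

def authorize(call: ToolCall) -> bool:
    """Allow the call only if the agent's behavior, not just its identity, is permitted."""
    system = call.tool.split(".")[0]
    rules = AGENT_POLICY.get(system, {})
    return rules.get(call.action, False)  # default-deny anything unlisted

def execute(call: ToolCall):
    if not authorize(call):
        # IAM would have waved this through; the behavioral layer does not.
        raise PermissionError(
            f"Agent acting as {call.principal} blocked: "
            f"{call.action} on {call.tool} is not permitted for agents."
        )
    print(f"Executing {call.tool} for {call.principal}")  # hand off to the real connector here

# The agent can open a new opportunity...
execute(ToolCall("salesforce.create_opportunity", "create", "sales.director", {"name": "Acme Q3"}))

# ...but cannot delete one, even though the Sales Director it acts for could.
try:
    execute(ToolCall("salesforce.delete_opportunity", "delete", "sales.director", {"id": "006XYZ"}))
except PermissionError as err:
    print(err)
```

The design choice worth noting is the default-deny: an action the policy does not explicitly allow is refused, which is the opposite of how inherited IAM permissions behave today.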
Beyond security, there is a massive economic barrier to the 100,000-agent reality: the “Context Window Exhaustion” crisis. The standard for connecting these agents to data is the Model Context Protocol (MCP). It is a brilliant innovation, acting like a menu that tells the AI what tools are available. But when you connect an agent to a full enterprise stack (Salesforce, Slack, Google Drive, Jira), that “menu” becomes a telephone book.
Currently, an AI agent can waste 80-90% of its context window (and your token budget) reading the descriptions of every tool in your company before it answers a single question. It is the corporate equivalent of hiring a consultant and paying them to read the entire employee directory and every procedure manual before they are allowed to answer a simple question about Q3 sales.
This context exhaustion doesn’t just spike costs by 95%; it destroys accuracy. When an AI is forced to choose between 500 similar-sounding tools, it gets confused. It starts hallucinating, searching Google Drive for data that lives in Snowflake. Without an intelligent context layer to filter this noise, the economics of enterprise AI simply do not work at scale.
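What an “intelligent context layer” might do is easier to see in code. The sketch below, in Python, trims the tool menu to a relevant shortlist before it ever reaches the model. The catalog entries and the keyword-overlap scoring are illustrative stand-ins; a production layer would use proper retrieval (embeddings, usage history, per-team scoping) rather than bag-of-words matching.

```python
# Sketch of a context layer that shrinks the tool "menu" before prompting.
# The catalog and scoring function are hypothetical examples.

TOOL_CATALOG = [
    {"name": "snowflake.query_sales",   "description": "Run SQL against the sales data warehouse in Snowflake."},
    {"name": "gdrive.search_files",     "description": "Search documents and slides stored in Google Drive."},
    {"name": "salesforce.get_pipeline", "description": "Fetch open opportunities and pipeline stages from Salesforce."},
    {"name": "jira.create_ticket",      "description": "Create an engineering ticket in Jira."},
    # ...imagine roughly 500 more entries, each costing context tokens to describe.
]

def relevant_tools(question: str, catalog: list[dict], top_k: int = 3) -> list[dict]:
    """Score each tool by word overlap with the question and keep only the top_k."""
    q_words = set(question.lower().split())

    def score(tool: dict) -> int:
        t_words = set((tool["name"] + " " + tool["description"]).lower().split())
        return len(q_words & t_words)

    return sorted(catalog, key=score, reverse=True)[:top_k]

question = "What were Q3 sales by region?"
shortlist = relevant_tools(question, TOOL_CATALOG)

# Only the shortlist is serialized into the prompt, so the agent chooses among
# a handful of plausible tools instead of a 500-entry telephone book.
for tool in shortlist:
    print(tool["name"])
```

Even this crude filter illustrates the economics: the agent pays for a few tool descriptions per question instead of all of them, and it is far less likely to reach for Google Drive when the answer lives in Snowflake.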
We have seen this movie before. In the early days of the web, we had Adobe Flash. It was messy, it crashed browsers, and it had security holes, but it was utterly necessary to bridge the gap between the static web and the dynamic, multimedia future.
MCP as it exists today is a transitional technology that allows us to bridge our legacy systems to this new agentic world. In its short life, it’s already evolved a lot and will continue to do so. It may someday even be superseded by new and more powerful protocols. But in the meantime, it is the only game in town.
CIOs cannot afford to wait for the perfect standard. Just as employees brought iPhones to work during the Bring Your Own Device (BYOD) revolution regardless of IT policy, employees are now engaging in Bring Your Own AI (BYOAI). They are spinning up unvetted MCP servers and connecting them to corporate data because they need to get their jobs done. Blocking this is futile; it just drives the activity into the shadows.
As we look into 2026, enterprises face a stark choice. They can keep their AI agents read-only: safe, neutered, and ultimately useless. Or they can embrace “write” access, unlocking the massive productivity gains of agents that can actually execute work.
To do the latter, we must stop treating governance as a brake and start treating it as a launchpad. IT departments must evolve into the HR Department for AI, responsible for onboarding, monitoring, and, when necessary, firing these “digital interns”.
The real risk isn’t that an AI books the wrong flight for a consumer. The real risk is that enterprises will deploy these powerful agents without the infrastructure to control and manage them, or conversely, that they will be too paralyzed by fear to deploy them at all. The technology to govern this workforce exists. It is time we started using it.

