
ZDNET’s key takeaways
- Only 23% of IT managers have complete control over their agents.
- A majority say security guardrails will be inadequate within the next six months.
- Agent management needs to be a ‘first-class discipline.’
AI agents — so easy to spin up — are proliferating out of everyone’s control. And that’s becoming a problem that may undermine any benefits they are delivering.
That’s the conclusion of a just-released survey by Rubrik ZeroLabs, which finds that fewer than one in four IT managers (23%) say they have “complete” control over the agents within their organizations. To make matters worse, these agents aren’t necessarily delivering the productivity sought. A majority, 81%, report that the agents under their purview require more time in manual auditing and monitoring than they were intended to save via workflow improvements. Security is also less than stellar, the survey adds.
Creating AI agents is easy, and the problem is that “users often turn off VPNs or otherwise skirt security controls to spin up agents to act as assistants,” the report’s authors state. The result is a large volume of unsanctioned AI applications, both built internally and launched by vendors.
Agent sprawl resembles early cloud adoption
Across the industry, there is concern that agents are starting to get out of hand, with agent sprawl now a pervasive problem. “We are already seeing patterns similar to early cloud adoption, where teams spin up agents independently using different frameworks and vendors,” said Kriti Faujdar, senior product manager at Microsoft. “This leads to fragmentation, inconsistent governance, and hidden security gaps.”
The authors of the ZeroLabs survey found a disconnect between perceived control and operational reality when it comes to agents. A large majority of IT managers, 86%, anticipate that agentic proliferation will outpace security guardrails in the next year. More than half (52%) expect this to happen within the next six months. Plus, nearly all respondents indicate they lack the “undo” capabilities necessary to roll back unintended agent actions.
With the proliferation of agents across enterprise systems, industry observers worry that such sprawl is becoming too difficult to manage and contain. “Any team with API access can spin up an agent in an afternoon,” said Nik Kale, principal engineer with the Coalition for Secure AI. “Multiply that across a large enterprise, and you get hundreds of agents with overlapping permissions, no consistent identity model, and no one who can tell you the full inventory.”
Agentic observability can be notoriously challenging, and the ZeroLabs authors point to a growing need for telemetry that makes chains of agentic actions understandable, backed by enforcement points for security.
5 post-deployment questions
Tracking agent viability means answering the following questions post-deployment, as identified by the ZeroLabs study’s authors:
- What did the agent do? Called a trace, this is the ability to replay or at least reconstruct exactly what happened.
- Why did it do it? What did the agent believe caused it to take certain steps?
- What did it touch? Audit trails should contain a comprehensive list of any data or tools an agent interacted with.
- Did it succeed safely, and at what cost? How are organizations measuring task success rate, cited outputs, policy violations, or human escalations for an accurate understanding of ROI?
- Where did it fail? Can we reproduce the failure in order to address it?
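The study does not prescribe a data model, but the five questions above map naturally onto a structured audit record. The following is a minimal Python sketch of one possible shape, with illustrative names (`AgentStep`, `AgentAuditRecord`) that are assumptions, not any vendor's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentStep:
    """One step in an agent's action chain: what it did, and why."""
    action: str            # e.g. "call_tool:crm_lookup"
    rationale: str         # the agent's stated reason for the step
    resources: list[str]   # data or tools touched during the step


@dataclass
class AgentAuditRecord:
    """A replayable record covering the five post-deployment questions."""
    agent_id: str
    started_at: str
    steps: list[AgentStep] = field(default_factory=list)  # what & why
    succeeded: bool = False                               # did it succeed?
    policy_violations: list[str] = field(default_factory=list)
    cost_usd: float = 0.0                                 # at what cost?

    def log(self, action: str, rationale: str, resources: list[str]) -> None:
        self.steps.append(AgentStep(action, rationale, resources))

    def touched(self) -> set[str]:
        """Everything the agent interacted with ('What did it touch?')."""
        return {r for step in self.steps for r in step.resources}


# Usage: record a two-step run, then query the trail.
record = AgentAuditRecord(agent_id="invoice-bot-01",
                          started_at=datetime.now(timezone.utc).isoformat())
record.log("call_tool:crm_lookup", "needed the customer ID", ["crm_db"])
record.log("send_email:summary", "task asked for a report", ["smtp_gateway"])
record.succeeded = True
print(record.touched())  # the full set of systems the agent interacted with
```

Keeping each step's rationale alongside its action is what lets an auditor answer "why did it do it?" without re-running the agent.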
These are questions that are currently not being answered, the report states. As a result, many administrators and their organizations are unable to “define acceptable agentic behavior; audit what resources and tools agents can access; create policies for triggering a human in the loop; or roll back agentic actions.”
Trade-off between speed and governance
As agents act autonomously, they pose a greater risk than traditional software, said Faujdar. In today’s environment, there is a trade-off between speed and governance. “Organizations want to move fast, but without clear guardrails, they risk creating systems that are difficult to trust, audit, or scale. The winners will be those who treat agent management not as an afterthought, but as a first-class discipline.”
Keeping agents current is also a vexing challenge — as their foundation models tend to drift. “The agent you certified in Q1 is behaviorally different by Q3, through no fault of the platform,” said Renze Jongman, founder and CEO of Liberty91. “Your governance model has to assume the ground moves.”
At this point, there are “too many agents operating outside any governance boundary, including the ones teams build themselves,” said Kale, who advises keeping the orchestration layer in the agent stack separate from the model and governance layers. “If all three live inside one vendor’s platform, you’ve handed over your agent’s brain, its permissions, and its accountability chain in a single contract.”
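One way to read Kale's advice is to make the orchestration layer depend only on narrow interfaces, so the model and governance layers can be swapped or owned separately. The Python sketch below illustrates the separation; all class and method names are hypothetical, not any platform's API:

```python
from typing import Protocol


class ModelLayer(Protocol):
    """The agent's 'brain': swappable without touching orchestration."""
    def complete(self, prompt: str) -> str: ...


class GovernanceLayer(Protocol):
    """Policy checks and the accountability chain, owned independently."""
    def allow(self, action: str) -> bool: ...
    def record(self, action: str, result: str) -> None: ...


class Orchestrator:
    """Plans and executes steps; depends only on the interfaces above."""
    def __init__(self, model: ModelLayer, governance: GovernanceLayer):
        self.model = model
        self.governance = governance

    def run(self, task: str) -> str:
        action = f"complete:{task}"
        if not self.governance.allow(action):   # enforcement point
            return "blocked by policy"
        result = self.model.complete(task)
        self.governance.record(action, result)  # accountability chain
        return result


# Stub implementations stand in for real vendors here.
class EchoModel:
    def complete(self, prompt: str) -> str:
        return f"answer to: {prompt}"


class AllowAllGovernance:
    def __init__(self) -> None:
        self.audit: list[tuple[str, str]] = []

    def allow(self, action: str) -> bool:
        return True

    def record(self, action: str, result: str) -> None:
        self.audit.append((action, result))


governance = AllowAllGovernance()
agent = Orchestrator(EchoModel(), governance)
print(agent.run("summarize Q3 invoices"))  # prints "answer to: summarize Q3 invoices"
```

Because the orchestrator never imports a concrete model or governance class, no single vendor contract holds the brain, the permissions, and the audit trail at once.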
Agent oversight, Kale added, “should involve security, architecture, and the business unit that owns the outcomes, not just the team that wants to ship the fastest.”