Deploying AI agents is not your typical software launch – 7 lessons from the trenches




ZDNET’s key takeaways

  • Agent deployments differ from traditional software launches.
  • Governance cannot be an afterthought with agents.
  • ‘AgentOps’ now enters the scene. 

Excitement about AI agents may seem over the top, but remember: It takes work and planning on the ground to make these tools productive. Top-level actions include giving agents freedom, but not too much freedom, while also rethinking traditional return-on-investment measures. 

Also: 3 ways AI agents will make your job unrecognizable in the next few years

Effective AI development and management require making informed choices in control, investment, governance, and design, according to Kristin Burnham, writing in MIT Sloan Management Review. Reviewing recent research conducted by Sloan and Boston Consulting Group, she cites the “tensions” that AI agent developers and proponents need to be aware of:

  • Constraining agentic systems too much limits their effectiveness, while granting too much freedom can introduce unpredictability.
  • Agentic AI forces organizations to rethink how they assess cost, timing, and return on investment.
  • Organizations must decide whether to quickly retrofit agentic AI into existing workflows or take the time to reimagine those workflows altogether.

Also: Forensic vibers wanted – and 10 other new job roles AI could create

Across the industry, there is agreement that agents require new considerations beyond what we’ve become accustomed to in traditional software development. In the process, new lessons are being learned. Industry leaders shared some of their own lessons with ZDNET as they moved forward into an agentic AI future.

1. Governance matters — a lot

 “Confidence isn’t accuracy,” said Nik Kale, principal engineer at Cisco, who led a team delivering agents to provide expert-level technical guidance to more than 100,000 users. Early versions of the agents “could respond confidently but incorrectly, which required us to invest heavily in grounding responses through retrieval and structured knowledge.” 
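Grounding a response through retrieval, as Kale describes, means the agent answers from documents it can actually cite rather than from the model's memory alone. A minimal sketch of the idea, with a toy in-memory knowledge base and keyword matcher standing in for a real retrieval system (all names and data here are illustrative, not Cisco's implementation):

```python
# Toy in-memory "knowledge base"; a real system would use a vector store.
KNOWLEDGE_BASE = {
    "vlan": "A VLAN partitions one physical network into isolated segments.",
    "ospf": "OSPF is a link-state routing protocol that computes shortest paths.",
}

def retrieve(question: str) -> list[str]:
    """Return documents whose key appears in the question (toy matcher)."""
    q = question.lower()
    return [doc for key, doc in KNOWLEDGE_BASE.items() if key in q]

def grounded_answer(question: str) -> str:
    """Answer only from retrieved documents; refuse when nothing matches."""
    docs = retrieve(question)
    if not docs:
        # Refusing beats responding "confidently but incorrectly".
        return "No grounded answer available."
    return docs[0]
```

The key design choice is the refusal path: without retrieved support, the agent declines rather than improvising.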

An important lesson learned was “governance can’t be retrofitted,” Kale added. “When oversight and policy controls are added late, systems often lack the architectural hooks to support them, forcing painful pauses or redesigns.”

Also: 8 ways to make responsible AI part of your company’s DNA

In the long run, trust accelerates, Kale said. “Once systems perform well, human scrutiny drops. That’s when scope creep and unintended autonomy can emerge if boundaries aren’t explicit.”

Kale urges AI agent proponents to “grant autonomy in proportion to reversibility, not model confidence. Irreversible actions across multiple domains should always have human oversight, regardless of how confident the system appears.” Observability is also key, said Kale. “Being able to see how a decision was reached matters as much as the decision itself.”
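Kale's rule of thumb reduces to a one-line policy: the gate for autonomous execution checks reversibility and deliberately ignores the model's confidence score. A hypothetical sketch (the action fields and names are illustrative, not Cisco's system):

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str
    reversible: bool         # can the effect be undone automatically?
    model_confidence: float  # deliberately ignored by the policy below

def requires_human_approval(action: ProposedAction) -> bool:
    """Grant autonomy in proportion to reversibility, not confidence."""
    return not action.reversible

# A highly confident but irreversible action still needs sign-off...
delete = ProposedAction("delete_customer_record", reversible=False, model_confidence=0.99)
# ...while a low-confidence but reversible one can run and be undone.
retry = ProposedAction("retry_health_check", reversible=True, model_confidence=0.40)
```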

2. Start narrow

With agents, “we intentionally start narrow,” said Tolga Tarhan, CEO of Atomic Gravity. “Most of the agents we deploy are scoped to a single domain with clear guardrails and measurable outcomes. That might be an engineering copilot, an operations assistant, or an agent that synthesizes complex datasets for executives.”

3. Ensure data quality

“AI works well when it has quality data underneath,” said Oleg Danyliuk, CEO at Duanex, a marketing agency that built an agent to automate the validation of leads from visitors to its site. “In our case, to understand whether a lead is interesting for us, we need to gather as much data as we can. The hardest part is social network data, as it is mostly not accessible to scraping. That’s why we had to implement several workarounds and collect only the public portion of the data.”

Also: No, AI isn’t stealing your tech job – it’s just transforming it

“Data quality is the number one issue,” Tarhan agreed. “Models only perform as well as the information they’re given.”

4. Start with the problem – not the technology

“Define success upfront,” said Tarhan. “Instrument everything. Keep humans in the loop longer than feels necessary. And invest early in observability and governance. When done right, AI agents can be transformational. When rushed, they become expensive demos. The difference is discipline.” Tarhan’s team makes it a point to treat agents with roadmaps, feedback loops, and continuous iteration — and “not as science experiments.”

5. Consider ‘AgentOps’ methodologies

“AI agents do not succeed on model capability alone,” said Martin Bufi, a principal research director at Info-Tech Research Group. His team designed and developed AI agent systems for enterprise-level functions, including financial analysis, compliance validation, and document processing. What helped these projects succeed was the employment of “AgentOps” (agent operations), which focuses on managing the entire agent lifecycle.   

6. Keep agents focused

Rather than building monolithic do-everything agents, Bufi advised “employing multiple specialized agents for functions such as analysis, validation, routing, or communication.” In addition, Bufi’s team sought to have these agent teams mirror how human teams operate, “through explicit orchestration patterns: hub-and-spoke for parallel work, or sequential pipelines where intent and confidence had to be established before deeper action.”
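A sequential pipeline of the kind Bufi describes can be sketched in a few lines: a stubbed intent agent must establish intent with sufficient confidence before a specialist agent does deeper work. All the agent logic here is a placeholder, illustrative only:

```python
def intent_agent(request: str) -> dict:
    """Stub intent classifier returning an intent and a confidence score."""
    intent = "invoice_query" if "invoice" in request.lower() else "unknown"
    confidence = 0.9 if intent != "unknown" else 0.2
    return {"intent": intent, "confidence": confidence}

def analysis_agent(request: str, intent: str) -> str:
    """Stub specialist agent that performs the deeper work."""
    return f"analysis of {intent!r} request: {request}"

def run_pipeline(request: str, min_confidence: float = 0.7) -> str:
    """Sequential pipeline: establish intent and confidence before acting."""
    triage = intent_agent(request)
    if triage["confidence"] < min_confidence:
        return "escalated to a human: intent unclear"  # stop early
    return analysis_agent(request, triage["intent"])
```

The early escalation mirrors the pattern in the quote: the pipeline never reaches the specialist unless the upstream agent has established confident intent.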

7. Keep context in line, and stay adaptable 

Even for a relatively confined single-user agent, “context management is a significant hurdle and can lead to major problems if not handled correctly,” said Sean Falconer, head of AI at Confluent, reflecting on a personal agent he built. “As agents loop through tools and iterative interactions, the context window fills rapidly. While older data points might lose relevance, models don’t always prioritize the right information implicitly.”

Also: Could you be an AI data trainer? How to prepare and what it pays

To maintain high-quality and consistent output, developers “spend a disproportionate amount of time optimizing how they prune, summarize, and inject context so the agent doesn’t lose the thread of the original objective,” Falconer explained. “Engineer for adaptability from day one. Ensure your AI investments are flexible and properly abstracted. Avoid vendor or model lock-in so you can pivot quickly as the next wave of innovation arrives.”
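The pruning loop Falconer describes can be sketched as follows: the original objective stays pinned at the front of the context, the oldest turns are dropped (a real system would summarize them) until the context fits a token budget, and the most recent turns are always kept verbatim. Token counting is approximated by word count purely for illustration:

```python
def approx_tokens(text: str) -> int:
    """Crude token estimate: whitespace word count."""
    return len(text.split())

def prune_context(objective: str, turns: list[str], budget: int,
                  keep_recent: int = 3) -> list[str]:
    """Keep the objective pinned; drop oldest turns until under budget."""
    while (sum(approx_tokens(t) for t in [objective] + turns) > budget
           and len(turns) > keep_recent):
        turns = turns[1:]  # a real system would summarize, not just drop
    return [objective] + turns
```

Because the objective is re-injected at the front on every call, the agent cannot “lose the thread” of its original goal no matter how many turns are pruned.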
