The Telecom Agentic AI Trap: Why Poor Data Quality Will Kill Your Autonomous Ambitions
- Jan 22

Executive Summary: key takeaways for telecom leaders
The telecom industry is rapidly experimenting with agentic AI: according to IBM, 44% of CSPs report early implementations in customer-facing workflows.
The vision is compelling: autonomous agents that don’t just chat but "think, plan, and act" to resolve complex network and billing issues without human intervention. While the ambition is high, the execution determines the winners.
According to research from Omdia, CSPs spent a staggering $90 billion on customer experience labor in 2024 alone. The incentive to optimize is massive. And the potential is real: an IBM study reveals that CSPs tracking AI’s impact on CX are seeing an average improvement of 8%.
In the telecom world, where CSAT scores are notoriously hard to move, an 8% lift is a game-changer. It proves that AI does deliver. But why isn't everyone seeing these results? The answer lies in the unsexy, often ignored foundation: data quality.
An autonomous agent is only as smart as the data you feed it.
Why the gap lies in execution, not ambition
It is easy to buy an AI model; it is incredibly difficult to feed it the right data. As IBM highlights, the gap in the industry is not a lack of vision, but a lack of execution capability. Many organizations simply lack the data infrastructure, governance, and cross-functional alignment needed to scale AI effectively.
For telecom operators, the challenge is distinct. Critical data is locked in rigid, siloed legacy systems (OSS/BSS) that were never designed for the fluidity of AI. Extracting this data is costly and technically complex. Without a modern data architecture, your "smart" agent is left blind, forced to make decisions based on fragmented or outdated information.

Perhaps the most vivid description of this challenge comes from Philippe Ensarguet, VP of Software Engineering at Orange, cited in the TM Forum report:
“You can't be successful at agentic AI if you are not successful at automation. You are not able to be successful at automation if you’re not successful at data. You cannot be successful at data if you’re not successful at telco infrastructure. So, I would say it’s like a cake with multiple layers, and you don’t have the [agentic AI] cherry on top if you don’t manage the rest."
Key takeaway
You cannot simply layer agentic AI on top of a fragmented infrastructure. A successful implementation requires a data platform that enables true data democratization to break down silos between network and support teams.
Learn more about building a unified foundation with Subtonomy’s Data Platform.
The danger of "thinking" AI agents: security & boundaries
The shift from generative AI to agentic AI introduces a new layer of risk: autonomy. Unlike a chatbot that suggests an answer, an agentic AI has the ability to "think, plan, and act". This autonomy requires entirely new policies. We must ask:
How independent should we allow them to be?
If an agent has the power to issue refunds or reconfigure network parameters, it must operate within strict guardrails. As TM Forum warns, without new policies, agents risk "overstepping their own boundaries" or taking actions they shouldn't.
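To make the idea of guardrails concrete, here is a minimal sketch of a default-deny policy check an operator might place in front of every agent action. All names, action kinds, and limits are hypothetical, not a reference implementation:

```python
from dataclasses import dataclass

# Hypothetical policy: the largest refund an agent may issue autonomously.
MAX_AUTONOMOUS_REFUND = 500.0

@dataclass
class AgentAction:
    kind: str          # e.g. "refund", "reconfigure_network"
    amount: float = 0  # refund amount, if applicable

def within_guardrails(action: AgentAction) -> bool:
    """Return True only if policy explicitly allows the action."""
    if action.kind == "refund":
        return action.amount <= MAX_AUTONOMOUS_REFUND
    if action.kind == "reconfigure_network":
        return False  # always escalate network changes to a human
    return False      # default-deny: unknown actions are blocked

# A small refund passes; a network change is escalated.
print(within_guardrails(AgentAction("refund", amount=200)))   # True
print(within_guardrails(AgentAction("reconfigure_network")))  # False
```

The design choice that matters here is default-deny: the agent can only do what the policy enumerates, so a new capability is never exposed by accident.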
This makes the implementation of emerging frameworks like the Model Context Protocol (MCP) and Agent-to-Agent (A2A) protocol critical, but risky. While these protocols let agents discover the right APIs to solve a problem, they must be set up with the highest security in mind. If an agent can autonomously call APIs, those APIs must be high-quality: secure, well-documented, and strictly governed, to prevent an agent from accidentally (or maliciously) wreaking havoc.
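One way to enforce that governance is a gateway between the agent and the operator's APIs, where every tool call must be authenticated, on an allow-list, and well-formed before it reaches a backend. A minimal sketch, with all tool names, parameters, and the token check standing in for real mechanisms (OAuth, mTLS, schema validation):

```python
# Hypothetical allow-list: the only tools an agent may call,
# with the required parameters for each.
ALLOWED_TOOLS = {
    "get_customer_ticket": {"ticket_id"},
    "get_cell_status": {"cell_id"},
}

def authorize_tool_call(token: str, tool: str, params: dict) -> bool:
    """Gate every agent tool call: auth, allow-list, then parameter check."""
    if token != "valid-agent-token":       # stand-in for real authentication
        return False
    if tool not in ALLOWED_TOOLS:          # only governed, documented APIs
        return False
    return ALLOWED_TOOLS[tool] <= set(params)  # all required params present

# A governed read passes; an un-listed, destructive call is refused.
print(authorize_tool_call("valid-agent-token", "get_cell_status", {"cell_id": "A1"}))  # True
print(authorize_tool_call("valid-agent-token", "issue_refund", {"amount": 10**6}))     # False
```

The same pattern applies whether calls arrive via MCP tool definitions or A2A messages: the gateway, not the agent, decides what is callable.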
Key takeaway
Autonomy without boundaries is a liability. To let agents "think" safely, you need a secure framework ensuring every API call is authenticated, accurate, and governed.
Explore how we prepare telecom data for secure AI-driven customer support.
Comparison table: traditional vs. AI-ready data infrastructure

| Feature | Legacy infrastructure | AI-ready infrastructure |
| --- | --- | --- |
| Data access | Batch-based, siloed | Real-time, unified via APIs |
| AI action | Reactive (user prompts) | Proactive (autonomous triggers) |
| Cost risk | High (hallucinations & retraining) | Low (grounded in "truth") |
| Agent behavior | Guessing based on training data | Acting based on live network data |
The Solution: High-quality telecom data & AI agentic readiness
You cannot build a skyscraper on a swamp. To bridge the execution gap and achieve that 8% CSAT lift, operators must prioritize the architecture over the algorithm.
- Unlock legacy data safely: You need an abstraction layer that liberates data from rigid legacy silos without breaking them.
- Define the boundaries: Implement strict governance frameworks that define exactly what an AI agent can and cannot do.
- Secure the front door: Ensure that the APIs exposed to your agents (via MCP, A2A, or similar) are high-quality and secure.
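The first step, an abstraction layer over legacy silos, can be sketched as a read-only facade that joins fragmented OSS/BSS lookups into one governed record, so agents never touch the legacy systems directly. The classes and fields below are illustrative stand-ins, not real system interfaces:

```python
class LegacyBSS:
    """Stand-in for a batch-oriented billing silo."""
    def fetch_invoice(self, msisdn: str) -> dict:
        return {"msisdn": msisdn, "balance_due": 129.0}

class LegacyOSS:
    """Stand-in for a network inventory / assurance silo."""
    def cell_for(self, msisdn: str) -> dict:
        return {"msisdn": msisdn, "serving_cell": "SE-STO-042",
                "status": "degraded"}

class SubscriberFacade:
    """Single, governed entry point an agent is allowed to query."""
    def __init__(self, bss: LegacyBSS, oss: LegacyOSS):
        self._bss, self._oss = bss, oss

    def subscriber_view(self, msisdn: str) -> dict:
        # Join billing and network context into one AI-ready record,
        # without the agent ever calling the silos directly.
        return {**self._bss.fetch_invoice(msisdn),
                **self._oss.cell_for(msisdn)}

view = SubscriberFacade(LegacyBSS(), LegacyOSS()).subscriber_view("+46700000001")
print(view["serving_cell"], view["balance_due"])  # SE-STO-042 129.0
```

Because the facade is read-only and owns the join logic, the legacy systems stay untouched while the agent gets one consistent, governable view.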
Turning complex telecom data into AI-ready APIs
To scale agentic AI beyond pilots, telecom operators must get the data foundation right. Legacy OSS and BSS systems weren’t built for AI-driven workflows. Data is fragmented, slow to access, and difficult to govern — limiting what AI and autonomous agents can safely do.
Subtonomy specializes in turning complex telecom data into secure, AI-ready APIs. We abstract and structure data from network, service, device, and customer systems so it can be reliably used in AI and agentic setups, without exposing or disrupting legacy environments.
By grounding AI in real-time, governed telecom data, operators reduce hallucinations, enforce clear boundaries, and enable autonomous workflows that actually scale.
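Grounding, in the narrowest sense, means the agent only states what it can trace to a live record, and refuses to guess otherwise. A minimal, hypothetical illustration of that rule:

```python
def grounded_answer(live_record):
    """Answer only from a live, governed data record; never from guesswork."""
    if live_record is None:
        return "ESCALATE: no live data available"  # refuse to hallucinate
    return (f"Your service is {live_record['status']} "
            f"on cell {live_record['cell']}")

# With live data the agent answers; without it, it escalates.
print(grounded_answer({"status": "degraded", "cell": "SE-STO-042"}))
print(grounded_answer(None))  # ESCALATE: no live data available
```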
The result: AI that moves from experimentation to impact — improving customer support efficiency while keeping cost and risk under control.


