In Part 3, we walked through the midnight maintenance scenario - an aggregation router upgrade that caused KPI dips ten minutes after the MOP completed. We saw three agents work in sequence: the KPI Drift Monitor flagging the deviation, the Change Management Agent resolving the identity gap, and the RCA Agent validating the causal path through the NKG.

But we treated those agents as a black box. We showed what they did, not how they actually work. That is what this part addresses.

The question is worth asking carefully: what makes an autonomous agent different from a sophisticated script? The answer lies in two things working together - a reasoning engine that constructs plans rather than following fixed procedures, and a trust layer that ensures every action stays within defined operational boundaries. Get either of these wrong and you either have a system that cannot handle novel situations, or one that cannot be safely deployed in a production network.

The Dual-Core Design

Every agent in the system has two cores working in parallel: an Intelligence Core that reasons over goals, and a Trust Layer that governs what actions are permissible. Neither works without the other.

Think of it this way: the Intelligence Core determines what should be done. The Trust Layer determines what can be done. The intersection of these two is where autonomous action becomes safe enough to deploy on live networks.

The Intelligence Core: Reasoning Over Goals, Not Procedures

Traditional automation follows a script. If condition A, then action B. This works well for known, predictable situations - and breaks the moment something falls outside the script's assumptions.

An autonomous agent works differently. Rather than following a fixed procedure, it constructs a plan based on the current situation and the goal it has been given. In the midnight scenario, the KPI Drift Monitor was not told "if latency rises on Router-A, send an alert." It was given a goal - detect meaningful KPI drift and determine whether it warrants escalation - and it built a plan to achieve that goal based on what it observed.

The reasoning loop that makes this possible has four stages:

Perception: The agent ingests live data from the monitoring layer - KPI streams, fault indicators, anomaly signals - and enriches this with context from the NKG (what does the topology look like around the affected element?) and the TKG (what changed recently, and when?). This is not passive data collection. The agent is actively building a situational picture.

Planning: The agent decomposes its goal into a sequence of sub-tasks. In the midnight scenario this looked like: query the TKG for recent changes in the relevant time window; traverse the NKG to identify which services and customers depend on the affected element; compare current KPI behaviour against historical baselines; determine whether the deviation is localized or spreading. The plan is constructed dynamically based on what the agent finds at each step.

Reasoning: At the core of every agent is a reasoning model - a combination of language models and domain-specific models - that interprets what the data means, identifies the most likely root cause, and determines the appropriate response. This is where the agent moves from observation to conclusion.

Tool Use: The agent does not act directly on the network. It identifies which tools it needs - an API call to the inventory system, a query to the performance analytics engine, a configuration diff comparison - and invokes them through governed interfaces. The tools are the hands. The reasoning engine is the brain.
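To make the loop concrete, here is a minimal sketch of perception, planning, and reasoning in Python. Everything in it is illustrative: `StubNKG`, `StubTKG`, the element names, and the event shapes are hypothetical stand-ins for the real graph and monitoring interfaces, and the tool invocations are represented by the stub graph queries.

```python
class StubNKG:
    """Stand-in for the Network Knowledge Graph: topology lookups."""
    def neighbours(self, element):
        return {"Router-A": ["Agg-1", "Core-3"]}.get(element, [])

class StubTKG:
    """Stand-in for the Temporal Knowledge Graph: what changed, and when."""
    def changes_near(self, element, window_min):
        # Illustrative: pretend a software upgrade recently touched Router-A.
        return [{"element": element, "change": "sw-upgrade"}] if element == "Router-A" else []

def perceive(feed, nkg, tkg):
    """Stage 1: build a situational picture, enriched with graph context.
    The nkg/tkg calls stand in for tool use through governed interfaces."""
    picture = []
    for event in feed:
        picture.append({
            "event": event,
            "neighbours": nkg.neighbours(event["element"]),
            "recent_changes": tkg.changes_near(event["element"], window_min=30),
        })
    return picture

def plan(picture):
    """Stage 2: decompose the goal into sub-tasks based on what was observed."""
    steps = ["compare_baseline"]
    if any(p["recent_changes"] for p in picture):
        steps.append("correlate_changes")  # only planned when a change was found
    return steps

def reason(picture, steps):
    """Stage 3: move from observation to conclusion."""
    if "correlate_changes" in steps:
        return "escalate: drift coincides with a recent change"
    return "monitor: no correlated change found"

feed = [{"element": "Router-A", "kpi": "latency", "deviation": 3.2}]
picture = perceive(feed, StubNKG(), StubTKG())
steps = plan(picture)
conclusion = reason(picture, steps)
```

The point of the sketch is the shape, not the logic: the plan is not fixed in advance but grows out of what perception finds, which is exactly what distinguishes this loop from an if-condition-then-action script.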

The Fusion of AI Paradigms: Why Three Layers Are Needed

One of the most important architectural decisions in building these agents is the combination of three distinct AI paradigms. Each has a role that the others cannot fill.

Generative AI - the Language Understanding Layer: Large and small language models (LLMs and SLMs) interpret unstructured data - MOP documents, change tickets, field reports, vendor manuals, troubleshooting guides, network configuration files. In the midnight scenario, the Change Management Agent needed to understand what the MOP actually described before it could reason about its impact. No deterministic system can read a MOP document and extract operational intent. Generative AI can.

Symbolic Reasoning - the Deterministic Layer: For diagnostic conclusions that will drive real network actions, probabilistic reasoning alone is not sufficient. The agent switches to deterministic logic - traversing the NKG to establish topological facts, querying the TKG to verify causal relationships between timestamped events. This is what allows the system to present engineers with verifiable proof rather than a confidence score. When the RCA Agent concludes that the aggregation router upgrade caused the KPI dips, that conclusion is anchored in graph traversal, not inference.
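The deterministic layer can be sketched in a few lines. This is a toy reconstruction under stated assumptions: the edge map, event records, element names, and the 30-minute lag window are all hypothetical, and a real NKG/TKG would be queried through a graph database rather than Python dicts. The structure of the check is the point: a causal claim holds only if the change precedes the symptom within the window and the symptom's element is topologically downstream.

```python
from datetime import datetime, timedelta

# Illustrative topology edges (NKG) and timestamped events (TKG).
NKG_EDGES = {"Agg-Router-1": ["Service-X", "Service-Y"], "Service-X": ["Customer-42"]}
TKG_EVENTS = [
    {"element": "Agg-Router-1", "type": "mop_complete", "t": datetime(2024, 1, 1, 0, 5)},
    {"element": "Service-X", "type": "kpi_dip", "t": datetime(2024, 1, 1, 0, 15)},
]

def reachable(graph, src, dst):
    """Deterministic NKG traversal: is dst topologically downstream of src?"""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return False

def causal_path(change, symptom, max_lag=timedelta(minutes=30)):
    """The symptom is linked to the change only if the change precedes it
    within the lag window AND the affected element depends on the changed one."""
    lag = symptom["t"] - change["t"]
    return (timedelta(0) < lag <= max_lag
            and reachable(NKG_EDGES, change["element"], symptom["element"]))

verified = causal_path(TKG_EVENTS[0], TKG_EVENTS[1])
```

Both conditions are boolean facts established by traversal and timestamp comparison, which is why the output can be presented as verifiable proof rather than a confidence score.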

Purpose-Built Network Models - the Domain Intelligence Layer: General-purpose AI models do not understand how telecom networks behave. This third layer adds domain-specific intelligence: topology-aware graph models that understand network dependencies, temporal models trained on historical KPI patterns to distinguish genuine drift from normal variation, forecasting models for early degradation prediction, and edge-efficient SLMs optimised for interpreting vendor-specific logs and standard operating procedures. Together, these give the agent the operational fluency needed to make decisions that a generic AI system simply could not.

Agent Archetypes: Two Examples

The following table illustrates how this architecture plays out in two specific agents. These are not hypothetical - both were in action during the midnight scenario in Part 3.

The Trust Layer: Governance Is Not a Constraint - It Is the Enabler

There is a common assumption that governance and autonomy are in tension - that the more you constrain an agent, the less autonomous it becomes. In practice, the opposite is true. Without a robust trust layer, no operator would deploy autonomous agents on a live production network. The trust layer is precisely what makes autonomy possible at scale.

In a telecom environment, the stakes make this non-negotiable. An agent acting outside its boundaries - even with good intent - could affect millions of customers, violate SLA commitments, or trigger cascading failures across domains. The Trust Layer prevents this through four mechanisms:

Deterministic Guardrails: An agent cannot execute a command that has not been pre-validated. It operates within a verified toolset with hard boundaries on what parameters it can modify. Even if the reasoning engine concludes that an action is necessary, it cannot be taken unless it falls within the pre-approved operational envelope.
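A guardrail of this kind might be enforced by a wrapper like the following sketch. The tool names and parameter ranges are invented for illustration; the mechanism they demonstrate is that execution is refused, not merely warned about, whenever a tool is unregistered or a parameter leaves its approved envelope.

```python
# Hypothetical registry of pre-validated tools and their approved parameter ranges.
APPROVED_TOOLS = {
    "adjust_qos_weight": {"weight": (1, 10)},
    "restart_line_card": {"slot": (0, 15)},
}

class GuardrailViolation(Exception):
    pass

def execute(tool, params, registry=APPROVED_TOOLS):
    """Run a tool only if it is pre-validated and every parameter is in bounds."""
    if tool not in registry:
        raise GuardrailViolation(f"{tool} is not a pre-validated tool")
    for name, value in params.items():
        lo, hi = registry[tool].get(name, (None, None))
        if lo is None or not (lo <= value <= hi):
            raise GuardrailViolation(f"{name}={value} is outside the approved envelope")
    return f"executed {tool}"

result = execute("adjust_qos_weight", {"weight": 4})   # within the envelope
try:
    execute("adjust_qos_weight", {"weight": 50})        # reasoning wanted it; guardrail refuses
    blocked = False
except GuardrailViolation:
    blocked = True
```

Note that the check sits outside the reasoning engine entirely: however confident the plan, the second call never reaches the network.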

Semantic Traceability: Every step of the agent's reasoning path is recorded and anchored in the NKG and TKG. When the agent reaches a conclusion, it produces a reasoning trace that shows exactly which node, timestamp, and graph relationship led there. A human engineer can follow this trace and either validate or challenge the conclusion - without needing to reverse-engineer what the AI did.
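A reasoning trace can be as simple as an ordered list of evidence records, each tied to a graph node, timestamp, and relationship. The record shape and relationship names below are hypothetical; the idea is that an engineer replays the trace rather than reverse-engineering the model.

```python
# Sketch of a reasoning trace anchored in graph evidence (illustrative fields).
trace = []

def record(step, node, timestamp, relationship, finding):
    trace.append({"step": step, "node": node, "t": timestamp,
                  "relationship": relationship, "finding": finding})

record(1, "Agg-Router-1", "00:05", "MOP_APPLIED_TO", "software upgrade completed")
record(2, "Service-X", "00:15", "DEPENDS_ON -> Agg-Router-1", "latency KPI dip")

def replay(trace):
    """Human-readable view an engineer can validate or challenge step by step."""
    return [f"[{r['t']}] step {r['step']}: {r['node']} ({r['relationship']}): {r['finding']}"
            for r in trace]

lines = replay(trace)
```

Because each line names the exact node and relationship consulted, challenging the conclusion means challenging a specific graph fact, not a black-box inference.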

Policy Guardrail: There is a strict boundary between the agent's intelligence and the network's control plane. An agent can request an action, but that request must pass through a policy engine that validates it against current maintenance windows, customer SLAs, and operational policy. Only after validation does the action proceed.
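The policy check might look like the following sketch. The maintenance window, SLA tiers, and customer names are assumptions made up for the example; what it shows is that the agent's request is an input to the policy engine, never a command to the control plane.

```python
from datetime import datetime, time

MAINTENANCE_WINDOW = (time(0, 0), time(4, 0))   # assumed 00:00-04:00 window
PROTECTED_SLAS = {"Customer-42": "platinum"}    # assumed SLA register

def authorise(request, now):
    """Validate a requested action against the window and SLA policy."""
    start, end = MAINTENANCE_WINDOW
    in_window = start <= now.time() <= end
    touches_protected = any(c in PROTECTED_SLAS for c in request["impacted_customers"])
    if touches_protected and not in_window:
        return {"approved": False, "reason": "protected SLA outside maintenance window"}
    return {"approved": True, "reason": "within policy"}

night = authorise({"impacted_customers": ["Customer-42"]}, datetime(2024, 1, 1, 1, 30))
day = authorise({"impacted_customers": ["Customer-42"]}, datetime(2024, 1, 1, 14, 0))
```

The same request succeeds at 01:30 and is refused at 14:00; the agent's intelligence never changed, only the policy context did.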

Immutable Audit Trails and Bias Testing: Every agent decision is logged in a chain-of-thought record that cannot be altered. Beyond logging, the system undergoes regular bias testing - intentionally manipulating input data to verify that guardrails remain effective and that agents do not develop systematic blind spots. This is increasingly important as regulators turn attention to AI accountability in critical infrastructure.
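Immutability here is typically achieved by hash-chaining: each record embeds the hash of its predecessor, so any retroactive edit breaks every hash downstream. The sketch below shows the mechanism with Python's standard library; the decision strings are illustrative.

```python
import hashlib
import json

def append(chain, decision):
    """Append a record whose hash covers both the decision and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"decision": decision, "prev": prev_hash}, sort_keys=True)
    chain.append({"decision": decision, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        payload = json.dumps({"decision": rec["decision"], "prev": prev}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

chain = []
append(chain, "escalate KPI drift on Router-A")
append(chain, "link drift to recent aggregation-router upgrade")
intact = verify(chain)
chain[0]["decision"] = "tampered"        # simulate an after-the-fact edit
tamper_detected = not verify(chain)
```

In production this chain would be written to append-only storage, but the detection property is the same: altering any past decision invalidates the whole record from that point forward.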

Conclusion

The dual-core architecture - Intelligence Core and Trust Layer working together - is what separates a genuine autonomous agent from a sophisticated automation script. The Intelligence Core gives the agent the ability to handle novel situations, interpret unstructured operational data, and construct plans rather than follow procedures. The Trust Layer ensures that this intelligence operates within boundaries that operators can audit, validate, and trust.

In the midnight scenario from Part 3, it was this combination that allowed three agents to trace a causal chain across domains, resolve identity gaps, and deliver a verified recommendation to an engineer - all within minutes of the initial KPI drift. The intelligence found the answer. The trust layer made the answer trustworthy.

In this part, we have covered:

The dual-core design of autonomous agents - Intelligence Core and Trust Layer

The four-stage reasoning loop: perception, planning, reasoning, and tool use

Why three AI paradigms are needed and what each contributes

Two agent archetypes showing how the architecture plays out in practice

The four mechanisms of the Trust Layer and why governance enables rather than constrains autonomy

Looking Ahead: Part 5 – Scaling Trust with Agentic Ops

A single agent operating correctly is a proof of concept. A workforce of agents operating correctly across a Tier-1 network is an operational reality that requires its own discipline. In Part 5, we will look at how to scale autonomous operations - managing agent identity, lifecycle, observability, and continuous improvement across a network-wide deployment.

About the Authors

Balakrishnan K
General Manager and Senior Practice Partner, Autonomous Network, Wipro Engineering

Balakrishnan K heads the Autonomous Network practice at Wipro Engineering. He focuses on enabling clients across multiple industries to advance their network operations strategy and digital-transformation journeys.

Ravi Kumar Emani
Vice President and Practice Head, Connectivity, Wipro Engineering

Ravi has more than 25 years of experience helping global enterprises realize their connectivity goals. He is currently responsible for the Connectivity Practice Unit for NEPS and the Communications portfolio for Wipro Engineering. Ravi has authored numerous articles on 5G and is a Distinguished Member of the Technical Staff (DMTS) at Wipro.