Agent Delegation Chains: The Authorization Lineage Nobody's Tracking
A compliance officer reviews an audit trail. They see a decision — a credit application rejected by an AI system. They trace it back: Agent A received the request. Agent A spawned Agent B to retrieve credit history. Agent B spawned Agent C to call an external scoring API. Agent C returned a score. Agent A made the decision.
The compliance officer asks: "Who authorized Agent C to call the external scoring API?"
The system owner checks the authorization records. Agent A was approved. Agent B? Agent C? Neither was ever explicitly reviewed.
Agent C acted under Agent B's delegation, which acted under Agent A's, which in turn operated under the system's production approval. But the chain of explicit human authorization stops at Agent A.
This is the delegation authorization gap. Under EU AI Act Article 14, it's a compliance liability.
How Delegation Creates Unauthorized Agents
In standard authorization models, a human approves a system for a specific function. The approval covers what the reviewer can see: the orchestrator, its tools, its stated behavior.
What most approval processes don't cover: every agent the approved orchestrator will spawn at runtime.
Modern agentic frameworks spawn sub-agents dynamically:
- Claude Code spawns web-search agents and file-editor agents on demand
- LangChain orchestrators spawn retrieval agents based on query complexity
- CrewAI crews spin up specialized workers for each task step
- A2A-protocol-compliant agents delegate subtasks to registered workers
In each case, the human reviewer saw the top-level system. Sub-agents emerge from runtime decisions — based on inputs, context, and the orchestrator's own logic. The humans who approved the system never explicitly reviewed those sub-agents.
The EU AI Act Accountability Problem
EU AI Act Article 14 requires that high-risk AI systems be designed so they can be effectively overseen by natural persons, i.e. meaningful human oversight. Article 9 requires risk management across the full operational lifetime of the system.
Both requirements break at delegation boundaries.
When an orchestrator spawns a sub-agent:
- The authorization chain is implicit: the sub-agent operates under the parent's authorization, but this delegation was never explicitly documented or reviewed
- The risk profile shifts: the sub-agent may access data sources, call APIs, or make decisions that weren't in scope when the parent was approved
- The accountability gap opens: if the sub-agent causes harm, the original authorization doesn't cover the specific action
Consider three patterns that appear in production systems today:
Scope drift via delegation
Agent A is approved to "analyze customer support tickets." At runtime, it spawns Agent B to retrieve customer purchase history and Agent C to query the refund eligibility API. Neither retrieval was in the original approval scope. Both execute under that approval.
Data access expansion
Agent A has access to customer tier data. Agent B, spawned by Agent A, inherits that access implicitly and sends it to an external summarization service. The data governance review never covered this transfer.
Tool invocation depth
Agent A is approved to use three tools. Agent C — three levels deep in the delegation chain — invokes tools the human approver never knew were reachable from the original authorization.
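A quick way to see the depth problem: the tools actually reachable from an approval are the transitive closure over every agent the approved agent can spawn. Here is a minimal Python sketch; all agent and tool names are hypothetical illustrations:

```python
# Hypothetical delegation graph: which agents each agent can spawn,
# and which tools each agent holds directly.
SPAWNS = {"agent-a": ["agent-b"], "agent-b": ["agent-c"], "agent-c": []}
TOOLS = {
    "agent-a": {"ticket_db.read", "crm.read", "email.send"},  # the 3 reviewed tools
    "agent-b": {"purchase_history.read"},
    "agent-c": {"refund_api.check", "payment_api.charge"},
}

def reachable_tools(agent):
    """Collect every tool reachable from `agent` via any delegation path."""
    seen, stack, tools = set(), [agent], set()
    while stack:
        a = stack.pop()
        if a in seen:
            continue
        seen.add(a)
        tools |= TOOLS[a]
        stack.extend(SPAWNS[a])
    return tools

print(reachable_tools("agent-a"))
# Approving agent-a exposes all six tools, not the three that were reviewed.
```

The reviewer approved three tools; the delegation graph exposes six, including a charge capability two hops away from the approved agent.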
In each case, logs show the actions. They don't show the authorization gap.
Why Logs Don't Solve This
The standard response is: "We have full execution logs."
Logs record what happened. They don't record whether what happened was explicitly authorized.
A log entry showing Agent C called payment_api.charge() tells you the call was made. It doesn't tell you:
- Whether anyone reviewed Agent C before it could make that call
- What delegation scope was passed from Agent A to Agent C
- Whether the charge amount was within the autonomy budget granted at the original authorization level
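To make the distinction concrete, here is a minimal sketch contrasting a conventional log entry with the lineage record an auditor would need. Every field name below is a hypothetical illustration, not a schema from any particular framework:

```python
log_entry = {
    "timestamp": "2026-03-14T10:22:07Z",
    "agent": "agent-c",
    "action": "payment_api.charge",
    "args": {"amount": 4200},
}
# Proves the call happened. Says nothing about authorization.

lineage_record = {
    **log_entry,
    "delegation_scope": ["read:credit_history"],  # what Agent A actually granted
    "approval_ref": "APPR-117",                   # the human approval document
    "approved_by": "j.doe@example.com",           # which human reviewer
    "autonomy_budget": {"max_charge": 0},         # limit set at authorization time
    "within_scope": False,                        # the charge was never delegated
}
# Answers all three audit questions above.
```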
This distinction matters to EU AI Act auditors. Article 9(2)(a) requires that the risk management system identify and analyse the "known and the reasonably foreseeable risks" a high-risk system can pose across its full operation. Implicit delegation chains are foreseeable risks. Not documenting them is a gap auditors will find.
The audit question isn't "did the agent make this call?" — logs answer that. The audit question is "was this specific call within the scope of what was authorized by a human reviewer?" — logs don't answer that.
What Authorization Lineage Proof Requires
Closing the delegation gap requires proof at three levels:
Spawn event attestation
Every time an agent spawns a sub-agent, there must be a record of: parent agent identity, delegation scope granted, tools and data access transferred, and the runtime context that triggered the delegation. The record must be cryptographically bound so it can't be altered after the fact.
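A minimal sketch of what such a record could look like, using an HMAC for illustration; a production system would more likely use asymmetric signatures bound to the parent's authorization certificate. All names are hypothetical:

```python
import hashlib
import hmac
import json
import time

ATTESTATION_KEY = b"demo-signing-key"  # stand-in for a managed signing key

def attest_spawn(parent_id, child_id, scope, tools, context):
    record = {
        "parent": parent_id,      # parent agent identity
        "child": child_id,        # sub-agent being spawned
        "scope": sorted(scope),   # delegation scope granted
        "tools": sorted(tools),   # tools and data access transferred
        "context": context,       # runtime context that triggered the delegation
        "ts": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # Cryptographic binding: altering the record after the fact
    # invalidates the signature.
    record["sig"] = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return record

attestation = attest_spawn("agent-a", "agent-b",
                           scope=["read:credit_history"],
                           tools=["credit_api.get_history"],
                           context={"trigger": "ticket-8841"})
```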
Scope inheritance proof
Each sub-agent's authorization must trace back to a human approval. Not just "authorized by parent" — but which human, which approval document, and what scope was explicitly included in that approval.
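One way to make that traceability checkable, sketched under the assumption that spawn attestations and human approvals are queryable by agent identity. The structures and names below are hypothetical illustrations:

```python
def trace_to_approval(agent_id, delegations, approvals):
    """Walk from agent_id up the delegation chain to a human approval.

    delegations: child_id -> {"parent": id, "scope": [...]}  (spawn attestations)
    approvals:   agent_id -> {"approved_by": ..., "doc": ..., "scope": [...]}
    """
    chain, current = [], agent_id
    while current not in approvals:
        record = delegations[current]   # KeyError here means no lineage at all
        chain.append(record)
        current = record["parent"]
    approval = approvals[current]
    granted = set(approval["scope"])
    for record in reversed(chain):      # root-most delegation first
        if not set(record["scope"]) <= granted:
            raise PermissionError(f"{record} exceeds inherited scope")
        granted = set(record["scope"])  # scope can only narrow down the chain
    return approval, chain

# Agent C traces back through B and A to approval APPR-117, or fails loudly.
approval, chain = trace_to_approval(
    "agent-c",
    delegations={"agent-b": {"parent": "agent-a", "scope": ["read:credit_history"]},
                 "agent-c": {"parent": "agent-b", "scope": ["read:credit_history"]}},
    approvals={"agent-a": {"approved_by": "j.doe", "doc": "APPR-117",
                           "scope": ["read:credit_history", "read:tickets"]}},
)
```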
Boundary enforcement proof
Sub-agents must operate within the scope they were delegated. Any action outside that scope must trigger an escalation. That escalation event must be recorded with proof that a human reviewed and authorized the out-of-scope action before it executed.
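A minimal sketch of that enforcement point, assuming a blocking escalate() callback that returns a human reviewer's decision before the action can run. All names are hypothetical:

```python
audit_trail = []  # append-only record of escalation events

def enforce(action, delegated_scope, execute, escalate):
    """Execute action only if in scope; otherwise escalate before running."""
    if action["permission"] in delegated_scope:
        return execute(action)
    decision = escalate(action)  # blocks until a human reviews the request
    audit_trail.append({"action": action, "decision": decision})
    if decision.get("approved"):
        return execute(action)   # out-of-scope action ran only after sign-off
    raise PermissionError(f"{action['permission']} outside delegated scope")

# Example: Agent C attempts a charge it was never delegated.
enforce({"permission": "payment_api.charge", "amount": 4200},
        delegated_scope={"credit_api.get_score"},
        execute=lambda a: print("ran", a),
        escalate=lambda a: {"approved": False, "reviewer": "j.doe"})
# Raises PermissionError; the escalation decision is already on the audit trail.
```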
Without these three levels, your authorization model is implicit. EU AI Act compliance requires explicit, auditable proof.
The Practical Gap in Multi-Agent Frameworks
Most multi-agent frameworks have no concept of delegation attestation.
LangChain's LCEL has execution traces. They don't capture authorization scope inheritance.
CrewAI logs agent actions. It doesn't link each action to the human approval that governs it.
MCP's specification defines tool invocation. It has no mechanism for delegation scope proof.
This isn't a failure of these frameworks — they were designed for capability, not compliance lineage. The gap becomes critical when agentic systems move from internal tools to customer-facing applications in regulated sectors. Financial services, healthcare, legal — all require documented authorization for consequential decisions. When those decisions arrive via a three-level delegation chain, the authorization documentation is empty.
How Independent Verification Addresses Delegation Lineage
The structural fix requires capturing delegation events at the boundary level, independent of the agent framework.
When Agent A spawns Agent B, an independent verification layer records the delegation: parent identity, child identity, scope granted, invocation context, timestamp. This record is cryptographically bound to the parent's authorization certificate.
When Agent B makes a decision or calls a tool, that action links to its delegation attestation, which chains back to the original human authorization.
The result: a complete authorization lineage from human approval to terminal action. Not a log (recording what happened), but a proof chain (proving every action was within explicitly authorized scope).
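The difference is structural. A sketch of the proof-chain idea: each record embeds the hash of the record it derives its authority from, so a terminal action verifiably links back to the human approval. This is a simplified illustration, not Trust Layer's actual format:

```python
import hashlib
import json

def chain_record(payload, prev_hash):
    """Create a record whose hash covers both its payload and its parent."""
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    return {"payload": payload, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

approval = chain_record({"approved_by": "j.doe", "doc": "APPR-117",
                         "scope": ["read:credit_history"]}, prev_hash=None)
delegation = chain_record({"parent": "agent-a", "child": "agent-b",
                           "scope": ["read:credit_history"]}, approval["hash"])
action = chain_record({"agent": "agent-b", "call": "credit_api.get_history"},
                      delegation["hash"])

def verify(record, store):
    # Walk prev-hashes back to the root approval, recomputing each hash.
    while record is not None:
        body = json.dumps({"payload": record["payload"], "prev": record["prev"]},
                          sort_keys=True)
        assert hashlib.sha256(body.encode()).hexdigest() == record["hash"]
        record = store.get(record["prev"])

store = {r["hash"]: r for r in (approval, delegation, action)}
verify(action, store)  # passes; tampering with any record breaks the chain
```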
For EU AI Act Article 14 compliance, this transforms "we approved the orchestrator" into "we approved a system with bounded delegation, and every sub-agent action was within those bounds — here's the cryptographic proof."
Trust Layer's proxy architecture sits outside the agent framework, which means delegation events are captured regardless of which framework the orchestrator uses. The attestation is generated at invocation time, not reconstructed from logs after the fact.
What This Means for Teams Building Multi-Agent Systems Now
The August 2026 EU AI Act deadline for high-risk AI systems is approaching. Multi-agent systems that make consequential decisions — in lending, healthcare triage, insurance underwriting, HR screening — are in scope.
If your system spawns sub-agents, your current authorization model likely doesn't cover the delegation chain. Retrofitting authorization lineage into a running system is harder than building it in. The review question to ask now:
Can you produce proof that every agent in your delegation chain was operating within explicitly authorized scope?
If the answer is "we have logs," that's not sufficient for compliance with Articles 9 and 14. Logs prove events. Proof chains prove authorization.
The orchestrator's approval doesn't transfer to the sub-agents it spawns. Each delegation boundary is a compliance boundary. Without attestation at those boundaries, the accountability gap belongs to the system operator.