NO IDENTITY.
NO BOUNDARIES.
NO KILL SWITCH.
Your AI agents are running on borrowed trust. Fidea is the infrastructure to fix that.
CHAPTER 01
THE AGENTIC IDENTITY CRISIS
How we gave machines human credentials and hoped for the best.
When the first AI agents needed to call an API, engineers did what they always do: they grabbed an API key from the environment, or borrowed an OAuth token from the user who triggered the workflow. It worked. Nobody thought twice.
That was the original sin of agent security.
Every pattern that followed — shared service accounts, long-lived tokens stored in agent configs, human credentials passed to non-human processes — inherited this assumption: that the identity model built for people who log in once and work for eight hours was sufficient for machines that operate continuously, spawn children, and make decisions autonomously.
It isn't.
CREDENTIAL SPRAWL
A single AI agent holds 37 API keys, none of which will expire for another 90 days. It was created for a one-hour task three months ago.
INVISIBLE DELEGATION
Agent A calls Agent B, which calls Agent C, which accesses your payment system. Nobody approved that chain. Nobody can see it.
NO KILL SWITCH
An agent is compromised. You revoke its token. But the 12 sub-agents it spawned still have active credentials.
In 2025, millions of AI agent instances accessed production APIs on any given day.
Only a small fraction had dedicated agent credentials.
Source: Internal research estimates, 2025
CHAPTER 02
WHY TRADITIONAL IAM FAILS
The gap between human identity and machine trust is not a feature request. It is a design flaw.
| | HUMAN USER | AI AGENT |
|---|---|---|
| Authentication | Once, at login | Continuous, per-action |
| Session duration | Hours | Milliseconds to weeks |
| Delegation model | Explicit consent | Automatic, chain-based |
| Credential lifecycle | Long-lived, renewable | Should be ephemeral, task-scoped |
| Identity source | Corporate directory | Created programmatically |
| Revocation speed needed | Hours (acceptable) | Milliseconds (critical) |
| Audit granularity | Session-level | Action-level |
| Spawn behavior | Never | Frequently, recursively |
SCENARIO 01: THE RECURSIVE DELEGATION
A customer support agent uses Claude to draft a response. Claude determines it needs to check the order status, so it calls a fulfillment agent. The fulfillment agent needs shipping data, so it calls a logistics API through a credential proxy.
Three levels of delegation. The original human user's OAuth token has been passed — or worse, copied — through each layer. The logistics API sees a "human" user making a request. It has no idea an AI agent is involved. There is no way to revoke the agent's access without revoking the human's access. There is no audit trail of the delegation chain.
Traditional IAM was never designed for this. It has no concept of delegation depth, sub-agent identity, or chain-scoped credentials.
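In code, the anti-pattern is almost invisible. The sketch below is deliberately bad Python, with hypothetical names and token values throughout: one human OAuth token is copied through every hop, so the final API cannot tell the agent chain from the person.

```python
# Deliberately bad, for illustration only. Every name here is hypothetical.
# One human OAuth token is copied through three layers of delegation.

HUMAN_OAUTH_TOKEN = "ya29.alice-session"  # issued to a person, not an agent

def support_agent(token: str) -> str:
    # Layer 1: drafts a reply, needs order status, so it delegates down...
    return fulfillment_agent(token)

def fulfillment_agent(token: str) -> str:
    # Layer 2: needs shipping data, so it delegates again...
    return logistics_api(token)

def logistics_api(token: str) -> str:
    # Layer 3: sees only a "human" bearer token. The delegation depth is invisible.
    return f"200 OK for bearer {token!r} (caller assumed to be a person)"

print(support_agent(HUMAN_OAUTH_TOKEN))
# Revoking the chain's access means revoking Alice's token, and nothing
# recorded that three autonomous processes used it.
```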
SCENARIO 02: THE CREDENTIAL HOARDER
A company deploys an AI coding assistant with access to their GitHub repos, Jira instance, Confluence wiki, Slack workspace, and AWS console. The assistant needs broad access to be useful.
Six months later, the assistant has accumulated 43 active OAuth tokens across 12 services. Some were granted for one-off tasks that completed months ago. The assistant's service account has more production access than any individual engineer. Nobody tracks which tokens are still in use. Nobody can enumerate them without checking each service individually.
This is not a configuration problem. No IAM system built for humans tracks machine credential accumulation across services at per-task granularity.
SCENARIO 03: THE PHANTOM AGENT
An engineer spins up a test agent on Friday afternoon. It authenticates using a shared service account, runs some experiments, and the engineer goes home. The agent is still running. It spawns two sub-agents to parallelize a task. One of those sub-agents discovers it needs database access and requests credentials from a secrets manager — using the parent's service account identity.
By Monday, there are seven active agent instances, all operating under the same identity, all with database access, none of which the engineer intended to create. Traditional IAM sees one authenticated entity. Reality is seven autonomous processes with unbounded scope.
“You cannot solve an architectural problem with a configuration change. Agent identity requires new primitives, not new policies on old primitives.”
CHAPTER 03
THE TRUST LAYER
Four primitives. One infrastructure. Zero inherited credentials.
01
IDENTITY
Every agent gets a cryptographic identity. Not a borrowed human credential. Not a shared service account. Its own identity, issued at creation, revocable in milliseconds.
02
SCOPING
Credentials are scoped to exactly the task, the API, and the time window required. Nothing more. A token issued for "read order #4521" cannot access order #4522. Cannot be escalated. Cannot be reused.
03
DELEGATION
When Agent A spawns Agent B, the delegation is explicit, recorded, and scoped. B inherits a subset of A's permissions, never the full set. The chain is auditable to any depth.
04
REVOCATION
Any credential, any agent, any delegation chain — revoked in milliseconds. Not hours. Not "after the token expires." Now. The kill switch works at machine speed because agents operate at machine speed.
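To make the four primitives concrete before looking at how they compose, here is a minimal sketch in plain Python. This is not the Fidea SDK; every class, field, and scope string below is an assumption used only to illustrate the model.

```python
# A minimal stdlib sketch of the four primitives. Not the Fidea SDK.
import secrets
import time
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:  # 01 IDENTITY: issued at creation, never borrowed
    agent_id: str = field(default_factory=lambda: f"agt_{secrets.token_hex(8)}")

@dataclass
class ScopedToken:  # 02 SCOPING: exactly one task, API, and time window
    holder: AgentIdentity
    scope: frozenset
    expires_at: float
    revoked: bool = False

    def allows(self, action: str) -> bool:
        return not self.revoked and time.time() < self.expires_at and action in self.scope

def delegate(parent: ScopedToken, child: AgentIdentity, subset: frozenset) -> ScopedToken:
    # 03 DELEGATION: explicit, recorded, and strictly narrowing
    assert subset <= parent.scope, "a child may only receive a subset of the parent's scope"
    return ScopedToken(holder=child, scope=subset, expires_at=parent.expires_at)

parent = ScopedToken(
    holder=AgentIdentity(),
    scope=frozenset({"orders:read:#4521", "orders:write:#4521"}),
    expires_at=time.time() + 60,
)
child = delegate(parent, AgentIdentity(), frozenset({"orders:read:#4521"}))
print(child.allows("orders:read:#4521"))  # True: in scope, inside the window
print(child.allows("orders:read:#4522"))  # False: scoped to exactly one order
child.revoked = True                      # 04 REVOCATION: takes effect immediately
print(child.allows("orders:read:#4521"))  # False
```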
How they compose
Identity: Agent receives unique ID
Scoping: Token scoped to task + API + time
Rules: Deny by default, allow by policy
Delegation: Sub-agents receive scoped tokens, never the raw credential
Audit: The Audit Service observes every step, recording a cryptographic chain
THE CRITICAL INSIGHT
Your agents never see raw credentials. Ever.
The credential proxy sits between agents and the APIs they access. When an agent needs to call Stripe, it doesn't receive a Stripe API key. It receives a scoped, ephemeral Fidea token. The proxy translates that token into the real credential at the gateway. The agent never knows the real key exists.
This means a compromised agent has nothing to steal. There is no credential to exfiltrate, no key to copy, no secret to leak. The blast radius of any agent compromise is bounded by the scope and lifetime of its Fidea token.
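In miniature, and with entirely hypothetical names and key values, the gateway swap looks like this: the agent holds only an opaque token, and the real key is attached inside the proxy after the scope check.

```python
# Sketch of the proxy idea with hypothetical names and keys. The real
# credential lives only in the proxy; the agent never receives it.
REAL_KEYS = {"stripe": "sk_live_<real-secret-never-leaves-the-proxy>"}
ISSUED = {"fid_tok_abc": {"api": "stripe", "scope": "charges:read"}}  # opaque, ephemeral

def proxy_call(opaque_token: str, api: str, action: str) -> str:
    grant = ISSUED.get(opaque_token)
    if grant is None or grant["api"] != api or grant["scope"] != action:
        return "403: token missing or out of scope"
    real_key = REAL_KEYS[api]  # the swap happens here, at the gateway
    return f"calling {api}/{action} with {real_key[:8]}... on the agent's behalf"

# The agent's entire view of the world is the opaque token:
print(proxy_call("fid_tok_abc", "stripe", "charges:read"))   # allowed
print(proxy_call("fid_tok_abc", "stripe", "charges:write"))  # denied: nothing to steal
```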
This is not a feature. It is the architecture.
Every day, millions of AI agents are granted access to production databases, financial APIs, customer records, and infrastructure controls. They operate at machine speed, make thousands of privileged calls per minute, and delegate authority to sub-agents that no human ever approved.
They are using your employees' credentials. They have no identity of their own. And when something goes wrong, there is no kill switch that works fast enough.
This is the agentic identity crisis. And Fidea exists to solve it.
THE PLATFORM
Authentication is where we started.
Trust infrastructure is where we're going.
Fidea's platform is a suite of purpose-built services for managing the complete lifecycle of agent trust — from identity issuance to credential management to real-time security monitoring.
AVAILABLE NOW
Agent Registry
Identity, metadata, lifecycle management, instant kill switch.
Token Issuer
OAuth 2.1 + PKCE, opaque tokens, on-behalf-of delegation, task-scoped.
Credential Proxy
Gateway-mode enforcement. Agents never see raw credentials.
Policy Engine
Embedded rules. Deny by default. Scope-aware. Time-windowed access.
Audit Service
Append-only cryptographic chain. Tamper-evident. Complete delegation trail.
Python SDK + CLI
pip install fidea. Three lines to authenticate. Full API coverage.
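The SDK's exact surface is not shown on this page, so the snippet below is only a guess at what those three lines might look like; fidea.Client, register_agent, and issue_token are assumptions, not documented API.

```python
import fidea  # pip install fidea

# Hypothetical names throughout: the documented SDK surface may differ.
client = fidea.Client()                          # e.g. reads FIDEA_API_KEY from the environment
agent = client.register_agent(name="order-bot")  # identity issued at creation
token = client.issue_token(agent, scope="orders:read", ttl_seconds=300)  # task-scoped, ephemeral
```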
COMING NEXT
Agent SOC
Security Operations Center for AI agents. Real-time anomaly detection, behavior baselines, threat response.
COMING 2026
Agent DAST
Dynamic security testing for agent workflows. Automated discovery of permission escalation paths and credential exposure.
COMING 2026
Federation
Cross-organization agent trust. Your agent calls my agent — with verifiable identity and scoped delegation.
EXPLORING
We started with authentication because identity is the foundation of trust. You cannot monitor what you cannot identify. You cannot test what you cannot scope. You cannot federate what you cannot authenticate.
Every product we build sits on top of the identity and credential infrastructure we have already deployed. The Agent SOC uses the same audit chain. DAST uses the same delegation model. Federation uses the same cryptographic identity.
This is not a product roadmap. It is a logical consequence.
FOR BUILDERS
From zero to authenticated agent in five minutes.
SDK Reference
Full Python SDK docs. Types, methods, error handling.
View SDK docs →
CLI Tool
fidea-cli for local dev and CI/CD integration. Bash-friendly.
Install CLI →
API Docs
OpenAPI 3.1 spec. Every endpoint, every parameter, every error.
Browse API →
Architecture Guide
How the components fit together. Sequence diagrams.
Read guide →
Example Agents
Six example agents, from simple to multi-delegation chains. Copy and modify.
See examples →
GitHub
MIT license. Contribute, fork, or just read the source.
Star on GitHub →
DEEP DIVES
For those who want detail
THE VISION
NOW
- Authentication
- Identity
- Credential Proxy
- Audit Chain
NEXT
- Security Operations
- Dynamic Testing
- Behavior Monitoring
- Compliance Reporting
FUTURE
- Federation
- Intent-Based Access
- Cross-Org Trust
- Agent Marketplace Trust
HORIZON 1: TRUST AT THE IDENTITY LAYER
Today, Fidea ensures every agent has its own identity, scoped credentials, and an audit trail. This is table stakes — the minimum viable trust.
HORIZON 2: TRUST AT THE BEHAVIOR LAYER
Tomorrow, Fidea monitors agent behavior in real-time, tests workflows for security vulnerabilities before deployment, and generates compliance evidence automatically. The Agent SOC and Agent DAST make trust active, not passive.
HORIZON 3: TRUST AT THE NETWORK LAYER
In the future, agents from different organizations will need to trust each other. Fidea's federated identity protocol enables cross-org delegation with verifiable credentials. Your agent calls my agent, and both sides can prove identity, scope, and intent.
This is the TCP/IP of agent trust.
The agent economy is not coming. It is here.
The question is not whether AI agents will access your most sensitive systems. They already do. The question is whether that access is governed by infrastructure designed for agents — or by patches on systems designed for people who type passwords.
We are building the trust layer. Join us.
COMPANY
We started building Fidea because we saw the same pattern at every company deploying AI agents: brilliant engineers hard-coding API keys into agent configs and hoping nobody noticed.
We knew this would not scale. We knew the incident was coming. So we built the infrastructure to prevent it.
Founded by engineers who have seen what happens when agents operate without identity.