The OpenID Foundation (OIDF) has released a detailed whitepaper addressing the emerging challenges in authentication, authorization, and identity management brought about by the growing presence of AI agents. The whitepaper, entitled “Identity Management for Agentic AI: The new frontier of authorization, authentication, and security for an AI agent world,” is set to become publicly available at 9 am (ET) on Tuesday, October 7. It was assembled by the OpenID Foundation's Artificial Intelligence Identity Management Community Group (AIIMCG), which brings together global experts in AI and identity systems.
Unveiling Impending Identity Challenges
Aimed at developers, architects, standards bodies, and enterprises, the whitepaper provides critical insight into the intersection of AI agents and access management, along with strategic direction for addressing the identity challenges ahead. AI agents, as the document defines them, are AI systems that autonomously take actions and make decisions to achieve goals, adapting to new situations through reasoning rather than following fixed rules, and it is this autonomy that introduces new security challenges.
The whitepaper highlights that while existing security frameworks manage basic AI agent scenarios like accessing company tools, these systems falter as AI agents begin to operate across different businesses, act independently, or manage complex permissions among multiple users. This exposes significant security vulnerabilities.
Key Future Challenges
The research outlines numerous forthcoming challenges that demand urgent attention from developers, standards organizations, and enterprises. Among these are:
- Agent Identity Fragmentation: In the absence of common standards, companies are building separate identity systems, making development harder and less secure.
- User Impersonation vs Delegated Authority: AI agents look like regular users, making it difficult to tell who actually did what. Clear "acting on behalf of" mechanisms are needed (see the sketch after this list).
- Scalability Issues in Oversight: Users face a barrage of permission requests and are likely to resort to blanket approval, compromising security.
- Recursive Delegation Risks: When agents create other agents or delegate tasks to them, complex permission chains form without clear limits.
- Multi-User Agent Limitations: Current systems are tailored for individual users, not for agents catering to multiple users with distinct permissions.
- Automated Verification Gaps: Systems are needed that can automatically verify agent actions without requiring constant human supervision.
- Challenges with Agents Controlling Browsers: Agents that control screens and browsers bypass normal security checks, potentially forcing drastic measures such as internet lockdowns.
- Multi-Facet Agent Identity: Agents alternate between independent action and user-delegated tasks, a dual nature current systems fail to track effectively.
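One existing building block for the "acting on behalf of" problem is the `act` (actor) claim defined by OAuth 2.0 Token Exchange (RFC 8693), which records which party is acting for whom. The Python sketch below is purely illustrative and is not something the whitepaper prescribes; the claim values and the audit helper are hypothetical, and a real deployment would cryptographically validate the token before reading its claims.

```python
# Illustrative sketch: representing delegated authority with the OAuth 2.0
# Token Exchange (RFC 8693) "act" (actor) claim. The claim values below are
# hypothetical; in practice they come from a validated, decoded access token.

from typing import Any, Dict, Optional

# A token issued to an AI agent acting on behalf of a human user.
decoded_claims: Dict[str, Any] = {
    "iss": "https://auth.example.com",       # hypothetical issuer
    "sub": "user-1234",                       # the human the agent acts for
    "aud": "https://api.example.com/orders",  # hypothetical resource
    "scope": "orders:read",
    "act": {                                  # RFC 8693 actor claim
        "sub": "agent-travel-assistant-42",   # the agent actually making the call
    },
}

def effective_actor(claims: Dict[str, Any]) -> Optional[str]:
    """Return the identity that actually performed the action, if delegated."""
    actor = claims.get("act")
    return actor.get("sub") if isinstance(actor, dict) else None

def audit_line(claims: Dict[str, Any]) -> str:
    """Produce an audit record that distinguishes the agent from the user."""
    agent = effective_actor(claims)
    if agent:
        return f"{agent} acted on behalf of {claims['sub']}"
    return f"{claims['sub']} acted directly"

print(audit_line(decoded_claims))
# -> "agent-travel-assistant-42 acted on behalf of user-1234"
```

Chained delegation, where one agent hands a task to another, can be represented by nesting further `act` claims, which speaks directly to the recursive delegation risk listed above.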
Industry Reactions and Recommendations
Experts within the field underscore the urgency of addressing these issues. Tobin South, Head of AI Agents at WorkOS, Research Fellow with Stanford's Loyal Agents Initiative, and Co-Chair of the OpenID Foundation's AIIMCG, warned: "AI agents are outpacing our security systems. Without industry collaboration on common standards, we risk a fragmented future where agents can't work securely across different platforms and companies." Atul Tulshibagwale, CTO of SGNL and Co-Chair of the AIIMCG, called the whitepaper "an important industry milestone, which captures all aspects of the intersection of AI and identity and access management."
Gail Hodges, Executive Director of the OpenID Foundation, said: "We know AI and Identity experts alike are trying to unlock Agentic AI use cases while security and identity experts are trying to ensure safeguards for security, privacy, and interoperability are incorporated." She described the whitepaper as a primer for approaching that challenge, noting that the AIIMCG will continue to triage specification requirements, assess priorities, and collaborate with peers in standards bodies to accelerate work on the most pressing requirements.
A Collaborative Effort for Future Security
The OpenID Foundation's whitepaper calls for joint efforts across the industry to advance AI securely. For today's AI agents, particularly those in simpler, single-company scenarios, it recommends immediate action using proven security frameworks: implement robust standards such as OAuth 2.0, and adopt standard interfaces such as the Model Context Protocol (MCP) to connect AI agents to external tools securely.
Rather than building custom solutions, organizations are urged to use dedicated authorization servers and to integrate agents into existing enterprise login and governance systems, ensuring every agent has a clear owner and is subject to rigorous security policies.
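To make that recommendation concrete, the sketch below shows the general pattern under stated assumptions: an agent authenticates to a dedicated authorization server using the standard OAuth 2.0 client credentials grant, then presents the short-lived bearer token when calling an external tool API. All URLs, credentials, and scopes are placeholders, and a real MCP server would be driven through an MCP client library and JSON-RPC messages; the plain HTTP call here is a simplification to keep the sketch self-contained.

```python
# Minimal sketch: an agent authenticating to a dedicated authorization server
# with the OAuth 2.0 client credentials grant, then calling an external tool
# API with the resulting bearer token. Endpoints and credentials are
# placeholders, not taken from the whitepaper or any specific product.

import requests

AUTH_SERVER_TOKEN_URL = "https://auth.example.com/oauth2/token"  # hypothetical
TOOL_ENDPOINT = "https://tools.example.com/search"               # hypothetical tool gateway

def get_agent_token(client_id: str, client_secret: str, scope: str) -> str:
    """Fetch a short-lived access token scoped to what this agent may do."""
    resp = requests.post(
        AUTH_SERVER_TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": scope},
        auth=(client_id, client_secret),  # client authentication (HTTP Basic)
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def call_tool(token: str, query: str) -> dict:
    """Call an external tool with the bearer token instead of shared secrets."""
    resp = requests.post(
        TOOL_ENDPOINT,
        json={"query": query},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    token = get_agent_token("agent-client-id", "agent-client-secret", "search:read")
    print(call_tool(token, "quarterly revenue report"))
```

The design point is that the agent never holds long-lived user credentials; it is issued its own narrowly scoped, expiring token by an authorization server the enterprise already governs.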
The Path Forward
While these short-term technical measures are vital, securing a future of highly autonomous, interconnected AI agents demands a more fundamental shift. Moving to a trustworthy, verifiable ecosystem of agent identities requires an evolution in how delegation, authority, and accountability are managed online, as well as how agents are provisioned and de-provisioned in the enterprise.
The concluding section of the whitepaper urges the entire industry to work together on open, interoperable standards, with specific recommendations for each key stakeholder:
- Developers and architects: Build on existing secure standards while designing for flexibility in delegated authority and agent-native identity, and align with enterprise profiles such as IPSIE for security, interoperability, and enterprise readiness.
- Standards bodies: Fast-track protocol development that formalizes these concepts, creating an interoperable foundation rather than fragmented proprietary systems.
- Enterprises: Treat agents as first-class citizens in identity and access management infrastructure, establishing robust lifecycle management from provisioning to de-provisioning, with clear governance policies and accountability.
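As one illustration of what agent lifecycle management could look like, the sketch below provisions and later de-provisions an agent identity through a SCIM 2.0-style API (RFC 7644). Modeling the agent as a SCIM User with a recorded human owner, along with the endpoint and token placeholders, are assumptions made for this example only; the whitepaper does not mandate SCIM or any particular schema.

```python
# Illustrative sketch: provisioning and de-provisioning an agent identity via a
# SCIM 2.0-style API (RFC 7644). Modeling the agent as a SCIM User resource and
# the endpoint/token values are assumptions made for this example only.

import requests

SCIM_BASE = "https://idp.example.com/scim/v2"  # hypothetical IdP endpoint
HEADERS = {
    "Authorization": "Bearer <admin-token>",   # placeholder admin credential
    "Content-Type": "application/scim+json",
}

def provision_agent(display_name: str, owner_email: str) -> str:
    """Create an agent identity with a clearly recorded human owner."""
    resource = {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": f"agent:{display_name}",
        "displayName": display_name,
        # Recording the accountable owner is the key governance point.
        "emails": [{"value": owner_email, "type": "work", "primary": True}],
        "active": True,
    }
    resp = requests.post(f"{SCIM_BASE}/Users", json=resource, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["id"]

def deprovision_agent(agent_id: str) -> None:
    """Remove the agent when it is retired, so no orphaned credentials remain."""
    resp = requests.delete(f"{SCIM_BASE}/Users/{agent_id}", headers=HEADERS, timeout=10)
    resp.raise_for_status()

if __name__ == "__main__":
    agent_id = provision_agent("expense-report-agent", "owner@example.com")
    deprovision_agent(agent_id)
```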
Absent unified industry action, innovation could be stifled by incompatible identity silos, exacerbating security risks.