As AI Agents Proliferate, Identity Security Gaps Persist, Report Finds

It points out that organizations are expanding AI capabilities faster than they can govern the identities that those systems create and depend on.

    As enterprises continue to adopt AI, identity security is increasingly becoming a constraint on work rather than a foundation to build on. AI tools are rapidly proliferating inside organizations—many of them poorly tracked, loosely governed, and granted broad access by default. This is creating a growing blind spot.

    A new report from Delinea, a security control provider for enterprises, suggests that the race to adopt artificial intelligence is outpacing the evolution of enterprise identity security—often by design. According to its global survey of more than 2,000 IT decision-makers, 90% of organizations say security teams are being pressured to relax identity controls to support AI initiatives, even as visibility into AI-driven identities remains limited.

    The report, Uncovering the Hidden Risks of the AI Race, points to a structural tension: organizations are expanding AI capabilities faster than they can govern the identities those systems create and depend on. As AI agents, automation tools, and machine accounts proliferate, so do non-human identities (NHIs)—and with them, new attack surfaces.

    Nearly 90% of respondents reported at least one gap in identity visibility. These gaps are most pronounced in AI environments, where discovery and monitoring of machine identities occur at almost twice the failure rate seen in legacy systems. This lack of visibility is not merely technical—it has operational consequences. Eighty percent of organizations say they cannot consistently explain why an NHI performed a privileged action, raising concerns about traceability, accountability, and incident response.

    The findings also highlight the persistence of outdated access models. Despite the dynamic nature of AI systems, 59% of organizations still rely on standing privileged access for NHIs and AI agents. This increases the risk that compromised or misused identities could operate undetected within critical systems.

    Perhaps most striking is what Delinea describes as an “AI security confidence paradox.” While 87% of respondents believe their identity security posture is ready for AI-driven automation, nearly half acknowledge deficiencies in governance specific to AI systems. Confidence in discovering NHIs is high (82%), yet fewer than one-third of organizations validate these identities’ activities in real time.

    The report highlights that identity is no longer limited to human users. As AI systems begin to access sensitive data and infrastructure autonomously, identity governance must extend to every entity—human or machine—interacting with enterprise environments.

    Delinea argues that addressing this gap will require a move toward unified identity frameworks that integrate real-time authorization, least-privilege access, and continuous auditing across all identity types. Without such controls, the acceleration of AI adoption may continue to introduce risks that organizations are neither fully measuring nor managing.
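    To make the controls Delinea describes concrete, the following is a minimal, hypothetical sketch (not any vendor's actual API—the `GrantBroker` and `Grant` names are invented for illustration) of how just-in-time, least-privilege credentials for non-human identities might work: access is scoped narrowly, expires on a short TTL instead of standing indefinitely, and every privileged action is written to an audit trail attributable to a specific identity.

    ```python
    import time
    from dataclasses import dataclass

    @dataclass
    class Grant:
        """A short-lived, narrowly scoped credential for one NHI."""
        identity: str        # e.g. "svc-ai-agent-42"
        scopes: frozenset    # explicit permissions, not broad defaults
        expires_at: float    # standing access replaced by a short TTL

        def allows(self, scope: str, now: float | None = None) -> bool:
            now = time.time() if now is None else now
            return scope in self.scopes and now < self.expires_at

    class GrantBroker:
        """Issues short-lived grants and logs every privileged action."""

        def __init__(self, ttl_seconds: float = 300.0):
            self.ttl = ttl_seconds
            self.audit_log: list[dict] = []

        def issue(self, identity: str, scopes: set) -> Grant:
            grant = Grant(identity, frozenset(scopes), time.time() + self.ttl)
            self.audit_log.append({"event": "issue", "identity": identity,
                                   "scopes": sorted(scopes)})
            return grant

        def act(self, grant: Grant, scope: str) -> bool:
            allowed = grant.allows(scope)
            # Each privileged action is recorded against a named identity,
            # addressing the traceability gap the report highlights.
            self.audit_log.append({"event": "act", "identity": grant.identity,
                                   "scope": scope, "allowed": allowed})
            return allowed

    broker = GrantBroker(ttl_seconds=60)
    agent = broker.issue("svc-ai-agent-42", {"read:reports"})
    print(broker.act(agent, "read:reports"))    # in scope: permitted
    print(broker.act(agent, "delete:records"))  # out of scope: denied, but logged
    ```

    The point of the sketch is the shape of the control, not the implementation: authorization is checked at the moment of action, access lapses automatically, and the audit log can answer "why did this NHI perform that privileged action?"—the question 80% of surveyed organizations say they cannot consistently answer today.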
