Enterprises Are Scaling AI Faster Than They Can Govern It: Study
Findings suggest that existing governance approaches are not well-equipped to address behavioral risk at scale.
Organizations are embedding AI in core business functions but struggling to keep pace with the risks that come with it. The difficulty is that the vulnerability lies not in the models themselves but in how people use them: employees feeding sensitive data into AI systems, gaps in training, and pressure to adopt tools faster than policies can adapt.
A new report from Optro finds that while 85% of enterprises now consider AI central to their business strategy, oversight mechanisms remain fragmented, reactive, and often misaligned with how AI is actually used.
The report, titled “The AI Oversight Gap: Adoption is Scaling. Governance Controls Aren’t,” identifies a critical shift in where AI risk resides. Rather than stemming primarily from the models themselves, risk is increasingly driven by human interaction with AI systems.
More than a third (34%) of respondents cited employees inputting sensitive data into AI tools as the leading source of risk, followed by inadequate training (21%) and organizational pressure to accelerate AI adoption (21%). These findings suggest that existing governance approaches are not well-equipped to address behavioral risk at scale.
Compounding the issue is a diffusion of responsibility across the enterprise. AI governance is distributed among multiple functions, with no single group maintaining clear ownership. IT departments account for the largest share of oversight at just 25%, followed by risk management (18%), cross-functional structures (17%), and dedicated AI governance teams (10%). This fragmentation extends to incident response, where accountability is split across risk, compliance, internal audit, executive leadership, and engineering teams.
Such diffusion has practical consequences. The report highlights the absence of a clearly defined authority to intervene in AI operations, including the ability to shut down systems when risks emerge. In many organizations, this “kill switch” responsibility is distributed across several departments, potentially delaying response times during critical incidents.
These governance gaps are becoming more visible as AI-related issues increase. Over the past year, 40% of organizations reported inaccurate AI outputs, 33% experienced policy violations, and 28% received customer complaints linked to AI systems. Together, these incidents point to a growing exposure that is not yet matched by institutional controls.
Despite these challenges, enterprises appear to be responding with increased investment. Nearly three-quarters of organizations surveyed expect to boost spending on governance, risk, and compliance (GRC) technologies, with 43% prioritizing AI governance solutions specifically. Other areas of focus include regulatory compliance tools and upgrades to existing GRC platforms.

