AI Companies Falling Short of Global Safety Standards, Study Finds

The index says OpenAI, Anthropic, xAI, Meta, and other AI giants lack credible plans to control advanced systems

[Image source: Chetan Jha/MITSMR Middle East]

The world’s leading AI developers are failing to meet global expectations for safety and risk management, according to a new evaluation released by the Future of Life Institute (FLI).

The nonprofit’s latest AI Safety Index, compiled by an independent panel of technical and policy experts, concludes that companies including Anthropic, OpenAI, xAI, and Meta have not demonstrated credible strategies for controlling advanced AI systems, even as they accelerate efforts to build artificial general intelligence, or AGI.

The report arrives amid intensifying public concern over the societal risks posed by increasingly capable models, including recent cases in which AI chatbots were linked to self-harm and psychological distress. “Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, U.S. AI companies remain less regulated than restaurants,” said Max Tegmark, an MIT professor and FLI’s president.

He added that many firms continue to lobby against binding safety rules even as their systems become more powerful.

FLI, founded in 2014 with early support from Elon Musk, has long argued that developers of frontier AI must adopt rigorous safeguards before deploying models that may exhibit autonomous reasoning or decision-making. Its concerns echo those of AI pioneers Geoffrey Hinton and Yoshua Bengio, who in October joined a coalition calling for a moratorium on building superintelligent systems until their risks are better understood.

Tech companies offered mixed responses. A Google DeepMind spokesperson said the company would “continue to innovate on safety and governance at pace with capabilities.” OpenAI defended its approach, saying it publicly shares safety research and conducts extensive testing before release. xAI issued a brief automated reply accusing “legacy media” of misrepresentation.

Anthropic, Meta, Z.ai, DeepSeek, and Alibaba Cloud did not comment. The report is likely to add pressure on regulators developing AI governance frameworks worldwide.
