AI Could Make Humans Obsolete, Pose Extinction Risk, Says Ex-OpenAI Researcher

While today pulling the plug on AI systems may be simple, it won’t remain an option in the future.


A former OpenAI researcher has sparked fresh concerns about a future in which AI plays a far more dominant role, stating that the risks may be not decades but barely a few years away.

Speaking on The Daily Show about AI systems spiralling beyond human control, AI-safety professional turned whistleblower Daniel Kokotajlo said, “There’s a 70% chance of all humans dead or something similarly bad.”

When asked to clarify, he said, “extinction.”

For Kokotajlo, the timeline is what makes the warning more concerning. The pace of advancement isn’t just quick; it’s accelerating every year. “The pace of AI progress is going to be fast, and it’s going to accelerate dramatically,” he said.

The threat is nearer than one might anticipate. “I would guess something like five years,” he noted.

Beyond the pace, control over AI systems will become a prime concern. Pulling the plug on an AI system may be simple today, but it won’t remain an option in the future. AI is increasingly being embedded into infrastructure across industries, including critical ones such as defence and military networks. Any attempt to halt such AI systems could become a complex task. In such scenarios, humans will not be dealing with isolated machines, but with systems that operate independently and at scale.

According to him, researchers have yet to develop a plan or system to ensure that highly advanced AI systems behave safely for people. “One of the core problems that we are dealing with is figuring out how to make an AI have goals, values, et cetera that you want them to have,” he said.

Without addressing this issue, the risks increase as systems become more powerful.

Last but not least, the race among companies to become AI superpowers is a concern. As they push to build more advanced systems faster and better than their competitors, shortcuts on safety measures become likely. Nor is slowing down an easy strategy to adopt: if one company pauses to develop better guardrails and safety nets, another outpaces it.

The former OpenAI researcher notes that a scenario exists where AI systems no longer need humans in any capacity. “There will be millions of AIs that are superintelligent,” he said, adding that future systems will be able to build and manage their own infrastructure.
