OpenAI Co-Founder Warns Humans Have No Way of Stopping ‘Superintelligent’ AI
OpenAI co-founder Ilya Sutskever warned this week that superintelligent artificial intelligence systems will be so powerful that humans will not be able to monitor them effectively, which could lead to the “disempowerment of humanity or even human extinction.”
Sutskever and Jan Leike, OpenAI’s head of alignment, wrote in a blog post that they are focused on tackling the problems that will be posed by “superintelligence,” which they describe as having a “much higher capability level” than artificial general intelligence (AGI).
They said they believe superintelligence could arrive as early as this decade, and that it is hard to predict just how fast the technology will develop.
“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” they said. “Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.”