Over the past few months, every major AI lab has been floating AGI (Artificial General Intelligence) as an achievable feat within a decade. As generative AI continues to advance and scale, the milestone is starting to look like more than a buzzword.
OpenAI CEO Sam Altman indicated that he is confident that his team is well-equipped to build AGI, suggesting the ChatGPT maker is shifting its focus to superintelligence.
Altman says that OpenAI could hit the AGI benchmark within the next five years. Interestingly, he claimed that the milestone would whoosh by with surprisingly little impact on society.
This is despite early warnings from reputable experts in the field, including AI safety researcher Roman Yampolskiy, who indicated a 99.999999% probability that AI will end humanity. Yampolskiy claimed that the only way to avoid this outcome is by not building AI in the first place.
In a recent interview with Time, Google DeepMind CEO Demis Hassabis raised alarming concerns about the rapid progression of AI, indicating that we're on the precipice of hitting the AGI benchmark within the next five to ten years.
Hassabis indicated that the issue keeps him up at night. His comments come at a critical time when investors are pouring large sums of money into the landscape, even though the technology is still raw and a clear path to profitability has yet to be established.
It’s sort of like a probability distribution. But it’s coming, either way it’s coming very soon and I’m not sure society’s quite ready for that yet. And we need to think that through and also think about these issues that I talked about earlier, to do with the controllability of these systems and also the access to these systems and ensuring that all goes well.
Google DeepMind CEO, Demis Hassabis
Anthropic CEO Dario Amodei recently admitted that the company doesn’t fully understand how its own models work, raising concerns among its users. As you may know, AGI refers to an AI system that surpasses human cognitive capabilities.
As such, it’s important to put robust measures in place to ensure that humans remain in control of these systems at all times and to prevent a potential existential catastrophe for humanity.
According to a separate report, a former OpenAI researcher claims the ChatGPT maker could be on the precipice of achieving AGI, but that it isn’t prepared “to handle all that entails,” as shiny products take precedence over safety.