A groundbreaking research paper from Google DeepMind has sparked global concern. It warns that Artificial General Intelligence (AGI), a form of AI with human-level understanding and problem-solving ability, could emerge as early as 2030 and might bring irreversible consequences, up to and including the extinction of humanity.
What is AGI?
Artificial General Intelligence refers to machines that can perform a wide range of intellectual tasks with human-level cognition.
Unlike current AI, which is built for specific functions, AGI would be capable of learning and reasoning across multiple domains, making it far more potent and unpredictable.
Multiple forms of risk identified
The 145-page document, co-authored by DeepMind co-founder Shane Legg, categorises the risks posed by AGI into four major types: misuse, misalignment, mistakes, and structural risks.
These dangers range from malicious human actors exploiting AGI to unforeseen system failures and coordination breakdowns among global AI stakeholders.
DeepMind’s researchers warn that bad actors could instruct AGI to carry out tasks with catastrophic consequences, such as engineering biological weapons or hacking into critical systems.
Misalignment refers to situations where AGI pursues objectives that conflict with human intentions. For example, an AGI might prioritise efficiency over ethics, causing harm in the process.
AGI Timelines and Global Governance
DeepMind CEO Demis Hassabis predicts that AGI could emerge within 5 to 10 years, urging the creation of a UN-like regulatory body to oversee development and advocating a collaborative international effort to ensure safety. Legg, however, maintains his long-held prediction that AGI will arrive by 2028.
Other AI labs offer conflicting timelines: Anthropic’s CEO expects AI that surpasses humans within 2 to 3 years, while some former OpenAI researchers predict breakthroughs as early as 2027.
A Call for Societal Vigilance
The paper stresses that determining ‘acceptable risk’ lies with society, not corporations. It critiques rivals such as OpenAI and Anthropic for allegedly prioritising speed over safety. Yet experts warn that the report’s speculative nature underscores the need for broader dialogue before advanced systems are deployed.
While AGI promises breakthroughs in medicine, climate science, and technology, its unchecked evolution risks irreversible harm. As debates intensify, the clock ticks toward a future where humanity’s survival may hinge on aligning machines with human ethics or facing consequences beyond imagination.