A week ago, Google DeepMind released a 145-page document claiming that Artificial General Intelligence (AGI) could emerge by 2030 and could pose an existential threat to humanity. The warning from Google has sparked debate among ethicists and technology experts, who hold diverging opinions on the paper's claims.
Simply put, the Google DeepMind paper discusses the “severe harm” that AGI could cause, with its arrival considered plausible by 2030. The paper also suggests that the severity of that harm is not for Google DeepMind to measure; rather, it is for society to judge, based on its risk tolerance and its conceptualization of harm. As Google attempts to build AI systems that can exceed human levels of intelligence, the company has begun grappling with the possibility of humanity's extinction at the hands of AGI. Nowhere does the paper specify how AGI would lead to human extinction; it simply warns the public of the harm AGI could do to civilization.
DeepMind cofounder Shane Legg has already pegged 2028 as his median forecast for the advent of AGI, and has detailed the steps that Google DeepMind and other similar companies must take to prevent irreversible harm to a civilization that thrives on its intelligence. The paper classifies the risks into four categories:
- Misuse: Intentional use of AI to cause harm
- Mistakes: Unintended failures of AI due to training flaws
- Misalignment: AI systems that knowingly act against the intentions of their developers
- Structural risks: Conflicting incentives among different groups of people, countries, enterprises and other stakeholders.
Moreover, the paper highlights shortcomings in the approaches adopted by Anthropic and OpenAI, arguing that they lack sufficient oversight, security protocols and rigorous training.
However, some AI safety experts remain unimpressed by the paper. They argue that AGI is still so loosely defined that we may not recognize when it has actually emerged, and that a system intelligent and unpredictable enough to qualify could also create massive political and social turmoil. Some experts remain sceptical that AGI can be achieved at all within the next five years: capabilities such as common sense and the ability to learn from very few examples remain domains of human intelligence that AI has so far been unable to enter. The world still needs collaboration and dialogue on what AGI actually is and how it can be prevented from causing harm to society.
The stakes in the AI world are high. With a few leading companies pushing the frontiers of innovation by leaps and bounds almost every month, it is not unreasonable to think that AGI may arrive sooner than expected. And if it is just five years away, we are largely underprepared for the implications it may have for our civilization, for what we call humanity.