Geoffrey Hinton – often called the Godfather of AI – says that AI-led human extinction is possible within the next 30 years. Sam Altman – CEO of OpenAI – says that superintelligence is just around the corner and may knock on our door in a few years. Dario Amodei – CEO of Anthropic – says that superintelligence will be far more capable than Nobel Laureates across several fields. And the list goes on…
Superintelligence – intelligence that surpasses human intelligence, often discussed alongside Artificial General Intelligence (AGI), which denotes human-level ability – has been a matter of intense speculation and discussion over the past few years. Scientists, ethicists and policymakers have been huddling to brainstorm ways to manage the onset of what many consider inevitable. No wonder the statements of industry stalwarts add spice to the recipe of fear and rumour.
Is the advent of superintelligence indeed closer than it appears?
Is it going to sound the death knell for human civilization?
Is singularity almost in sight?
Well, we don’t have concrete answers yet. However, there are aspects of this subject that do warrant attention and deliberation.
Contextual comprehension vs computational competence
The first question that naturally arises when one thinks of superintelligence is how we define intelligence in the first place. Is it merely the ability to perform a particular task? In that case, there are animals that perform specific tasks better than humans… does that make them more intelligent than us?
For example, weaverbirds such as the Baya Weaver are adept at building elaborate nests, doing their own computation of how the nest will stay intact in the face of wind, intruders and other disturbances. That is something humans may not be able to reason out quickly. Does that mean we pass the crown of intelligence over to the weaverbirds?
One theory holds that all aspects of human intelligence are fundamentally forms of computation and are therefore replicable by machines. If so, human intelligence could someday be replicated, and even surpassed, by machines that keep improving as they learn from massive inflows of data. Alan Turing devised the Turing Test for machines, a method of inquiry for checking whether an AI system's responses are indistinguishable from a human's. However, there are two concerns. First, a handful of programs, including the chatbot Eugene Goostman, have already been claimed to pass the Turing Test – albeit amid some counterclaims. Second, passing the Turing Test may not equate to achieving human-like intelligence.
If simply solving a particular task were indeed the mark of intelligence, we would rank behind the several animals that can do a particular task better than we do, like the weaverbird mentioned above.
Therefore, if computational competence is not the ideal indicator of intelligence, contextual comprehension must be accounted for when judging a machine's intelligence. This implies that an AI must have a deep contextual understanding of a problem statement, exercise independent reasoning and process tasks without human oversight. If an AI system cannot analyze a situation holistically and pursues an objective with zero regard for ethics and accountability, would you call it superintelligent?
Where do the jobs go?
That's one of the leading questions on everyone's mind! If superintelligence achieves mastery over the tasks humans perform today, where do humans go? Do we, as creators, get served incessantly by superintelligence? Or do we, as noobs, get despised, sidelined and eventually kicked out of existence by entities more intelligent than us?
The question, simply, is this: if superintelligence can take over most human tasks, do humans lose their value on this planet?
Elon Musk has already stated that soon none of us may have a job. He has even claimed that in a few years' time, possibly before 2030, AI may be able to perform tasks that all humans combined may fail to do. If that turns out to be true, how are the hordes of humans across the globe expected to earn a livelihood? Are corporations so focused on mere efficiency and technological advancement that they are ignoring the need for inclusive development enabled by technology? Are the titans of some of the most powerful enterprises creating things that threaten the very existence of their own kind?
Andrew McAfee – principal research scientist at MIT Sloan – opines otherwise, saying that AI may simply enhance the capabilities of humans rather than replace them at work. He argues that AI will bring a platoon of clerks and coaches, ready to help us on demand and take on jobs delegated to them by humans.
However, the field of AI is rife with concerns that superintelligent machines may soon be the drivers of massive unemployment and income inequality.
Unconstrained AI intrusion into areas that demand high cognitive and computational skill may eventually erode the dominion of humans, which has been built largely on the foundation of our collective intelligence.
Redundancy staring at us?
If superintelligence goes on to achieve all the competencies that humans display, will humans be left redundant? Humans have an innate desire to create and to add meaning to the world around them. If humans are left without any meaningful work, what purpose will they live for? Will there be a world where humans are no longer required? And what if superintelligence believes so, and takes action to relieve the planet of a massive liability?
Well, some experts may brush this speculation off as a conspiracy theory. But we cannot ignore the fact that companies large and small have already been laying off thousands of people as they gradually adopt AI in their businesses. With AI creating art, writing stories, performing calculations, answering queries, analyzing data, and what not… the day is not far when superintelligence will take up a major share of what humans do today.
Unless we design new jobs, just as we did when computers arrived in workplaces, we will be in for massive turmoil.
Hinton has already said that AI development has outpaced the expectations of many experts globally, and that for the first time in the history of human evolution we may encounter something more intelligent than ourselves. That will be unprecedented and may require colossal effort from humans to stay relevant in the larger scheme of things.
AI’s conquest of trust and control
When large volumes of work shift into AI's basket, human overseers will naturally have to ensure that AI's outputs are trustworthy. Imagine a situation where AI is tasked with discovering a new drug to deal with a pandemic, and it formulates a spurious or harmful drug that pharmaceutical companies trust blindly. Even visualizing the hypothetical aftermath is enough to send shudders down the spine.
If AI begins making decisions on behalf of humans in healthcare, media, legal judgments, financial disputes, infrastructure and defence, will we be able to trust it blindly, without any regulations or guardrails?
If a situation ever arises in which AI falters and lives are lost globally, the entire fabric of trust will be broken. Subjugation by superintelligence, and the loss of control over systems beyond human comprehension, could spread chaos among the masses – eroding trust and erupting into civil strife, protests and violence amid a collapse of law and order. Such a situation could threaten the stability of enterprises, which may find themselves in the eye of a storm of people who have long been suppressed and destitute. Soon, the collapse of human civilization and its moral code could undermine all the merits of superintelligence and spread menace across different parts of the world.
Can the brittleness of AI save humans?
Talking of humans turning against superintelligence and the enterprises behind it, humans may also find a way to triumph over AI by exploiting its vulnerabilities – what is known as the "brittleness of AI". For example, changing the rules of a system can render it useless and strip it of its ability to achieve a particular objective. Humans are adept at adapting to such changes in the rules and parameters around them; an AI system may not be. Fiddling with the conditions an AI system was trained for can render it futile, thereby enabling human intelligence to prevail over AI, as the sketch below illustrates.
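To make the idea concrete, here is a minimal, purely illustrative Python sketch; the task, thresholds and numbers are all invented for this example. A model that has internalized the rule it was trained under keeps applying that rule even after the environment's rule shifts, and its accuracy collapses, while the adjustment a human would make is trivial:

```python
# A purely illustrative sketch of AI "brittleness": a model fitted to one
# rule keeps applying it after the rule changes, so its accuracy collapses.
# The task, thresholds and numbers are invented for demonstration only.
import random

random.seed(0)

def environment_label(x: float, threshold: float) -> bool:
    """The environment's current rule: positive when x exceeds the threshold."""
    return x > threshold

def trained_model(x: float) -> bool:
    """A model fitted when the threshold was 0.5; it never re-learns."""
    return x > 0.5

def accuracy(current_threshold: float, n: int = 10_000) -> float:
    """Fraction of random inputs on which the frozen model matches the rule."""
    xs = [random.random() for _ in range(n)]
    hits = sum(trained_model(x) == environment_label(x, current_threshold)
               for x in xs)
    return hits / n

print(f"Accuracy under the training-time rule: {accuracy(0.5):.2f}")  # ~1.00
print(f"Accuracy after the rule shifts to 0.9: {accuracy(0.9):.2f}")  # ~0.60
```

The point of the sketch is not the arithmetic but the asymmetry: the frozen model has no mechanism to notice that the rule changed, whereas a human in the same position would re-infer the new rule after a handful of surprises.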
Is superintelligence just hype?
Superintelligence still lives in the speculative zone of innovation. Surpassing human intelligence would require AI to achieve at least whole-brain emulation, a feat that is still nowhere in sight. Though we cannot know what innovation may be brewing in which corner of the world, it is quite improbable that AI will soon be able to do everything a human brain can do, let alone supersede those abilities. Some experts also opine that simply increasing computational power – with supercomputers and, lately, quantum computers – will not lead to superintelligence. Human intelligence is far more than computation; it also involves consciousness and an awareness of the ecosystem in which our intelligence manifests and expresses itself.
Considering that AI's outputs are still not of the quality expected from a truly intelligent machine, superintelligence does sound more like a hype-generating marketing gimmick than a reality.
However, some experts say that even though we cannot predict the exact timeline of the emergence and pervasion of superintelligence, it is hard to deny that AGI will make its mark in the years to come. Amara's Law – named after the futurist Roy Amara – says that humans tend to overestimate a technology's short-term impact while underestimating its long-term impact. That is why, even though AI does not seem to match the expectations we have set for it in the short term, it may yet disrupt our civilization in a profound way in the future.
Singularity on the way?
Ray Kurzweil – the famed inventor, now at Google – wrote in his 2005 book The Singularity Is Near that humanity will reach the singularity by 2045. Singularity refers to a stage of technological growth at which some irreversible change occurs: either a takeover of the world by AI, or possibly the merger of humans with AI.
The fusion of humans and machines into cybernetic organisms – more aptly called "cyborgs" – would usher in an era where human capabilities multiply manifold and intelligence levels explode beyond our imagination.
A stage may come when AI can design its own improvements and execute them without supervision. That will be the day when AI systems need to be docile enough to explain to humans how to keep them under control. But will any entity that gains awareness of its independence of thought and action ever let another entity control it? That is a question worth pondering.
What does tomorrow hold for us?
We don't know. Yes, we absolutely do not know! While we can never confidently rule out the possibility of superintelligence showing up soon, we do need to step up our game as a human collective and introduce guardrails and regulatory frameworks that keep AI safe for us. There is no harm in superintelligence augmenting our capabilities; the problem begins when it starts to substitute for us in areas where we want to preserve our presence. A lot related to superintelligence is in store for us, and we may not want all of it to happen.