Can AI mitigate AI-generated disinformation?

Imagine a piece of news pops up in your Twitter feed tomorrow, reporting a collapse of the financial markets owing to the downfall of a major bank. No sooner does the news spread with the appropriate hashtags than people begin to panic and sell off shares of the bank concerned. A section of the population may also rush to withdraw their savings from the bank, precipitating an actual crisis for it. With enough likes and shares, the news can send ripple effects through the population and the economy at large.

What if, the next day, the news turns out to be fake? By then, of course, the bank's management would have informed all stakeholders of the falsehood, and the media industry would have done its best to allay the fears. The damage, however, would already be done. The blow that such a disinformation attempt deals to the fabric of trust would send shockwaves across the political, economic and social spheres. Attempts at spreading disinformation are nothing new, yet the unprecedented pace at which artificial intelligence (AI) can generate and spread disinformation sends shivers down the spine of any lawmaker.


Why does it get difficult for social media platforms?

Social media firms face a grand challenge in weeding out disinformation because AI engines are not adept at detecting AI bots on social media platforms. Basically, AI is weak at catching its own brethren. Human content moderators are usually the most adept at identifying accounts that exhibit bot-like behaviour rather than belonging to genuine users. However, companies have cut such moderation teams down: Meta has axed employees from its content moderation teams, while YouTube and X have mirrored those actions. In the pursuit of efficiency and cost savings, these platforms have dismantled their strongest wall of defence against disinformation. Consequently, they have witnessed a torrent of malicious AI-generated content geared towards advertisements. There is an additional challenge.

When AI engines are deployed to identify disinformation-generating accounts, they also flag accounts that belong not to bots but to genuine humans. As a result, users who merely commented on or shared the disinformation get banned despite not being the creators of those posts, curtailing their freedom of expression. The sketch below illustrates how easily this happens.
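To see why such false positives are so easy to produce, consider a toy bot-scoring heuristic in Python. Every feature name, weight and threshold here is invented for illustration; real platforms use far richer signals, but the failure mode is the same: an enthusiastic human who posts heavily and follows many accounts can cross the same threshold as a spam bot.

```python
# A minimal sketch of threshold-based bot scoring (all weights invented).
def bot_score(posts_per_hour, account_age_days, follower_ratio):
    """Toy heuristic: higher score means more bot-like."""
    score = 0.0
    score += min(posts_per_hour / 20.0, 1.0) * 0.5    # very high posting rate
    score += 0.3 if account_age_days < 30 else 0.0    # brand-new account
    score += 0.2 if follower_ratio < 0.1 else 0.0     # follows many, few follow back
    return score

accounts = {
    "spam_bot":       dict(posts_per_hour=50, account_age_days=3,   follower_ratio=0.01),
    "human_superfan": dict(posts_per_hour=25, account_age_days=400, follower_ratio=0.05),
}

THRESHOLD = 0.6
for name, features in accounts.items():
    s = bot_score(**features)
    print(f"{name}: score={s:.2f} -> {'flagged' if s >= THRESHOLD else 'ok'}")
    # Both accounts score above 0.6, so the genuine superfan is
    # flagged alongside the bot -- a false positive.
```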

 

Generative AI is the emerging champion of disinformation

Generative AI (GenAI hereafter) raises the bar of disinformation far higher. Generative adversarial networks and related algorithms make it easy for anyone to impersonate someone by mimicking their voice, facial features and expressions, and thereby create fake multimedia content that can mislead anyone. Malicious content can be passed off as genuine information by unscrupulous actors. Such is the capability of GenAI that it can create the desired malicious content at meteoric speed and with immaculate precision, making it difficult even for a careful observer to distinguish it from genuine information.

A day may come when a piece of text or a piece of visual information online has no human author or creator behind it at all, thanks to GenAI.

Though the prevalent GenAI chatbots, such as ChatGPT and Google’s Gemini, have safeguards in place and do restrict the creation of potentially harmful content, it is not impossible to build a new chatbot on top of Large Language Models (LLMs) that churns out hate speech and divisive content. Even where GenAI platforms are capable of generating trustworthy content, it is equally easy to instantly manufacture content with an overt tone of sincerity that inspires blind trust among information consumers. Research by the University of Zurich in 2023 found that GenAI could produce disinformation so compelling that study participants could not distinguish fake content created by GPT-3 from true content on the X platform.
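Can AI itself catch such text? A common baseline is to train a classifier that separates machine-written from human-written samples. Below is a minimal sketch using scikit-learn; the four training examples and the test sentence are invented placeholders, and a real detector would need thousands of labelled samples while still misfiring, as the Zurich result suggests.

```python
# A toy AI-text detector: TF-IDF features + logistic regression.
# The training data is an invented placeholder; real detectors need
# large labelled corpora and still produce false positives/negatives.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Breaking: major bank collapses, withdraw your savings now!",    # machine-written (1)
    "Officials confirmed the bank remains solvent after an audit.",  # human-written  (0)
    "Experts say markets will crash tomorrow, share this post!",     # machine-written (1)
    "The central bank published its quarterly stability report.",    # human-written  (0)
]
labels = [1, 0, 1, 0]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

sample = "Urgent: the bank is failing, sell your shares immediately!"
prob_machine = detector.predict_proba([sample])[0][1]
print(f"Estimated probability of machine authorship: {prob_machine:.2f}")
```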

There have been reports of some reputed media houses using GenAI to post news while pretending it was authored by real reporters.

This raises concerns about the veracity of such news, given that GenAI algorithms can pick up content from biased datasets. Moreover, GenAI can hallucinate, i.e. create information that does not exist. Disinformation of this kind can rupture the fabric of trust in the digital realm. Eventually, a day may arrive when people cannot trust even the trustworthy sources of information, leading to a collapse of the trust economy.


Authoritarians can make us puppets

A society behaves and cooperates based on the knowledge it holds. If a government or the authoritarians in a regime control the information shared with people, the citizens naturally become dolls in the hands of these agents, harbouring thoughts about anything based only on what is communicated to them. This can lead to a situation where governments use AI to create disinformation on any topic to rally citizens behind a spiteful cause, incite them to violence, divert their attention from genuinely burning issues and mislead their sentiments on matters that affect them and society at large. With GenAI, this kind of authoritarianism gets amplified, bringing the sentiments of large swathes of the population under control.


Political instability could be just around the corner

In India, various actors have already circulated imitations of the prime minister, Narendra Modi, showing him singing songs or saying things he never said. Even women wrestlers protesting against certain government authorities were deepfaked to malign their image. Other Indian politicians have likewise seen deepfakes of themselves emerge to mislead the population. In Mexico, a deepfake of a political figure surfaced in which he appeared to voice his preference for a specific candidate in mayoral elections. In the United States, several deepfakes of the president, Joe Biden, have surfaced that tend to mislead audiences under the garb of political satire.

Though people can currently identify such deepfakes through detailed inspection, there may come a day when the distinction is no longer possible with the naked eye, and people may act or react to these deepfakes, leading to political instability and social disharmony.
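One defence that does not rely on spotting visual artefacts is cryptographic provenance: the original publisher signs the media file, and viewers verify the signature before trusting a clip. The sketch below uses an HMAC with a shared secret purely for illustration; real provenance standards such as C2PA content credentials use public-key certificates instead.

```python
# A toy provenance check: the publisher signs media bytes; a tampered
# or regenerated file fails verification. The shared secret key is a
# simplification -- real systems sign with private keys and certificates.
import hashlib
import hmac

PUBLISHER_KEY = b"hypothetical-newsroom-signing-key"

def sign_media(media_bytes: bytes) -> str:
    """Publisher side: attach this tag when releasing a video or image."""
    return hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Viewer side: recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

original = b"...original interview footage..."
tag = sign_media(original)

deepfaked = b"...manipulated footage..."
print(verify_media(original, tag))   # True  -> provenance intact
print(verify_media(deepfaked, tag))  # False -> flag or reject the clip
```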


Where will the responsibility lie tomorrow?

There is still a raging debate about who owns the responsibility for malicious disinformation generated by AI. While some opine that it is the creators’ responsibility to exercise caution during content generation, others say that AI tool developers need to be responsible for implementing safeguards. Yet another section of the population advocates for large teams of moderators at social media companies. Until strict policies and comprehensive frameworks are designed and implemented, each stakeholder may keep passing the ball into another’s court. Till then, we, as a society, have to be cautious of any information we come across and exercise due diligence in verifying its trustworthiness.
