AI labelling needs an overhaul…seriously!

The push for labelling AI content has never been bigger. Policymakers across the world, particularly in the Global North, are now concerned about the lack of effective policies to mitigate the perils of AI-generated content online. No wonder the fake video of US President Joe Biden cursing his political rivals, a deepfake dressed up with PBS NewsHour branding, recently took the world by storm. With cheap tools and accessible AI infrastructure at their fingertips, miscreants are free to foment chaos and misinformation online.

How do we mitigate the threat of AI-generated content, including deepfakes, manipulated videos and spurious multimedia?

Enter AI labels

One route that experts usually advocate is labelling. It typically refers to disclosing whether a piece of content was generated partly or entirely by AI. When you browse Meta's platforms (such as Facebook and Instagram), you will find some posts marked by the platform as AI-generated. Such labels are meant to prepare viewers to approach the content with caution: part of what they are seeing did not occur in reality, so they should think twice before forming judgments about the people and circumstances presented in the post.

Labels, in short, aim to notify a platform's users that they are consuming AI-generated content, which can be partly or entirely removed from reality.

However, labelling AI content is insufficient on its own: a label neither deters users from consuming the information nor specifies whether the content is harmful. Here's where a Pandora's box opens.

Why is labelling not enough?

Suppose you are presented with two pieces of information, one carrying an AI label and the other without, and you are then asked how you perceive each. A few possibilities may arise:

  • You may consider the AI-generated content to be spurious
  • You may consider the unlabelled content as genuine
  • You may not consume the AI-generated content
  • You may like to consume the AI-generated content without forming judgments about it
  • You may consume the AI-generated content and form judgments based on it
  • You may find the unlabelled content more harmful than the labelled content
  • …and so on… the list may run longer.

This is why we need more emphasis on developing resilient and meaningful policies to mitigate the ill effects of AI. Just because a piece of content has been labelled does not mean the information is harmful: plenty of YouTubers now create meaningful, factually correct videos using generative AI. At the same time, a label is no guarantee that the content won't be consumed; people may know something is misleading and still enjoy watching it because it appeals to their sense of leisure. Nor is every piece of unlabelled content guaranteed to be genuine. Some startups with limited funds may not be able to invest in expensive labelling processes, leaving their content unlabelled.

Simply nudging users towards a negative view of anything AI-generated also raises ethical questions, since many creators use generative AI tools responsibly, and a blanket stigma can amount to a violation of freedom of expression. Not to forget, a genuine, unedited piece of content can be more harmful than AI-generated content. Penalizing information merely because it is AI-generated is therefore not a good idea.

These issues came to the fore when Meta users protested after the company announced it would place a "Made with AI" label on photos created using AI. Users saw the labels as punitive, and Meta eventually changed the label to "AI Info", saying it would rely on an assessment of the amount of AI editing before adding it.

AI vs humans (image credits: DALL·E)

Will such assessments help?

We hear your gut feeling… such assessments won’t help much. Why? The reason is simple.

The magnitude of editing is not a reflection of the magnitude of harm.

There are videos, such as the one of Joe Biden cursing his opponents, in which only a single word was edited. Sometimes a small part of a video is edited so that it communicates the opposite of what happened in reality. That is why labels that merely reflect the extent of AI-led manipulation are hardly enough. What we need is a mechanism that measures harm, and that will certainly require looking at content through a human lens to gauge levels of sarcasm, hostility, mockery, hate and hidden provocation, among other negative signals. Meta's adoption of C2PA, a technical standard for reading impartial technical signals embedded in content posted on its platform, has raised further questions about the effectiveness of such standards.
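
To make the point concrete, here is a minimal, purely illustrative Python sketch. It does not reflect Meta's pipeline or the C2PA standard, and the harm_score function is a hypothetical stand-in for the kind of human or classifier judgment described above. It simply shows how a one-word edit can keep the "amount of editing" tiny while inverting the message entirely.

```python
# Illustrative sketch only: why the fraction of edited material is a poor
# proxy for harm. harm_score() is a hypothetical placeholder, not a real API.

original = "I will always respect my political rivals"
edited   = "I will always disrespect my political rivals"

def edit_fraction(before: str, after: str) -> float:
    """Share of word positions that differ between two versions."""
    b, a = before.split(), after.split()
    changed = sum(1 for x, y in zip(b, a) if x != y) + abs(len(b) - len(a))
    return changed / max(len(b), len(a))

def harm_score(before: str, after: str) -> float:
    """Hypothetical stand-in: a real system would need human review or a
    trained classifier to judge hostility, mockery or misrepresentation."""
    meaning_flips = {"respect": "disrespect"}  # toy knowledge for the sketch
    flipped = any(
        x in meaning_flips and meaning_flips[x] == y
        for x, y in zip(before.split(), after.split())
    )
    return 1.0 if flipped else 0.1

print(f"edit fraction: {edit_fraction(original, edited):.2f}")  # ~0.14, one word changed
print(f"harm score:    {harm_score(original, edited):.2f}")     # 1.00, meaning inverted
```

A label keyed to the first number would barely register; a label keyed to the second would flag the content outright.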

It is equally important to consider the inferences viewers may draw from labelled and unlabelled content, and to assess those inferences against regional, cultural, political and economic contexts, because people interpret information differently depending on their cultural, political and social settings.

Standards need to evolve

Experts also opine that instead of focusing on the extent of manipulation, viewers should be informed about the method of manipulation. This includes providing technical descriptions of how AI was applied in creating a piece of content. Research suggests that labels describing the process through which audio-visual content was manipulated help foster trust in media among audiences.

Moreover, AI labels should not drive unnecessary scepticism towards trustworthy media, diminish trust in genuine information, or strip content of its context.

Transparency is the key

Instead of treating labels as a black box, it is crucial to communicate to users how the labelling was done.

AI literacy needs to focus on educating the masses about the pros and cons of AI-generated content. Generative AI platforms and designers of AI-generated content must share insights on how AI content labelling techniques affect consumers' perceptions of trust. Platforms and designers must also communicate to users the process by which a piece of content was created or edited. It should be noted that AI-generated media are not monolithic: the amount and type of algorithmic intervention during AI-driven editing varies across designers, platforms and types of information.
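
As a thought experiment, a process-disclosure record attached to a post might look something like the sketch below. This is not any platform's or standard's actual schema; every field name is a hypothetical illustration of the kind of detail that could be surfaced to viewers.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical process-disclosure record; field names are illustrative only
# and do not correspond to any real platform's or standard's schema.
@dataclass
class ProcessDisclosure:
    tool_used: str                    # e.g. the generative model or editor
    intervention: str                 # "fully generated", "partially edited", ...
    edited_elements: list[str] = field(default_factory=list)  # what was changed
    human_review: bool = False        # whether a human checked the edit
    explanation: str = ""             # plain-language summary shown to viewers

label = ProcessDisclosure(
    tool_used="text-to-speech voice model",
    intervention="partially edited",
    edited_elements=["audio track: one word replaced"],
    human_review=True,
    explanation="The speaker's audio was altered; the visuals are unchanged.",
)

print(json.dumps(asdict(label), indent=2))
```

The point is the shape of the disclosure, not the specific fields: a viewer who sees how the content was made is better placed to judge it than one who sees only a generic "AI" badge.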

Bring the community together

Policymakers may frame stringent policies and yet fail to make them inclusive. That is where communities must join hands and enable community-driven methods of content labelling: a setup in which participants on a platform work together to annotate and report content based on the potential for harm it carries.

Democratized methods of evaluating and labelling content can help foster trust in social media platforms and online information-sharing avenues.

In 2021, Twitter rolled out Birdwatch, a programme that grew to over 100,000 contributors who commented on and annotated potentially misleading content, including AI-manipulated posts. Such distributed labelling methods help create a trust-centric approach to content labelling.
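
To illustrate the idea, and only the idea (this is not Birdwatch's or Community Notes' actual algorithm), a community-driven labelling system needs some rule for turning many individual annotations into one displayed label. The sketch below assumes a simple consensus rule in which a note is shown only when raters from different groups agree.

```python
from collections import defaultdict

# Illustrative only: a toy consensus rule for community labelling, not the
# algorithm used by Birdwatch / Community Notes or any real platform.
# Each annotation is (rater_group, verdict): verdict is True if the rater
# considers the content misleading or harmful.

def should_show_label(annotations: list[tuple[str, bool]],
                      min_groups: int = 2,
                      min_agreement: float = 0.7) -> bool:
    """Show a community label only if raters from at least `min_groups`
    distinct groups flagged the content and overall agreement is high."""
    votes_by_group: dict[str, list[bool]] = defaultdict(list)
    for group, verdict in annotations:
        votes_by_group[group].append(verdict)

    flagging_groups = sum(1 for votes in votes_by_group.values() if any(votes))
    all_votes = [v for votes in votes_by_group.values() for v in votes]
    agreement = sum(all_votes) / len(all_votes) if all_votes else 0.0

    return flagging_groups >= min_groups and agreement >= min_agreement

annotations = [
    ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False),
    ("group_c", True),
]
print(should_show_label(annotations))  # True: broad, cross-group agreement
```

Requiring agreement across groups, rather than a raw majority, is one way such systems try to keep labels from becoming a tool of any single faction.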

What to expect Tomorrow?

Content labelling is absolutely necessary, but its approach needs to evolve. We need to humanize labelling and ensure that the notification atop any piece of information helps the audience make informed decisions rather than simply run away from AI-generated content. Policymakers must also ensure that their policies and labelling techniques do not end up being weaponized against credible content creators. Labelling techniques still have miles to go before they can meet the needs of a safer AI-driven world.


Tomorrow
