Artificial intelligence has become integral to content creation, especially content distributed to mass audiences through television, print, and the web. Unsurprisingly, deepfakes, spurious information, and manipulated media have also begun making their way into the mainstream. If current trends are read correctly and objectively, it is clear that people will soon struggle to distinguish genuine information from AI-generated content, given the ever-growing sophistication of generative AI models. It is equally easy to surmise that this cognitive failure can fuel misinformation and the consequent exploitation of people on social, economic, and political grounds. Strict measures by policymakers and law enforcement agencies are therefore no longer a nice-to-have; they are an absolute necessity.
US FEC takes the first step
The United States Federal Election Commission (FEC) has taken notice of the threat posed by deepfakes and the use of AI in elections. Malicious AI-generated content can stoke fear, hatred, ridicule, and a host of other hostile emotions toward a particular candidate, to the point of altering election outcomes. Several US states have enacted laws aimed at the emerging threat of AI in the political realm. However, several issues plague the policymaking process.
The first concern relates to a point raised by Ellen Weintraub, the FEC's Vice Chair: no single agency has the jurisdiction or the capacity to deal with every aspect of the threat posed by AI-generated content. Although the FEC has rightly flagged AI's interference in elections, its authority is limited to electoral candidates. It can regulate the use of AI only in advertisements for federal candidates, not content aired routinely by television and radio stations. Worse, its authority does not extend to independent issue campaigns or to state and local elections; it has jurisdiction over federal elections alone.
Some other entity is therefore required to fill this policy gap and bring campaigns beyond federal candidates under scrutiny.
US FCC steps in
The United States Federal Communications Commission (FCC) recently announced a proposal to increase transparency in political advertisements. If adopted, it would require every entity that files information about television and radio advertisements with the FCC to disclose whether AI was used in creating the content.
Notably, the FCC's proposal does not ban AI-generated content; it only requires disclosure when AI has been used.
The FCC's jurisdiction extends to areas the FEC cannot reach, covering a broader range of political advertising beyond federal candidates. That said, the FEC has raised concerns about the FCC's proposal, pointing to overlapping jurisdictions and conflicting mandates. The FEC's Chairman, Sean Cooksey, has argued that the proposal encroaches on the FEC's exclusive jurisdiction, which could create conflicts with existing laws and irreconcilable differences between the two agencies that may ultimately have to be resolved by the federal courts.
Even so, despite the jurisdictional overlaps and disagreements, the FCC's proposal is a significant step in the right direction.
It's not a panacea after all
Even with US agencies working to rein in the consequences of AI-generated content in political campaigns, the battle is far from won. The proposed regulation mandates disclosure but does nothing to prevent the psychological harm such content can inflict on viewers; the impact on voters' minds deserves attention in its own right.
More unsettling is that the FCC's proposal does not extend to online platforms and streaming services, the very media through which young voters consume information and political advertisements. When platforms growing fastest in popularity among the youth are left out of the regulation, it is only a matter of time before policymakers recognize the need for another set of rules that comprehensively covers every sphere of advertising and information sharing.
What’s our take?
It's simple. The FCC's proposal is no cure-all, but it is definitely a step, if not a leap, in the right direction. It still needs allies in the form of more comprehensive policies covering every medium where political advertisements run and voters' mindsets are shaped. For the moment, though, if the proposal is approved and Congress grants the FCC stronger enforcement powers, a new era of broadcast transparency could soon be ushered in.