European Union AI Governance Regulations – A Landmark or a Lesson?

The European Union (EU) has often been at the forefront of safeguarding its people against the negative implications of emerging technologies. Artificial Intelligence (AI) governance has been high on its agenda, as is evident from the regulations it has drafted for AI technologies. The EU’s proposal has emerged as the most holistic legislation addressing the concerns raised by AI systems and their associated risks, as it introduces a technology-neutral definition of AI systems. It also offers a comprehensive classification of AI systems, each class with its own obligations and market-entry conditions, based on a “risk-based approach” applicable across industries. The Council of the EU aims to ensure that AI systems deployed in the EU market are safe to use, respect the fundamental rights of the people of the EU, and do not violate the EU’s values. However, some policymakers have serious concerns about the draft regulations.

The first concern experts have expressed relates to the risk-based approach. The proposed AI Act places each AI system into one of four categories: unacceptable risk, high risk, limited risk, and minimal risk, depending on the severity of the system’s detrimental impact on the individuals who interact with it. However, the high-risk category – which imposes a significant number of extra requirements on AI systems – also captures various AI systems that do not cause serious violations of fundamental rights. For example, conversational bots that capture users’ health information for claim settlement, query resolution, and fraud detection have been categorized as high-risk under this approach. Policymakers argue that such AI systems should not be generalized as “high-risk” but should instead be classified according to the actual risk they pose to users, putting a spotlight on the need for clearly defined terms in the draft regulations.

Second, a coalition of the French, German, and Italian governments recently proposed that companies be allowed to self-regulate AI systems for their intended use cases. Evidence from the social media and media advertising industries has shown how self-regulation falls short of its intended objectives and helps companies evade responsibility by withholding data on high-risk AI systems. It is therefore unsettling that the EU’s AI Act has been grounded in this principle of self-assessment, following the European Commission’s proposal in 2021. More discomforting still, the 2021 proposal required such self-assessments only for AI systems with an “intended purpose”, which would leave technologies like GPT-3 outside the ambit of AI governance. Although the European Parliament brought such technologies under the regulatory framework in 2023, the coalition of Italy, Germany, and France advocating for the 2021 proposal is a major concern, and it highlights the need to build multinational consensus on regulatory frameworks.

Third, inconsistencies among the drafts of the EU’s different institutions also need to be resolved. The European Parliament’s draft text bans AI systems that attempt to recognize people’s emotions through facial detection, while the European Commission and the Council of the European Union do not advocate such a ban. Unlike the drafts from the other two bodies, the European Parliament’s draft also escalates social media recommendation models to the “high-risk” category, thereby mandating additional scrutiny of how they work. Its proposed provisions regulating generative AI models on copyright and data-privacy issues will also face stiff opposition from the tech industry. Legislative friction among the institutions is therefore another challenge that needs to be addressed.

All in all, the European framework for AI governance is a landmark in that it spearheads the concerted effort to regulate emerging technologies from a human-centric perspective. However, it also offers crucial lessons for other countries: build consensus among institutions, and do not delegate the moral responsibilities around AI to the prospective violators of AI ethics.

 
