“India is not a private nation. Private information is often shared freely… and public life is organized without much thought to safeguarding personal data.” So stated the Indian government in 2010. Similarly, the Chinese state’s intervention into the private lives of its citizens through the social credit system has been decried by observers with a liberal perspective, while some Chinese researchers have defended it as a positive means of promoting moral behavior among citizens. Though the apex court of India later ruled that privacy is a fundamental right of citizens, and the Chinese government has explicitly stressed the need for Artificial Intelligence (AI) technologies to respect human rights, these two countries are not isolated examples. Cultural nuances shape the way ethical guardrails are constructed and policies are framed around AI and other digital technologies.
Human rights approach to AI governance
The human rights approach to regulating and governing AI aims to create a universally enforceable framework that respects the human rights of all individuals across all geographies. Its advocates claim that it offers a normative standard for the discourse on the ethical, social and political implications of AI technologies. The appeal of a uniform human rights approach also rests on the legal and institutional measures it provides for the prevention and redressal of the negative effects of AI tools. All in all, proponents of this unified approach believe that the quest for the Holy Grail of a universal code of AI ethics can conclude in the Universal Declaration of Human Rights, the European Convention on Human Rights and other such frameworks, which define the treatment every person deserves in all their interactions, whether with living or non-living entities.
Is this approach inclusive?
Short answer: “no”. The long answer lies in the cultural nuances across countries, which undercut the inclusivity of AI governance models built on the human rights approach. Proponents of the human rights perspective often assume that it is readily applicable in non-western contexts. However, despite the explicit normative standards established by human rights frameworks, diverse and often conflicting interpretations of specific human rights persist. Moreover, global power asymmetries and geopolitical situations can turn human rights into a tool serving the interests of the powerful. This multiplicity of interpretations, combined with local differences in how people perceive the treatment meted out to them, makes it difficult to pinpoint what adheres to or violates human rights in diverse local contexts. The result is a plurality of moralities across cultures when assessing the ethical issues of information technology. A common human rights approach may therefore not align with the different moral paradigms born of the diverse practices, beliefs and experiences of a cultural group. Ethical universalism can thus be at odds with cultural norms and risks promoting dogmatism and intolerance. For example, the Chinese social credit system is viewed by many liberals as a dystopian surveillance apparatus, while many Chinese citizens view it as a means of promoting social order and trust. Such differences in the perspectives of the users of AI technologies underscore the interplay between ethics, culture and AI governance.
Ethical paradigms are also at odds with each other in this regard. While the utilitarian framework prioritizes maximizing overall happiness, the deontological approach demands adherence to a set of moral principles. This is the challenge with normative ethics: adherence to a common set of AI governance principles will not align with the cultural nuances of population segments that would rather flourish without an imposed universal framework.
What is even more unsettling is the critical concern of cultural biases creeping into AI systems. AI algorithms are trained on large datasets that may be riddled with biased language and content, including gender, racial and cultural stereotypes. The output of an AI-based tool that looks safe to people of one culture may be offensive to another. Such issues lay bare the futility of a universal human rights approach.
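To make the mechanism concrete, the sketch below shows one way such skew can be surfaced before a model is ever trained: a simple audit that counts sentiment labels across culturally marked terms in a training set. The corpus, keywords and skew here are entirely invented for illustration; a real audit would run over millions of examples and far subtler markers.

```python
from collections import Counter

# Toy corpus of (text, sentiment label) pairs standing in for a real
# training set. The imbalance below is deliberate, to illustrate how a
# dataset can encode a cultural bias before any model learns from it.
corpus = [
    ("the festival food was wonderful", "positive"),
    ("the festival food was strange", "negative"),
    ("their traditional dress is elegant", "positive"),
    ("their traditional dress is odd", "negative"),
    ("western business attire looks professional", "positive"),
    ("western business attire looks smart", "positive"),
]

def label_skew(corpus, keyword):
    """Count sentiment labels among examples mentioning `keyword`."""
    return dict(Counter(label for text, label in corpus if keyword in text))

# Examples mentioning "traditional" split evenly between labels, while
# "western" appears only with positive labels -- a skew a model trained
# on this data would likely absorb and reproduce.
print(label_skew(corpus, "traditional"))  # {'positive': 1, 'negative': 1}
print(label_skew(corpus, "western"))      # {'positive': 2}
```

A model trained on such data inherits the asymmetry silently, which is why audits of this kind are typically proposed as a step in culturally sensitive AI design rather than an afterthought.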
What can be done?
Cultural sensitivity in AI design is crucial. Ethical frameworks for AI need to be developed to ensure cultural plurality in algorithm design. These frameworks must be flexible enough to adapt to different contexts while upholding fundamental ethical principles. Opening the discussion around AI governance to the world's different cultures that are exposed to digital technologies can bring in diverse philosophical perspectives and mitigate the risk of intellectual domination by people from technologically advanced countries.
International institutions that frame AI ethics policies for an inclusive future need greater geographical representation of policymakers, philosophers, culture experts and lawmakers to design inclusive guardrails for AI. Engaging the public in discussions about AI governance is equally vital: public input offers insight into how cultural and ethical values intersect with AI development.
The need for openness and responsiveness calls for collaboration between AI ethics and governance experts and individuals from different cultural backgrounds when determining whether specific ethical standards are suitable for assessing AI technologies in diverse cultural contexts. This can help prevent the inappropriate manipulation of cultural distinctions and strengthen the ethical foundations of the human rights approach (HRA) and other methodologies for the global ethics and governance of AI. To illustrate, the HRA and similar methodologies can integrate normative justifications for concepts like the right to privacy from Hindu and Islamic traditions, or Confucian principles of state non-interference. Furthermore, local endorsement is crucial to substantiate the ethical assessment of AI technologies. It can be earned by aligning with the values of local communities to justify ethical principles and rights, or by demonstrating that upholding these principles and rights will yield beneficial outcomes for them.
Way forward
Cultural plurality in AI governance presents a complex but not insurmountable challenge. Embracing diversity, fostering dialogue, and developing ethical frameworks that respect cultural nuances are essential steps in navigating the evolving AI landscape. Ultimately, a cultural approach to AI governance acknowledges the richness of our global society and the need to balance technological progress with respect for diverse values and ethics. By doing so, we can harness the full potential of AI while upholding the cultural pluralism that defines our world.