Humanizing AI for Inclusive Technological Development

The emergence and spread of Artificial Intelligence (AI) across industries and geographies are giving rise to new use cases from which humanity stands to benefit. However, certain applications of AI also threaten to disrupt the social fabric because they are misaligned with human rights. This warrants a study of the interplay between AI and human rights in a multicultural and legal context.

Understanding AI’s Role in Human Rights and Cultural/Legal Context

Human intelligence is the mother of artificial intelligence. The intrinsic biases that affect human decision making therefore creep into AI algorithms, producing exclusion in the digital technology space. The bias and discrimination found in AI systems are rooted in social systems, and they are now giving rise to a new form of discrimination against vulnerable communities at the hands of technology [1]. Prejudices embedded in the minds of technologists eventually make their way into algorithms, propagating biases from the physical world to the digital one. Using AI to generate text autonomously can produce discriminatory results because of biases inherent in the training dataset. Image classification by AI-enabled facial recognition systems is also a cause for concern: past examples show how Black people were treated unfairly by AI-powered criminal justice systems [1]. The loss of jobs to fast-spreading AI technology is another human rights concern, among many other issues.

 

AI’s Potential Implications for Human Rights Issues

Some use cases of AI hold promise for advancing the cause of human rights in ways unimagined earlier. For example, AI can enable automated teaching and grading, a potentially huge leap towards making education accessible to students in underprivileged areas where teachers are in short supply. Consumer analytics and smart remediation processes in financial services can help increase financial inclusion in areas currently untouched by the banking system [2]. Because AI is devoid of human emotion, it could also help ensure the right to a fair public hearing and objectivity in trial processes. However, experts are equally worried about other aspects of AI's impact on human rights.

At the same time, AI technologies have the potential to negatively affect human rights, legal infrastructure and democratic institutions. The socio-technical nature of AI is important to understand: these sophisticated technologies operate in human contexts to fulfil human-defined goals.

For example, AI can be used to predict disease risks in a population, understand human behavior and infer people's consumption preferences. Though these use cases sound beneficial, they are prone to bias because the technologies reflect the values of the people who developed them [2]. Harmful biases are reproduced when AI treats a particular demographic differently, making discriminatory predictions without any objective reason. The opacity of the underlying algorithms also makes AI systems difficult to assess, which runs counter to people's general right to information [2].

The complexity of AI systems may also interfere with a person's right to a fair trial where lawsuit-related decisions are made by AI. Though AI may reduce arbitrariness and discriminatory action by human jurors, it may undermine the decisional independence and authority of the judiciary [2]. In extrajudicial circumstances, AI can also predict human behavior and identify alleged transgressors in public through facial recognition and the analysis of facial expressions [2]. Even health parameters – such as pulse rate and breathing rate – collected by wearable devices can be exploited by a government to conduct unjustified surveillance of its citizens and to quell the dissent that is fundamental to a functioning democracy.

Finally, the biases, inequalities and discrimination entrenched in the datasets on which AI models train can lead to discriminatory decisions against particular communities and can undermine the social and economic rights of people in a society.
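How entrenched dataset bias translates into discriminatory decisions can be illustrated with a deliberately simple sketch. The scenario, group labels and rates below are entirely hypothetical: a naive model trained on historically skewed decisions simply learns to reproduce the skew.

```python
import random

random.seed(0)

# Hypothetical historical loan decisions: group "A" was approved far more
# often than group "B" for otherwise comparable applicants.
history = [("A", 1) if random.random() < 0.6 else ("A", 0) for _ in range(500)]
history += [("B", 1) if random.random() < 0.2 else ("B", 0) for _ in range(500)]

def majority_outcome(group):
    """A naive 'model' that predicts the majority historical outcome
    for each group - and thereby reproduces the embedded bias."""
    outcomes = [y for g, y in history if g == group]
    return int(sum(outcomes) >= len(outcomes) / 2)

print("Prediction for group A:", majority_outcome("A"))  # learns to approve
print("Prediction for group B:", majority_outcome("B"))  # learns to reject
```

Nothing in the code is malicious; the disparity comes entirely from the training data, which is exactly why curating and auditing datasets matters.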

 

Importance of Inclusivity and Diversity in AI Applications

Ensuring diversity in training datasets is of utmost importance for maintaining data quality. By including researchers and developers from diverse backgrounds, we can build ethics and transparency into AI models. We also need to collect data on how present AI models affect different communities and segments of the population around the world, so that we can tailor the models to be inclusive. AI systems can be likened to economists in this respect: just as economists obtain different results depending on the methodologies they apply, AI systems may arrive at different perspectives when analyzing data and literature from diverse sources [3].

Dataset availability needs to be decentralized so that no single entity or handful of organizations holds a monopoly on the data that trains AI models [3]. In short, the researchers, developers, validators, dataset providers, assessors and consumers should not be homogeneous.

 

Case Studies of Unethical Outcomes by AI

Examples of negative implications of AI technologies have emerged from around the world, a few of which we discuss below.

1. AI-driven unemployment

Oxford academics Carl Frey and Michael Osborne estimated in their seminal study that 47% of U.S. jobs are at risk from future AI-driven automation. This is already visible: Changying Precision Technology in China laid off 90% of its workforce after replacing workers with machines, and Adidas is moving towards robot-only factories in pursuit of efficiency. AI-based virtual assistants may soon take over the jobs of personal assistants, translators, customer service representatives and many others. The social and economic rights of people are at risk.

2. Opaque surveillance

Freedom of movement is enshrined in many international agreements and is a recognized fundamental right in many countries. However, a report by the Carnegie Endowment for International Peace found that approximately 75 of the 176 countries surveyed are actively using AI for security and internal law-and-order purposes [1]. Such AI applications are suspected of being discriminatory towards Black people, refugees and migrants. The disparate impact of these technologies on particular segments of the population shows how predictive policing tools can be riddled with both conscious and implicit bias. The U.S. has already deployed facial recognition software at the US-Mexico border and in armed drones over Pakistan and Afghanistan. An investigation by The Intercept revealed that approximately 9 out of 10 people killed in drone strikes were not the intended target [4].

3. Altruism related concerns

The United Nations High Commissioner for Refugees (UNHCR) collects biometric data from refugees as an objective means of identification. However, concerns have been raised that the data collected from Rohingya refugees is being used discriminatorily for repatriation rather than for cultural integration in the host countries [1].

 

Universal Human Rights Principles as Foundation for AI Ethics

AI governance stands to benefit immensely from a multidisciplinary approach that draws lessons from statistics, human rights law, philosophy, sociology, audit principles and stakeholder theories [5]. International human rights standards can serve as an existing, flexible baseline that brings direction and accountability to AI governance.

1. Universality and inalienability

Article 1 of the Universal Declaration of Human Rights states, "All human beings are born free and equal in dignity and rights." AI systems must be designed to be inclusive and non-discriminatory towards every segment of the global population, irrespective of its size or cultural background.

2. Interdependence and Interrelatedness

Each human right depends on the others to help an individual realize dignity through the fulfilment of physical, psychological and spiritual needs. Therefore, AI systems must not act against the religious, cultural or socioeconomic standing of any individual or community.

3. Indivisibility

No human right is more important than another. Therefore, AI tool developers must not prioritize one use case or outcome over another. All facets of AI governance are equally important, and denial of one right impedes the enjoyment of the others.

4. Participation and inclusion

All individuals have the right to participate in, and access information about, the decision-making processes that affect their well-being. A rights-based approach to AI governance therefore requires a high degree of participation by all communities, civil society, youth, indigenous populations, minorities and all other stakeholders.

5. Equality and non-discrimination

Everyone has the right to be treated as an equal. No one should suffer discrimination on the basis of color, caste, creed, gender, political affiliation, economic status or any other ground. This principle of non-discrimination needs to be integrated into AI model design.

6. Accountability and Rule of Law

States and duty bearers are answerable for the observance of human rights. Applying this principle to AI governance, we can hold developers, organizations and international standards bodies accountable for complying with the legal norms and standards enshrined in AI governance models; where they fail to do so, people can institute proceedings for appropriate redress before a competent adjudicator under the rules provided by law.

 

Cultural Plurality of AI Governance

Cultural differences across nations can impede the development of an overarching global AI governance framework and can enable malign actors to disregard demands for ethical values in AI. Malicious actors may even justify the violation of common ethical principles by deferring to a local practice or tradition of historical significance [6]. AI governance policies formulated exclusively around certain cultures may not be adopted by all nations, further obstructing international cooperation on AI ethics.

Global normative standards for AI ethics and governance must be open and responsive to cultural values. Since cultural values and norms change over time, AI governance models must remain open to iterative negotiation and the reconstruction of arguments [6]. This requires scholars and practitioners of AI ethics to work with multicultural working groups when deciding whether specific normative standards are appropriate for evaluating AI technologies in cross-cultural settings. Local support for AI governance models should be sought by integrating local communities' values and rights into model design and evaluation.

 

Recommendations for Rights-based AI Governance

A 2019 study by the European Parliament outlined multiple policy options for governing algorithmic transparency and accountability, including awareness campaigns, education, watchdogs and whistle-blowers [7]. Regulatory oversight, legal liability and accountability in public-sector algorithmic decision-making are important policy measures, and algorithmic impact assessments and an algorithmic transparency standard need to be developed.

From a standards viewpoint, the IEEE P7003 Standard for Algorithmic Bias Considerations offers hope for comprehensive AI ethics-related standards [7]. The AI Fairness 360 open-source toolkit provides over 70 fairness-related metrics to help users examine and report discrimination in machine learning models and mitigate its negative impact [7].
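To make the idea of a fairness metric concrete, here is a minimal, self-contained sketch of disparate impact, one of the kinds of metric toolkits such as AI Fairness 360 report. The function and data below are our own illustrative constructs, not the toolkit's API; a ratio below 0.8 is commonly flagged under the "four-fifths rule".

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A value near 1.0 indicates parity; a value below 0.8 is commonly
    flagged under the "four-fifths rule" used in U.S. employment law.
    """
    def rate(group):
        # Fraction of members of `group` who received a favorable outcome.
        selected = [y for y, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Hypothetical screening outcomes (1 = favorable decision).
outcomes = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(disparate_impact(outcomes, groups, "B", "A"))  # 0.2 / 0.8 = 0.25
```

A score of 0.25 here would fail the four-fifths test badly; fuller toolkits add many more metrics (statistical parity difference, equalized odds, and so on) plus bias-mitigation algorithms.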

Contestability must be provided as a right, so that people can raise concerns about the opacity of AI algorithms that jeopardize the social fabric. Moreover, legal personhood could be conferred on AI solutions so that the laws of the land apply to them and their severe impacts on the population can be contained [7]. Countries should take inspiration from the European Commission's Communication on AI for Europe (2018), which advocates prioritizing the modernization of education at all levels so that people have opportunities to acquire the skills they need in an AI-driven world [7]. Finally, the criminal liability of AI entities must be determined, and data provenance and consent must be treated as matters of particular importance when framing AI-related laws.

 

In Summary…

Integrating human rights and ethical values into AI design, development and deployment is of utmost importance for making the technology safe and inclusive for all. A lack of empathy towards human rights can impede the adoption of AI tools across regions and can also leave loopholes in legal frameworks for organizations to exploit. Therefore, to bring transparency, accountability and responsibility to the process, we need to integrate a human rights-based approach into AI governance.

 

References

  1. Baweja, S., Singh, S., "Beginning of Artificial Intelligence, End of Human Rights", London School of Economics (2020). https://blogs.lse.ac.uk/humanrights/2020/07/16/beginning-of-artificial-intelligence-end-of-human-rights/. Accessed 26 September 2023
  2. Leslie, D., Burr, C., Aitken, M., Cowls, J., Katell, M., Briggs, M., "Artificial Intelligence, Human Rights, Democracy and Rule of Law", The Alan Turing Institute (2021). https://rm.coe.int/primer-en-new-cover-pages-coe-english-compressed-2754-7186-0228-v-1/1680a2fd4a. Accessed 26 September 2023
  3. Stoltzfus, J., "Data Quality: Why Diversity is Essential to Train AI", Techopedia (2023). https://www.techopedia.com/why-diversity-is-essential-for-quality-data-to-train-ai/2/34209. Accessed 26 September 2023
  4. Feldstein, S., "The Global Expansion of AI Surveillance", Carnegie Endowment for International Peace (2019). https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847. Accessed 26 September 2023
  5. Jones, K., "AI governance and human rights", Chatham House (2023). https://www.chathamhouse.org/2023/01/ai-governance-and-human-rights/03-governing-ai-why-human-rights. Accessed 26 September 2023
  6. Hong, P., "Cultural Differences as Excuses? Human Rights and Cultural Values in Global Ethics and Governance of AI", Philos. Technol. 33, 705–715 (2020). https://doi.org/10.1007/s13347-020-00413-8
  7. Rodrigues, R., "Legal and human rights issues of AI: Gaps, challenges and vulnerabilities", Journal of Responsible Technology (2020). https://doi.org/10.1016/j.jrt.2020.100005

 
