Why does the Global South need a tailored AI policy?

Whether it is autonomous cars, smart healthcare devices or quantum computers, most development in Artificial Intelligence (AI) is being spearheaded by countries of the Global North. Time and again we hear about an innovation at Harvard, an imaginative machine built at CERN, robotics reinvented at an Australian university, or the frontiers of engineering pushed forward by an invention at the Massachusetts Institute of Technology. So much so that innovation has become synonymous with the Global North, given the sheer magnitude of investment and experimentation in advanced countries.

As the innovation divide between the Global North and the Global South widens almost daily, one is left to wonder whether AI development is truly catering to the needs of the Global South. Are we inventing enough new systems, or testing new machines, in the context of the Global South? Is the Global South prepared to adopt the latest AI inventions, given its drastically different economic, political, social and cultural landscape?

How is the Global South different?

Take, for example, an autonomous car designed for the roads of a sparsely populated Global North nation, or any advanced country where traffic rules are strictly followed. In such a setting, the vehicle can be expected to operate near flawlessly, because the traffic rules encoded in its algorithms are the same ones followed by the other drivers on the road. Now bring that car to a Global South country where roads are congested and people routinely take the liberty of breaking traffic rules. Will an autonomous car designed and tested in a rule-abiding country operate well in a country where traffic rules are treated as flexible guidelines rather than strict instructions?

Here’s another example. Suppose a generative AI tool that creates images from textual prompts is developed in a culturally homogeneous Global North country. When that same tool is deployed in the culturally diverse environment of a Global South nation, it is natural to expect that some of the tool’s output may not be acceptable to segments of the population.

For example, if we ask DALL-E to create an image of an ideal Indian, will it produce the image of a person from north India, the northeast, the south or central India?

Any wise person would know that none of these options is correct, as no single persona fits the description of an ideal Indian. Will DALL-E be able to show such wisdom?

Many middle-income and low-income countries lack the sophisticated infrastructure to develop AI systems in-house. They are therefore relegated to the role of dependent consumers, using AI services designed in distant lands by developers with inadequate knowledge of the socio-economic and cultural settings of the consuming countries.

Such dilemmas abound when it comes to AI development and implementation in Global South nations.

Therefore, policies around AI development and the ethical use of technology need to be framed from a Global South perspective, to account for the diverse contexts of these countries.

Explainable AI policy needs to be pluralist

Part of the hesitation and speculation around AI tools in Global South countries also arises from a lack of clarity on how AI systems operate and how reliable their outcomes are. Decisions taken by an AI system, if they differ from those of a human expert, may not be readily trusted. That is why the explainability of AI systems is of crucial importance. There are no universally agreed-upon standards for explainable AI, which leaves the regulatory landscape fragmented.

This fragmentation leaves room for pluralist and culturally relativist policy frameworks that may resonate better with the local contexts of Global South regions.

However, before proposing pluralist policies, one needs to examine the precedents of pluralist policymaking. Most dominant policy recommendations on explainable AI are rooted in Global North perspectives. Their efficacy is limited to the context of Global North nations, while the diverse contexts of Global South countries go missing. For example, an AI company may publish explanations of how its AI-powered diagnostic healthcare tool works. Yet those explanations won’t help much in a country where people trust meeting a doctor in person far more than relying on a mobile app with little human touch.

Does AI falter in new settings?

Yes, it can, and it does. For example, according to an article by J.O. Effoduh for the Carnegie Endowment, image recognition software misdiagnosed Boran and Sahiwal breeds of cows as malnourished because of their lean build, which had nothing to do with malnourishment. Similarly, Google’s generative AI tool, Gemini, was in the eye of the storm this year after misrepresenting particular races in its prompt-generated images. Meanwhile, Amazon’s Rekognition facial recognition tool was found by the American Civil Liberties Union to be error-prone, falsely matching innocent people with mugshots of arrestees. Such cases highlight the problems that crop up when AI systems are not developed and tested with an understanding of the diverse cultural, linguistic, social and environmental contexts of different countries.

We need humans to explain AI

Humans’ capability to interpret and inform needs to be tapped to make AI explainable. When it comes to implementing AI tools in diverse contexts, it is critical to involve those who are best at explaining complex things to humans: humans themselves.

We need certified experts to communicate factual information about AI tools, while contextualizing their applicability and benefits for Global South settings.

Moreover, AI developers must avoid creating one-size-fits-all general-purpose AI systems and instead develop specialized AI applications tailored to the needs of specific nations and social settings. And rather than relying on textual guides alone, visuals, infographics and storytelling methods should be used to communicate effectively how AI systems work and how people will be affected by them.

What’s next for Tomorrow?

AI is growing more sophisticated with each passing day. It is high time researchers and policymakers stepped up to ensure that AI tools are contextualized to the cultural, social, economic and linguistic landscapes of Global South countries. Since each Global South country is ethnically, linguistically and culturally distinct, a single universal AI policy framework is unlikely to be effective. Rather, we need an umbrella framework that offers broad policy prescriptions, under which each nation frames its own localized policy that resonates with the values outlined in that framework. We need human involvement to make AI explainable to those who understand it least, and continuous oversight by a multistakeholder institution to ensure AI stays fruitful and meaningful for our diverse populations.
