When algorithms man our borders

Drones in the air, cameras on the fences and robots on the ground – international borders have become the new frontier for innovation in Artificial Intelligence (AI). Governments are adopting Algorithmic Border Governance (ABG) with such speed and conviction that AI seems set to sit at the centre of migration dynamics. From being a companion to border patrol guards to becoming the sentry itself, AI has come a long way in turning international borders into a trial ground for its invasive monitoring applications.

 

What’s happening at the borders?

Well…a lot, to be honest! In various parts of the world, the United Nations High Commissioner for Human Rights has found iris-scanning machines deployed in refugee camps and AI-powered lie detectors installed at borders. These technologies clearly do not operate with entirely benevolent intent. The Customs and Border Protection division of the United States Department of Homeland Security has deployed Babel X – an AI-based tool – that links an individual’s social security number to their location and social media accounts.

The deployment of AI does not stop at software alone; AI-powered physical robots are also keeping an eye on refugees at all times.

For example, autonomous robotic dogs have been deployed on the US-Mexico border as reinforcements to ward off illegal immigrants. Frontex – the European Border and Coast Guard Agency entrusted with managing the European Union’s (EU) external borders against cross-border crime – has also allegedly deployed drones to intercept migrants in the Mediterranean Sea instead of rescuing them. The violation of immigrants’ rights with the connivance of AI is gradually strengthening its hold across the world.

Even after entry into a foreign land, migrant workers and people with insecure citizenship status are sometimes subjected to digitally enabled surveillance techniques that can monitor and exploit asylum seekers and refugees. Military-grade biometric sensors, emotion interpreters, facial recognition tools and drone surveillance perpetuate further rights violations for displaced populations. The exclusionary design of these AI surveillance tools is a major concern for the algorithmic governance of international borders.

The buck does not stop with governments; private enterprises are equally responsible for ensuring their AI platforms are not misused against immigrants.

However, private firms seem to be tilting towards the use of AI for surveillance of immigrants. For example, Microsoft has signed a contract worth $19.4 million with ICE for facial recognition and data processing platforms, while Amazon has marketed its facial recognition technology “Rekognition” to police and immigration authorities. The Orlando Police began trials of Amazon’s face-recognition technology despite backlash from within Amazon itself. What shocked the world was the finding by the American Civil Liberties Union that the tool was riddled with bugs and falsely matched innocent people to criminal mugshots. IBM had a furtive engagement with the New York Police Department to develop an AI-powered facial recognition system that could search for people according to their gender, race, age and other personally identifiable parameters. Baidu in China is participating in a civil-military fusion led by the Chinese Communist Party to develop semi-autonomous weapons for the country.

It is, therefore, clear that despite the bias and racism ingrained in these AI tools, private and public sector institutions have been collaborating to deploy AI that can discriminate against immigrants.

 

Does regulation help?

Yes, but only when the policies are comprehensively inclusive. The European Union (EU) AI Act has done a good job of outlining prohibitions on unethical uses of AI and has proposed a framework of oversight and accountability requirements for high-risk AI systems deployed in the EU market. However, the prohibitions on AI systems do not extend to the context of migrants. As a result, potentially discriminatory risk assessment systems and predictive analytics tools that facilitate migrant pushbacks remain permissible in the EU. Even lie detectors are not banned at EU borders, as the prohibition does not apply to the use of emotion recognition tools in the migration context. The list of high-risk systems also excludes fingerprint scanners, biometric identification systems and forecasting tools that authorities can use to curtail immigration. Moreover, the export of potentially harmful surveillance technologies from Europe is also not banned.

While national security concerns often warrant the use of AI for surveillance of immigrants, transparency and accountability about how these systems are used and what their implications can be remain of utmost importance.

 

A human rights-based approach is what we need

Any AI system that aims to help with border patrolling and monitoring refugees needs to be built from a human rights perspective. Considerations of bias and data skewness need to be accounted for by developers, policymakers and governments. Otherwise, AI systems for border patrolling will be no less threatening than what Anduril Industries has created on the border between the United States and Mexico. The company has built a lattice system that can detect motion within a 2-mile radius through 32-foot towers equipped with radars, laser-enhanced cameras and communications antennae to ward off illegal immigrants from Mexico.

Such applications of AI do not resonate with the UN Guiding Principles on Business and Human Rights, which must be placed at the centre of the AI development process. We need an international organization to identify and audit invasive technologies that violate the rights of individual migrants. Though ABG is developing at a rapid pace, there is still time for global institutions to come together and implement a framework that ensures AI is used for ethical purposes alone – not for incriminating innocent migrants and making their lives tougher than they already are.