Recent data breach in Australia should sound the alarm

What was long feared has finally materialized. Civil society has long worried about the misuse and theft of private data held by AI systems. Yet numerous AI applications have been floating in the market without well-designed security and ethical guardrails. What’s the big deal?

The big deal is the threat that looms over these AI systems around the clock: a breach of the fortress these systems claim to have built around user data. No matter how much AI companies assure us of their security and incident preparedness, it takes a single breach to reduce that confidence to rubble. And that is what has happened recently in Australia, with an AI-powered facial recognition application that has been pervasive in bars, restaurants, shopping malls, and sporting events.


What has indeed happened?

Well, here it is… The facial recognition data of many Australian users of Outabox has landed in the hands of malicious actors. Outabox is an Australian company that rolled out facial recognition kiosks to scan visitors and record their temperature during the COVID-19 pandemic. The kiosks have also been deployed in casinos to detect problem gamblers who have voluntarily enrolled in a self-exclusion initiative. Last week, a website named “Have I Been Outaboxed” emerged, claiming to have been launched by Outabox employees based in the company’s office in the Philippines. Though Outabox does indeed have an office in the Philippines, the hackers who launched the website were not necessarily employees of the company. The site allowed visitors to simply enter their name and check whether their information had been included in the company’s database. That database was governed by slipshod internal controls, with user data stored in an unsecured spreadsheet, leaving it open to exploitation.

This massive data breach sounds the alarm over the utter lack of safety of privacy-invasive facial recognition systems deployed in casinos, restaurants, sports events, and clubs. And if you think the malevolent website exposed only facial recognition data, just hold on! The website has graciously (sarcasm intended, obviously) exposed facial recognition biometrics, signatures, phone numbers, driver’s licence scans, club membership data, addresses, birthdays, slot machine usage, and club visit timestamps of customers of IGT, a leading supplier of gambling machines. Even the signature, photograph, and redacted driver’s licence of one of Outabox’s founders has been posted online. One can easily imagine how mischievous parties could exploit this wealth of data for their ulterior motives.

The perpetrators have justified their actions by claiming that they went unpaid for several months as Outabox employees, and that breaching Outabox’s database is a form of protest against the company’s decision to outsource jobs to Asian countries for cheaper labour. However unethical and unreasonable that sounds, the blow was dealt severely, and the company could initially do little to contain the damage.


What next?

A 46-year-old man has been arrested in a suburb of Sydney on charges of blackmail. An investigation has begun under the New South Wales Crimes Act, covering the offences of blackmail and possession of personal information for unlawful purposes. West Tradies Club in Mt. Druitt, Fairfield RSL, and City of Sydney RSL are among the affected clubs, and police have asked affected organizations and individuals alike to wait patiently for further information from the authorities. Local authorities are also working to get the site pulled down from the web at the earliest. Moreover, some cybersecurity experts doubt the hackers’ claims of possessing confidential user data; it is possible that the hackers hold no biometric data at all, contrary to what they claim on their website.


Are we prepared for the next?

Seems not! Sadly, despite all the security measures and privacy laws we have built so far, breach incidents like this shake our preparedness to the core. We still need more stringent privacy reforms and inescapable guidelines and limits on facial recognition technology. Till then, view every AI system with a hint of suspicion and doubt, and share only as much data as you would be comfortable sharing publicly.

Tomorrow Avatar

Arijit Goswami
