A group of digital rights organizations have filed a series of legal complaints against the facial recognition company Clearview AI for its practice of scraping public data from the internet to power its invasive biometric identification systems.
The group of activists includes Privacy International, the Hermes Center for Transparency and Digital Human Rights, Homo Digitalis, and noyb (the European Center for Digital Rights).
Those organizations contend that Clearview AI's use of an "automated image scraper" tool abuses people's right to data privacy and is building an illegal biometric database with serious ramifications for the people whose images it contains.
Clearview AI has long been considered a highly controversial company, due to the way that it can leverage its systems to identify just about anybody. It developed its sophisticated tracking technology using public images scraped from the internet without the consent of those individuals.
On top of this, Clearview now claims to have "the largest known database of 3+ billion facial images".
This is concerning because the company has already entered into contracts with private companies and law enforcement agencies in the US and around the globe. What's more, the facial recognition tech can even be paired with Augmented Reality glasses, giving police officers the ability to identify people in real time as they walk through public spaces.
Shady past
Clearview AI's history is without a doubt extremely shady. News of the company and its sophisticated tools first broke in January 2020, when the New York Times uncovered its services being sold to government agencies and private corporations for identification purposes.
Until that point, the company had worked within a purposeful shroud of secrecy – compiling publicly available photos to train up its algorithms and become a leading provider of facial recognition tech.
Now, the company's secretive operations and far-reaching influence are being brought into question by a number of leading organizations, which claim that its service was created in a highly immoral way that puts citizens at risk and flies in the face of existing privacy protections.
Speaking about the legal challenges that have now been brought against the company, Ioannis Kouvakas, Legal Officer at Privacy International, said:
European data protection laws are very clear when it comes to the purposes companies can use our data for... Extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users.
Against the spirit of the net
The coalition of organizations that have brought the legal complaints against Clearview contends that the work the company has engaged in to create its services, and the capabilities it is now selling, are in direct conflict with the very nature of the internet. Lucie Audibert, Legal Officer at Privacy International, summed it up perfectly:
Clearview seems to misunderstand the Internet as a homogeneous and fully public forum where everything is up for grabs. This is plainly wrong. Such practices threaten the open character of the Internet and the numerous rights and freedoms it enables.
Alan Dahi, Data Protection Lawyer at noyb concurred with this opinion, stating that:
Just because something is 'online' does not mean it is fair game to be appropriated by others in any which way they want to - neither morally nor legally. Data protection authorities need to take action and stop Clearview and similar organizations from hoovering up the personal data of EU residents.
Long-lasting repercussions
Hoovering up people's online photos for the purposes of creating technologies that can then identify those people in real-time while out in public is an extremely concerning and invasive practice that no citizen would ever have thought possible when uploading their images to the internet.
The public nature of the internet means people's faces are constantly being uploaded, not just by themselves but also by others, and it's vital that this openness is not exploited to create far-reaching privacy and security risks for individuals in the name of profit.
The idea that global police forces can leverage tools created in a shroud of secrecy to engage in surveillance is extremely concerning – particularly when that technology was developed using sensitive biometric information taken from people without their knowledge or consent.
Facial recognition technology creates a biometric map of a subject's face that can be used to identify that person both in photos and in public for the rest of their life, allowing them to be tracked in real-time in any public space by any private corporation or government agency.
This level of tracking, and the biometric information it involves, creates overwhelming privacy and security risks for all data subjects involved – resulting in a highly sensitive cache of biometric data that is vulnerable to leaks and breaches. Privacy International stated:
Due to its extremely intrusive nature, the use of facial recognition systems, and particularly any business model that seeks to rely on them, raise grave concerns for modern societies and individuals' freedoms.
Regulators now have three months to respond to the complaints, and we can only hope they will rule that Clearview's practices are in breach of citizens' existing rights within Europe. According to PI, this would result in "meaningful ramifications" for Clearview's global operations.
In the meantime, anybody in the EU who is concerned that their face and biometric data are being held and processed by Clearview AI can make a formal request to have their information removed from the results of searches made by its many clients. To do this, simply send an email to [email protected] (PI provides more information about making this request).