The company plans to delete over one billion faceprints in light of growing privacy concerns and legal pressure.
In a recent post, Meta's VP of artificial intelligence, Jerome Pesenti, stated that Facebook will no longer be using facial recognition software. The company also plans to delete over one billion faceprints.
Since 2010, Facebook users have been able to use facial recognition technology to automatically identify individuals in photos and videos uploaded to the social media site.
However, mounting legal and political pressure regarding the use of facial recognition software has prompted Meta to declare that the system will be shut down across the platform in the weeks to come.
Changes on the horizon
Meta currently plans to erase the individual facial recognition templates of more than a billion users – as a result, Facebook will be unable to automatically recognize faces in photos and videos.
The identifying faceprints of individuals who opted in to facial recognition will be deleted. Users who didn't opt in to the system have no faceprint to delete, and Pesenti stated that all users are now encouraged to tag their images manually.
As Pesenti put it: "The changes we're announcing today involve a company-wide move away from this kind of broad identification, and toward narrower forms of personal authentication."
A controversial tool
Facebook had allowed its users to automatically tag individuals in photos and videos with facial recognition software since 2010 – hugely streamlining the process, and making it much easier for users to find out when someone else had uploaded a picture of them.
Pesenti noted that the recognition system did far more than this, however, stating that the technology was especially useful to blind and visually impaired users, who had been able to identify friends and family in photos thanks to the names automatically included in images' alt text.
Whilst users will still be able to add alt text and tags to their images and videos, Facebook will no longer automatically suggest them.
This pivot comes, ostensibly, as part of Meta's plan to create an expansive "metaverse" – a virtual world where users can communicate via VR headsets. Pesenti also stated that the company still intends to consider facial recognition technology in instances where users are asked to verify their identity, as a means of protecting people from fraud.
However, Pesenti stressed that the proposed advantages of the software would need to be weighed against "growing concerns about the use of [the] technology as a whole".
Under pressure
The "growing concerns" referred to by Pesenti have, in fact, been steadily boiling for almost a decade. In 2020, Meta paid $650m to settle a class action lawsuit wherein users claimed that the company had produced and stored facial scans without first acquiring consent. Facial recognition technology has also provoked criticism from all corners of the globe, with the UK's Information Commissioner's Office (ICO) even stepping in to address widespread concerns about the technology being used in school cafeterias.
Facial recognition algorithms are also notorious for racial bias, demonstrating markedly higher error rates for people of color, and for Black women in particular. In 2018, a landmark study by researchers Joy Buolamwini and Timnit Gebru found that some commercial algorithms misclassified darker-skinned women almost 35% of the time, while lighter-skinned men were nearly always classified correctly.
These errors can have a massive impact on the lives of the people who are misclassified, particularly when the technology is used by authorities to identify potential suspects. Facial recognition software trained on predominantly white faces can ultimately be responsible for wrongful arrests and police violence.
Posturing or purposeful?
Privacy advocates are now concerned that Meta's recent name change and its plan to phase out facial recognition are little more than a well-timed PR stunt intended to distract from the company's current woes.
Documents shared by ex-employee turned whistleblower Frances Haugen revealed that the company then known as Facebook was well aware of the adverse effects its apps have had on users, particularly teenagers. The documents also showed that Meta had struggled to resolve these issues, or else ignored them outright, continuing a worrying pattern of unreliable data handling and disregard for users' wellbeing.
Unfortunately for Meta and Mark Zuckerberg, it'll take a lot more than a new coat of paint and a handful of promises to assure critics that the company is motivated by a commitment to safeguarding user privacy, not simply by chasing profit.