MEPs are also calling upon EU lawmakers to ban social scoring systems and tackle algorithmic bias
Members of the European Parliament have voted in support of banning biometric mass surveillance entirely, stating that individuals should be monitored only when suspected of a crime.
This potential win for public privacy would put an end to the automated recognition of people in public spaces through their biometric features, and EU lawmakers are also being called upon to ban AI-assisted predictive policing and the use of private facial recognition databases.
A line in the sand
The resolution adopted by MEPs, which passed by 377 votes to 248, draws a clear line in the sand on public privacy, and will be discussed further during the upcoming negotiations that will shape the finer details of the Artificial Intelligence Act.
Ruling out ratings
In addition to placing a blanket ban on biometric mass surveillance in publicly accessible spaces, MEPs are also on a mission to rule out the use of social scoring systems – a thoroughly dystopian idea that would see individuals given a rating based on their personality or behavior.
In April 2021, the European Commission, the EU's executive, proposed banning certain high-risk uses of AI-assisted technology. Whilst this proposal would've also nixed social scoring, numerous MEPs quickly echoed civil society's concerns and criticized the legislation for not going far enough.
Keeping an eye on our rights
Artificial intelligence is no longer the stuff of sci-fi novels and blockbuster thrillers – it's a part of our present, and is already being used in (and on) the public across Europe, much to the concern of privacy advocates.
Similarly, biometric systems have become increasingly commonplace in our day-to-day lives, whether we're traveling, shopping, or accessing our phones and buildings.
One of the most worrying aspects of AI-assisted technology is its potential for algorithmic bias. MEPs have highlighted condemnable instances of identification systems misidentifying people of color, LGBTQ+ individuals, women, and seniors more often than other groups. Additionally, in Germany, facial recognition systems have been deployed outside LGBTQ+ spaces, religious venues, lawyers' offices, and GP surgeries – all without adequate reason.
This tactless targeting of marginalized groups demonstrates how AI can be used to quash self-expression and uphold social oppression, and generally make us all a lot more wary of going about our lives authentically.
As such, MEPs have concluded that AI-powered systems must always have human supervision, and that a human operator must always have the final say in law enforcement contexts. MEPs have also condemned AI-assisted judicial decisions and pushed to ban the practice, hoping to quell any budding systemic biases in an already repressive justice system.
A promising pushback
All in all, it's encouraging to see MEPs take such a firm stance in opposition to biometric mass surveillance. Some may argue that this technology plays a huge part in assisting law enforcement efforts and even makes our own lives that much more convenient. These factors, however, should not justify instances where biometrics are used to indiscriminately surveil individuals in public spaces or treat innocents as suspects without any provocation.
If left unchecked and unchallenged, government bodies, law enforcement agencies, and private corporations could feasibly use biometric surveillance and AI-powered systems to chip away at public privacy, and prop up a society founded on unwarranted and automatic discrimination – which would be disastrous for us all.