
EU to propose regulations that limit use of AI in public spaces

The EU is preparing to publish draft legislation that would regulate the use of artificial intelligence (AI) in society. The laws seek to limit the use of AI for "indiscriminate surveillance" and ban the use of AI in systems that can be exploited to influence or manipulate human behaviors.


If passed, the proposed legislation would still not come into full force for a few years. However, the content of the regulations could have a significant effect on the use of AI in Europe for many years to come.

So, are the proposals robust enough to protect citizens' fundamental human rights?

Baby steps

In January 2020, the EU published a white paper that laid out proposals for regulating the artificial intelligence used to track citizens in public spaces. 

Public consultation on the white paper resulted in over 1250 replies and submissions from "interested stakeholders from the public and private sectors, including governments, local authorities, commercial and non-commercial organizations, experts, academics and citizens".

According to ECHAlliance, this included contributions "from all over the world, including the EU's 27 Member States and countries such as India, China, Japan, Syria, Iraq, Brazil, Mexico, Canada, the US and the UK". Now, the finalized proposals are set to be published on Wednesday, after which they will be debated in the European Parliament.

In drafts leaked ahead of publication, several sections have raised concerns because of ambiguous language and apparent loopholes. These, it is feared, could allow AI to be exploited by military and government authorities in ways that harm citizens.


EU values

According to Politico, which saw the draft last Tuesday, the EU is working toward a "human-centric" approach to AI regulation that permits the positive use of AI.

This, the European Commission hopes, will allow the EU to compete with America and China on technological progress – while also protecting citizens by banning uses of AI deemed "high-risk".

To these ends, the leaked draft restricts the use of AI for the "monitoring and tracking of natural persons in digital or physical environments, as well as automated aggregation and analysis of personal data from various sources". It also seeks to limit the use of AI when it infringes on EU "values" and human rights.

To achieve this, the proposed regulations ban AI that manipulates human behavior, negatively affects citizens' ability to form opinions or make decisions, or causes them to act to "their own detriment".

The draft also proposes bans for AI in social scoring systems like those used in China, and for "indiscriminate surveillance applied in a generalized manner to all natural persons without differentiation".

Finally, the draft proposal would restrict the use of AI in predictive systems that can be exploited to target a person's (or a group of people's) "vulnerabilities".


Too many loopholes?

The proposed AI regulations would be the first of their kind anywhere in the world – and can certainly be considered a step in the right direction. That said, experts are concerned about the ambiguity of the wording and the room it leaves for interpretation.

In tweets posted last Wednesday, European policy analyst Daniel Leufer asked:

How do we determine what is to somebody's detriment? And who assesses this?

Leufer highlighted that there are four prohibitions in the draft that "are very very vague, and contain serious loopholes".

Specifically, Leufer highlights prohibitions that do not apply if "such practices are authorized by law and are carried out [by public authorities or on behalf of public authorities] in order to safeguard public security".

Leufer is worried that as a result of these exceptions "the detrimental manipulation of human behaviour, the exploitation and targeting of people's vulnerabilities, and indiscriminate mass surveillance can all be carried out to safeguard public security".

This is damning and ultimately leaves too many gray areas – an opinion seconded by the European Centre for Not-for-Profit Law (ECNL) in comments made to the BBC. Like Leufer, the ECNL stated that there remain "lots of vagueness and loopholes" in the draft proposals, which appear to make them unfit for purpose.

That said, Leufer was quick to point out that the leaked 80-page draft was a version of the proposal that dates back to January, which he hopes means it has "significantly progressed since". 

High risk 

On Wednesday we will finally gain clarity over the EU's proposals. What we know thus far is that certain high-risk uses of AI will be identified and restricted in the proposal.

As a result, any organizations that develop AI that is prohibited – or that fail to provide proper information about their AI – could face fines of up to 4% of their annual global revenue.

High-risk definitions listed in the leaked draft include:

  • Systems that establish priority in the dispatching of emergency services
  • Systems that determine access to "educational or vocational training institutes"
  • Recruitment algorithms
  • Systems that evaluate people's eligibility for credit
  • Systems that make individual risk assessments
  • Crime-predicting algorithms

Overall, it is encouraging to see the European Commission defining high-risk AI classifications. However, substantial concerns will inevitably remain if Wednesday's draft does not also regulate the use of AI in cases that fall outside those high-risk categories.

Leufer contends that any publicly available database of AI uses must include "all public sector AI systems, regardless of their assigned risk level" if it is going to inform and protect the public adequately.

According to Leufer, this expectation was clearly defined by Access Now during the public consultation phase because "people typically do not have a choice about whether to interact with an AI system in the public sector".

Unfortunately, unless these requirements are met, the EU could end up passing regulations that are too weak to prevent the use of AI for facial recognition and other over-reaching practices. 

This could greatly threaten the public's right to privacy and could leave people's lives shaped by automated systems known to cause severe errors, prejudice, and discrimination.

Written by: Ray Walsh

Digital privacy expert with five years' experience testing and reviewing VPNs. He's been quoted in The Express, The Times, The Washington Post, The Register, CNET and many more.

