It was revealed this week that US banks have been running 'unreported trials' of specific camera software, including facial recognition technology.
The news comes amidst the release of new FTC guidelines regarding artificial intelligence and EU regulators proposing tighter restrictions on the widely scrutinized software.
Banking on it
Reuters reported on April 19 that major banks, including JP Morgan Chase & Co., City National Bank of Florida, and Wells Fargo & Co., have either trialed or are planning to trial 'facial recognition and related artificial intelligence systems'.
City National, the news agency says, will commence trials of facial recognition technology to identify both customers at cash machines and employees at its branches. JP Morgan, meanwhile, is pressing ahead with a small series of trials involving "video analytics technology" in the state of Ohio.
Wells Fargo was matching footage of crimes to known offenders over a decade ago, according to a former employee of software company 3VR, but the bank declined to comment on its current fraud prevention tactics.
According to Reuters, Bank of America met with AI company AnyVision – whose technology is currently being used by oil company BP and a hospital in LA – several times in 2019. The bank said it had ditched other related investments, made around ten years ago, that were aimed at reducing loitering at ATMs.
The FTC lays down its guidelines
The news comes as governmental bodies on both sides of the Atlantic are proposing guidelines and regulations for the ethical use of artificial intelligence technologies.
The FTC recently published guidelines on its website encouraging companies to aim for 'truth, fairness and equity' in their use of artificial intelligence technologies. Its tips include compiling diverse data sets, ensuring your technology is as transparent as possible so it can be scrutinized, and avoiding sweeping claims about how unbiased your algorithms are.
Somewhat worryingly, one of its key instructions – to which it dedicates an entire paragraph – is the advice to 'do more good than harm'. Many consumers expect this as a basic standard (or, even better, 'do no harm at all') when it comes to all technologies, but particularly new and potentially dangerous ones.
EU develops tough stance on facial recognition
Across the pond, The Financial Times reports that EU regulators have proposed strict new regulations on facial recognition, restricting its usage to 'a small number of public-interest scenarios' and suggesting that practices like real-time tracking will need judicial approval at the state level.
The proposals also include large fines – up to 6 percent of a company's global turnover – for firms found using facial recognition technology in processes, such as recruitment, that perpetuate existing biases against minority groups.
The regulators also discussed a pre-emptive ban on 'social scoring' practices, iterations of which are still in their infancy in China but have yet to be used anywhere in the European Union or the wider continent.
In the UK, in August 2020, judges ruled that the use of automatic facial recognition software by South Wales Police was, in fact, unlawful. This landmark ruling came after a man in Cardiff was identified without his consent in 2017, and again at a peaceful protest in 2018. Ultimately, the view that these instances breached his human rights prevailed.
A growing market
Despite institutional pushback, most reports suggest that the worldwide facial recognition market is set to at least double in size by 2025, with some estimates suggesting it will be worth a total of $8.5 billion by that point.
By then, it's anyone's guess where artificial intelligence, biometrics, and facial recognition will be in terms of development and complexity. AI theorists have consistently warned that these technologies are moving at a much quicker pace than many of us realize, making it even more important to institute tough regulation.