UAE police are investigating a bank fraud case in which criminals allegedly used deepfake artificial intelligence (AI) to clone a company director's voice and convince a bank manager to transfer $35 million.
As deepfake technology evolves and brings new opportunities, the threats that arise from it evolve too. While some of us thought that automation-spurred job losses would be the biggest concern in an AI-supported society, a new wave of challenges has emerged. When abused, facial recognition and audio and video deepfakes (created by manipulating a person's voice and appearance) can pose a serious threat to privacy and safety, as the latest fraud cases show.
An elaborate deep voice scheme
In early 2020 in Hong Kong, a bank manager received what he believed to be a call from the director of a company, a man whose voice he knew and with whom he had spoken on several occasions. The director called to share good news and ask for a favor: that the manager authorize transfers worth $35 million to enable a company acquisition.
The director claimed he had hired a lawyer named Martin Zelner to coordinate the acquisition. The bank manager could see emails from both the director and Zelner in his inbox confirming the exact amounts to be transferred. Unaware that deep voice technology existed, and with written confirmation in front of him, he acted accordingly: he transferred the full amount to several accounts across the US, and in the blink of an eye, $35 million vanished. UAE investigators leading the probe believe at least 17 individuals were involved in this elaborate scheme.
We are on the cusp of malicious actors shifting their expertise and resources into using the latest technology to manipulate people who are innocently unaware that deepfake technology even exists.
The first reported attempt at this type of fraud took place in the UK in 2019, when fraudsters tried to steal $240,000 from an energy firm by mimicking the CEO's voice with the help of AI. Unlike the Hong Kong case, this attempt failed because it was identified as fraud in time.
AI has crossed the uncanny valley
If you find the Hong Kong scheme unconvincing and believe in your ability to distinguish a human voice or face from one created by AI, a new study published in the Proceedings of the National Academy of Sciences is here to shatter your beliefs. The study, conducted by Hany Farid, a professor at the University of California, Berkeley, and Sophie J. Nightingale, a lecturer at Lancaster University in England, suggests that we have reached the stage where humans can no longer spot the difference between real and AI-generated faces.
Fraudulent online profiles are one good example; fraudulent passport photos are another. Still photos have some nefarious uses... But where things are going to get really gnarly is with videos and audio.
According to the research, we'd have a slightly better chance of telling a real image from a synthetic one if we simply flipped a coin. Participants in the study recognized fake images less than half of the time: the average accuracy was 48.2%, just below chance. We can confidently say that contemporary AI creations have crossed the uncanny valley.
The creepiest part, however, is the part of the research showing that people tend to trust AI-generated faces more than real ones. When asked to rate a set of real and synthetic faces for trustworthiness, participants found the AI-generated faces 7.7% more trustworthy than the real ones. This finding sheds new light on the "it could never happen to me" misconception, which can be very dangerous in today's ever-evolving world.
As the researchers put it: "We were really surprised by this result because our motivation was to find an indirect route to improve performance, and we thought trust would be that, with real faces eliciting that more trustworthy feeling."
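To put those two numbers in perspective, here is a minimal back-of-the-envelope sketch in Python. Only the 48.2% and 7.7% figures come from the article; the sample sizes and the mean trust ratings below are hypothetical placeholders chosen for illustration.

```python
import random

random.seed(42)

observed_acc = 0.482   # average human accuracy reported above
n_judgments = 1_000    # hypothetical: real-vs-fake judgments per guesser
runs = 2_000           # hypothetical: number of simulated coin-flippers

# How often does pure 50/50 guessing score no better than the 48.2%
# average accuracy the study's participants achieved?
at_or_below = 0
for _ in range(runs):
    correct = sum(random.random() < 0.5 for _ in range(n_judgments))
    if correct / n_judgments <= observed_acc:
        at_or_below += 1
print(f"Guessers scoring <= {observed_acc:.1%}: {at_or_below / runs:.1%} of runs")

# The "7.7% more trustworthy" figure is a relative difference in mean
# trust ratings. With purely illustrative means on a 1-to-7 scale:
real_mean, synthetic_mean = 4.40, 4.74
relative = (synthetic_mean - real_mean) / real_mean
print(f"Synthetic faces rated {relative:.1%} more trustworthy than real ones")
```

With these placeholder numbers, a noticeable share of pure guessers score at or below the participants' average, which is what "indistinguishable from chance" looks like in practice; the trust gap, meanwhile, is simply a 7.7% relative difference in mean ratings, whatever the underlying scale.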
Protect yourself
As disturbing as these findings are, there are still habits we can practice to minimize our chances of becoming victims of deepfake technology and other scams:
- Keep reminding yourself that avatars are among us – not everything you see online is real. Scams and phishing attacks are growing in number day by day, and it's becoming almost impossible to spot them with the naked eye.
- Get all the help you need – there are excellent advanced security solutions, like Lookout, that provide malware and Safe Browsing protection. They scan the links you click in social media, text messages, and elsewhere online, and block threats before they can harm you.
- Always go the extra mile to identify a company or individual that urges you to act or asks for sensitive data. A sense of urgency is a common trick used to get people to give away information quickly. Go directly to the source (make a few phone calls, or visit in person if necessary) to validate whether the request you've received is authentic. Never share information digitally if you can't validate the requester's identity with 100% confidence.