
The deepfake technology behind a $35 million bank heist in Hong Kong

UAE police are investigating a heist in which criminals allegedly used deepfake artificial intelligence (AI) to clone a company director's voice and convince a bank manager to transfer $35 million.


As deepfake technology evolves to bring new opportunities, so do the threats that come with it. While some of us thought automation-driven job losses would be the biggest concern in an AI-supported society, a new wave of challenges has emerged. Facial recognition and audio and video deepfakes (created by manipulating voices and appearances) can, when abused, pose a serious threat to privacy and safety, as the latest fraud cases show.

An elaborate deep voice scheme

In early 2020 in Hong Kong, a bank manager received what he believed to be a call from the director of a company, a man whose voice he knew from several previous conversations. The director called to share good news and ask for a favor: that the manager authorize transfers worth $35 million to enable a company acquisition.

The director claimed he had hired a lawyer named Martin Zelner to coordinate the acquisition. The bank manager could see emails from both the director and Zelner in his inbox, confirming the exact amounts of the transfers. Unaware of deep voice technology, and with written confirmation in front of him, he acted accordingly. He transferred the full amount to several accounts across the US, and in the blink of an eye, $35 million vanished. The UAE investigators leading the probe believe at least 17 individuals were involved in this elaborate scheme.

We are currently on the cusp of malicious actors shifting expertise and resources into using the latest technology to manipulate people who are innocently unaware of the realms of deepfake technology and even their existence.

Jake Moore, a cybersecurity expert at security company ESET

The first reported attempt at this type of fraud took place in the UK in 2019, when fraudsters tried to steal $240,000 from an energy firm by mimicking the CEO's voice with the help of AI. Unlike the Hong Kong case, this attempt failed because it was identified as fraud in time.

AI has crossed the uncanny valley

If you find the Hong Kong scheme unconvincing, and believe in your ability to distinguish a human voice or face from one created by AI, a new study published in the Proceedings of the National Academy of Sciences is here to shatter your beliefs. The study, conducted by Hany Farid, a professor at the University of California, Berkeley, and Sophie J. Nightingale, a lecturer at Lancaster University, England, suggests that we have reached the stage where humans can no longer spot the difference between real and AI-generated faces.

Fraudulent online profiles are a good example. Fraudulent passport photos. Still photos have some nefarious usage... But where things are going to get really gnarly is with videos and audio.

Hany Farid, a professor at the University of California, Berkeley

According to the research, we'd have a slightly better chance of telling a real image from a synthetic one by flipping a coin: participants in the study recognized fake images less than half of the time, with an average score of 48.2%. We can confidently say that contemporary AI creations have crossed the uncanny valley.
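To put that figure in perspective, here is a minimal, purely illustrative Python sketch (the simulation is ours, not part of the study) comparing guessers who score the reported 48.2% against plain coin flips:

import random

# Illustrative numbers only: 48.2% is the average human accuracy reported
# in the PNAS study; a coin flip scores about 50% on the same binary task.
TRIALS = 100_000
HUMAN_ACCURACY = 0.482

human_correct = sum(random.random() < HUMAN_ACCURACY for _ in range(TRIALS))
coin_correct = sum(random.random() < 0.5 for _ in range(TRIALS))

print(f"Simulated human guessers: {human_correct / TRIALS:.1%} correct")
print(f"Simulated coin flips:     {coin_correct / TRIALS:.1%} correct")

Run over enough trials, the coin edges out the human guessers, which is precisely the researchers' point.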

The creepiest part, however, is the section of the research showing that people are actually more inclined to trust AI-generated faces than real ones. When asked to rate a set of real and synthetic faces for trustworthiness, participants rated the AI-generated faces 7.7% more trustworthy on average. This finding sheds new light on the "it could never happen to me" misconception, which can be very dangerous in today's ever-evolving world.

We were really surprised by this result because our motivation was to find an indirect route to improve performance, and we thought trust would be that, with real faces eliciting that more trustworthy feeling.

Sophie J. Nightingale, a lecturer at Lancaster University, England

Protect yourself

As disturbing as these findings are, there are still habits we can practice to minimize our chances of becoming victims of deepfake technology and other scams: verify any unusual request, especially one involving money transfers, through a second, independent channel; treat urgency and secrecy in financial instructions as red flags; and limit the amount of voice and video material you share publicly.

Written by: Danka Delić

With her BA in English Language and Literature, Private Pilot Licence, and passion for researching and writing, Danka brings further diversity to the team. As a former world traveler, she learned to appreciate cyber security and the necessity for digital privacy. Danka is a nature, animal, and written-word lover. She enjoys staying on the go, both mentally and physically, and spends most of her free time either reading or hiking with her dog.

