Banks Face Real Losses From Fake Voices

Last Updated: March 15, 2022

[Image: three adjacent faces progressing from a wireframe to a fully shaded, realistic female head]

In most people’s imaginations, the words “bank robbery” conjure up visions of a high-risk, low-tech crime: an armed and masked criminal threatening bank tellers in full view of security cameras. However, a sophisticated new breed of cybercriminal has begun to target banks, says Mark Horne, Chief Marketing Officer of Pindrop, who shares why banks need to protect themselves against deepfakes.

In a 2020 heist of a United Arab Emirates bank, no one entered a branch, cracked a safe, or drove a getaway car. Instead, a group of as many as seventeen criminals siphoned off $35 million over the phone using audio deepfakes.

What Is a Deepfake?

For those unfamiliar with the technology, a deepfake is a computer-generated impersonation of a real person’s voice and/or appearance. Audio deepfakes use machine learning algorithms to process genuine recordings of the “target”: the more audio available to learn from, the more likely the algorithm is to produce a convincing facsimile of the original speaker’s unique voice.
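To make that mechanism concrete, here is a toy sketch in PyTorch. The tiny next-frame model, the synthetic “recordings,” and all sizes are illustrative stand-ins invented for this example; real voice-cloning systems pair a large text-to-speech model with a neural vocoder, but they are fit to the target’s audio on the same principle.

```python
# Toy sketch: fit a generative model to a "target's" audio, then use it
# to synthesize new frames in that voice. Nothing here is a real
# voice-cloning pipeline; the data and model are illustrative stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_MEL = 80  # mel-spectrogram bins, a common intermediate audio representation

def target_frames(n: int) -> torch.Tensor:
    """Stand-in for mel frames extracted from genuine recordings of the
    target: an AR(1) sequence, so there is speaker-like structure to learn."""
    frames = [torch.randn(N_MEL)]
    for _ in range(n - 1):
        frames.append(0.95 * frames[-1] + 0.1 * torch.randn(N_MEL))
    return torch.stack(frames)

# A deliberately tiny next-frame predictor standing in for a full TTS stack.
model = nn.Sequential(nn.Linear(N_MEL, 256), nn.ReLU(), nn.Linear(256, N_MEL))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

frames = target_frames(5000)  # more genuine audio means more frames to fit
for step in range(300):
    idx = torch.randint(0, len(frames) - 1, (64,))
    loss = nn.functional.mse_loss(model(frames[idx]), frames[idx + 1])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Generation: roll the model forward from a seed frame to synthesize new
# frames that mimic the statistics of the target's audio.
frame = frames[0]
synthetic = [frame]
for _ in range(100):
    frame = model(frame.unsqueeze(0)).squeeze(0).detach()
    synthetic.append(frame)
print(f"final training loss {loss.item():.4f}; generated {len(synthetic)} frames")
```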

Keep in mind that deepfakes can be used for good as well. Actor Val Kilmer, who was left unable to speak after surviving throat cancer, was able to reclaim his voice by commissioning a deepfake model. Other uses are well-intentioned but controversial and ethically fraught, like the deepfaked narration used in fifty seconds of Roadrunner, Morgan Neville’s documentary about Anthony Bourdain.

See More: Deepfakes at Work: Safeguarding Your Workplace & Battling the Threat

What Gives It Away?

Not every deepfake will leverage the most sophisticated technology. Bad actors employing shoddy deepfake imitations with an unnatural speaking pace may try to mask the poor quality with loud background noise or static. If a person can hardly hear the caller to begin with, it’s more challenging to detect if something is amiss. As the listener strains to hear the “caller,” they may overlook the cues indicating that the caller is a fraud. More convincing deepfakes pose more significant dangers, and just a few minutes of genuine speech may suffice to create a plausible imitation. 

When faced with a well-made deepfake, financial institutions and their employees need to use common sense security measures to prevent malicious activity. If, for example, an employee with financial access takes a call and the person on the other end asks them to wire $35 million, there should be safeguards in place. A fake caller — especially a fake recorded caller — cannot answer simple security questions. 

Similarly, just as modern computer security mandates two-factor authentication, it makes sense that such significant financial transfers should require several layers of official signoff. Then, of course, there’s the matter of educating employees and consumers about the warning signs of a deepfake, like stilted rhythm or excessive background noise.
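As a rough illustration of that layered-signoff idea, the sketch below gates large transfers behind several distinct approvals. The dollar threshold, the required number of signoffs, and all class and field names are hypothetical choices for this example, not any bank’s actual policy.

```python
# Minimal sketch of multi-party signoff for large transfers. All values
# and names are hypothetical, invented for this illustration.
from dataclasses import dataclass, field

LARGE_TRANSFER_USD = 100_000   # hypothetical policy threshold
REQUIRED_APPROVALS = 3         # hypothetical number of distinct signoffs

@dataclass
class TransferRequest:
    amount_usd: float
    destination: str
    approvals: set[str] = field(default_factory=set)

    def approve(self, officer_id: str) -> None:
        self.approvals.add(officer_id)   # a set ignores duplicate signoffs

    def may_execute(self) -> bool:
        if self.amount_usd < LARGE_TRANSFER_USD:
            return True                  # small transfers follow normal rules
        return len(self.approvals) >= REQUIRED_APPROVALS

req = TransferRequest(amount_usd=35_000_000, destination="ACME Corp")
req.approve("officer_a")
req.approve("officer_a")                 # repeated signoff does not count twice
req.approve("officer_b")
print(req.may_execute())                 # False: a third distinct approval is needed
```

The point of the design is that no single person, and no single phone call, however convincing, can move a large sum on their own.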

Identifying Deepfake Threats More Effectively

But identifying deepfake crime isn’t just a matter of good personal decision-making, robust sign-off protocols, or anti-fraud education. Technology has a significant role to play, especially in detecting and combating the most accomplished, and therefore most dangerous, fakes. The companies and governments that participate in working groups and competitions like the ASVspoof challenge, the National Defense Authorization Act’s deepfake working group, and the Deepfake Detection Challenge have pioneered tools for quick and reliable identification of tampered audio.

Sophisticated technologies analyze a voice’s harmonics, rhythm, frequencies, and tone to determine whether it is authentic or just a convincing imitation. Some of the evidence lives at frequencies beyond human hearing, so automated analysis can catch artifacts no listener ever could. And the information considered isn’t just audio: these systems can use call metadata to identify calls from “spoofed” numbers. As deepfake technology evolves, businesses need to employ tools that protect against these threats.
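As one deliberately simplified illustration of frequency-based screening, the sketch below flags audio whose upper spectrum is suspiciously empty, a common artifact of synthesis pipelines that generate at low sample rates. The 8 kHz band edge and the flagging threshold are invented for this toy example and bear no relation to any commercial detector.

```python
# Toy spectral heuristic: genuine wideband audio carries energy well above
# 8 kHz, while band-limited (often synthesized) audio does not. Thresholds
# here are invented for illustration.
import numpy as np
from scipy import signal

def high_band_energy_ratio(audio: np.ndarray, sample_rate: int) -> float:
    """Fraction of spectral energy above 8 kHz."""
    freqs, _, power = signal.spectrogram(audio, fs=sample_rate)  # PSD by default
    return power[freqs > 8000].sum() / power.sum()

rng = np.random.default_rng(0)
sr = 44_100
genuine = rng.standard_normal(sr * 2)          # stand-in for full-band speech
b, a = signal.butter(8, 8000, btype="low", fs=sr)
bandlimited = signal.lfilter(b, a, genuine)    # stand-in for synthesized audio

for name, audio in (("genuine", genuine), ("band-limited", bandlimited)):
    ratio = high_band_energy_ratio(audio, sr)
    verdict = "suspicious" if ratio < 0.01 else "ok"  # invented threshold
    print(f"{name}: high-band energy ratio {ratio:.4f} -> {verdict}")
```

A real detector combines many such signals with trained models and metadata; a single spectral ratio like this would be trivial to evade on its own.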

While deepfake frauds are still rare, events like the bank robbery in the UAE demonstrate the danger malicious deepfakes pose. Given the ubiquity of the technology online, deepfakes are here to stay, and banks and other financial institutions that handle sensitive data should be prepared to combat them. 

Have you ever had an experience with a deepfake audio threat? Tell us about it on LinkedIn, Twitter, or Facebook. We always learn so much from you!


Mark Horne

Chief Marketing Officer, Pindrop

Mark Horne is the Chief Marketing Officer at Pindrop. He is a holistic marketing executive with a proven record of driving the strategic development and operational execution of transformational, customer-centric initiatives that support organizations’ mission and growth objectives. He has led high-performing organizations across the B2B cloud, software, and technology landscape, and has a comprehensive background in creating and spearheading strategies and programs that drive marketing planning, brand awareness, customer demand, and revenue growth.