Beware! Voice deepfake fraud is on the rise, and traditional defenses won’t help


Discover which types of organizations are most vulnerable and what kind of approach ensures resilience

 

In our last blog post, we talked about how generative AI-powered deepfake is making identity theft unbearably easy for fraudsters, and why voice deepfake fraud is of particular concern.

This time, we're going to go a little deeper into:

·      Which individuals are most often impersonated during a voice deepfake fraud attack

·      The types of organizations being targeted

·      The damages incurred from such attacks

·      Why typical defenses aren't enough

And we'll introduce what kind of approach to protection gets the job done.

 

The individuals most often impersonated

Criminals who commit voice deepfake fraud in the business sector most commonly leverage the technology to impersonate:

·      A real customer, to gain an employee's trust and trick them into transferring money to a fraudulent account.

·      A company executive, to gain an employee's trust and divert funds.

·      A customer-facing employee, to trick customers into authorizing fraudulent payments.

In the government sector, a cloned voice impersonating a citizen is used to mislead police officers into taking action that may put innocent people in danger.

Similarly, it can be used to impersonate a police officer to deceive citizens into following instructions that aid a crime.

 

Companies under attack and the damage

The organizations most vulnerable to voice deepfake attacks are those that handle sensitive data and personally identifiable information (PII). This includes operations that involve accessing accounts, credit lines, or financial assets, or engaging with large numbers of consumers or citizens, e.g.:

·      Financial institutions and banks

·      Healthcare providers

·      Media and telecoms

·      Ecommerce and online retailers

·      Government, police, and defense

In addition, any company that uses a video teleconferencing platform to communicate internally with employees, as well as externally with customers, partners, suppliers, investors, or others, is at risk.

In a recent case, for example, a finance worker at a global company was tricked into transferring $25 million to fraudsters who used deepfake technology to impersonate the company's CFO during a video conference call.

With video being a lot easier to fake than voice, it is critical to be able to detect the voice in a deepfake video in order to ensure robust protection.

If an organization fails to detect the voice deepfake, the damages can be severe:

·      $243K was lost by a German energy company that was targeted by a voice deepfake fraudster

·      $35 million was lost by a Hong Kong-based company to fraudsters who cloned a director's voice

 

Voice deepfake is the hardest to detect

Among the main reasons why voice deepfake is difficult to detect are:

·      Deepfake voice is particularly hard to detect in poor-quality situations, such as noisy environments.

·      Voice files contain less data than videos, so there are fewer suspicious signals, making synthetic voices more difficult to detect.

·      The technology for creating synthetic voices is more advanced than the detection technology currently available.

·      A cloned voice is much more convincing than a cloned video at lower resolutions, as it requires fewer details to be generated.

 

Why typical defenses aren’t enough

Organizations turn to several defenses tocombat voice deepfake fraud.

These include enhancing deepfake detection with AI, but this can be expensive.

They also design programs for increasing awareness among employees and customers to reduce risky behaviors. However, ensuring cooperation and enforcing awareness is very challenging.

And they even opt to avoid the use of voice biometrics altogether for identity theft prevention until the technology matures.

 

Conclusion

With voice deepfake fraud on the rise and current defenses failing to provide sufficient protection, organizations are left at risk.

What they need today is a new, multi-layered approach.

In our next blog post, we'll get into exactly what this means, what each of these layers is, and how their combination is the only way to maximize protection against the rising threat of voice deepfake fraud.

In the meantime, to learn more about why and how you can stop voice deepfake fraudsters in their tracks, we invite you to download our new whitepaper, How to prevent identity fraud with complete voice deepfake protection, by clicking here.

