When Congress steps in: what the AI voice fraud crackdown means for your business


In April 2026, US Senator Maggie Hassan sent letters to four major AI voice cloning companies—ElevenLabs, LOVO, Speechify, and VEED—demanding answers about how they prevent their tools from being weaponized for fraud. It was a clear signal: after years of watching synthetic voice technology enable billions of dollars in financial crime, Washington has had enough. For security and fraud prevention leaders, the moment is clarifying. The legislative machinery is moving. But legislation moves slowly. Fraudsters don't.

The scale of the problem has become impossible to ignore

The numbers driving congressional action are staggering. AI-enabled fraud losses in the US are projected to climb from $12.3 billion in 2023 to $40 billion by 2027, a compound annual growth rate of 32%. More than 10% of banks have already lost over $1 million each to deepfake voice fraud, and the average loss per incident now exceeds $500,000. Enterprises report average losses of $680,000 per voice fraud attack.

The technology driving this wave has become frighteningly accessible. Modern voice cloning tools can replicate a person's voice from as little as three seconds of audio: a snippet from a voicemail, a video clip, or a public earnings call. Human listeners now identify AI-generated speech with below 30% accuracy, falling to 24.5% for high-quality fakes. Meanwhile, CEO fraud targets more than 400 companies per day using deepfakes, according to Keepnet Labs.

What Congress is actually doing about it

Two parallel tracks of legislative action are now underway in Washington:

  • The AI Fraud Accountability Act (S.3982), introduced by Senators Lisa Blunt Rochester and Tim Sheehy with bipartisan House support, would make it a criminal offence to use a digital impersonation to defraud someone, carrying penalties of up to three years' imprisonment. It also directs NIST to convene a working group to develop best practices for detecting and tracing digital impersonations.
  • Senator Hassan's enforcement push targets the supply side directly, pressing voice cloning platforms to explain what consent verification, watermarking, and law enforcement reporting mechanisms they have in place. Her April 2026 letters signal that platform accountability—not just criminal prosecution—is now on the table.

Together, these measures represent the most serious federal attention to AI voice fraud in history. The FTC would gain civil enforcement authority under the Accountability Act, treating digital impersonation fraud as an unfair or deceptive trade practice.

Why legislation alone will not protect your organisation

Regulation matters, but it has a fundamental limitation: it is reactive by design. A law criminalising voice fraud cannot prevent a call centre agent from being deceived by a synthetic voice in real time. It cannot stop a fraudster operating from outside US jurisdiction. It cannot detect whether the voice on a call today is real.

There is also an enforcement gap problem. Even with criminal penalties in place, attribution of AI-generated voice fraud is technically challenging. Watermarking—one of the measures Congress is pressing platforms to adopt—can be stripped or circumvented. And by the time a prosecution is brought, the damage is done: the wire transfer has cleared, the account has been drained, the synthetic identity has served its purpose.

The Better Identity Coalition's 2026 guidance and a PwC analysis on synthetic identity fraud both reach the same conclusion: organisations must pivot to technical controls that work at the point of interaction—before a transaction is authorised, before an identity is verified, before access is granted.

The technical answer: detection that works in real time

The most effective defence against voice fraud operates at the audio and biometric layer, independent of whether a regulatory framework has caught up. Real-time deepfake detection analyses the acoustic and biological signatures that distinguish a genuine human voice from a synthesised one: artefacts that persist even in high-quality clones, yet remain inaudible to the human ear.

Organisations deploying this kind of capability benefit from several layers of protection:

  • Liveness detection that identifies whether a voice signal originates from a live person or a replay/synthesis
  • Deepfake scoring applied to every call or voice interaction, flagging anomalies for review before authentication completes
  • Multimodal verification that cross-references voice with other identity signals, making synthetic impersonation dramatically harder
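To make the layering concrete, here is a minimal decision sketch in Python. All names and thresholds are illustrative assumptions, not any vendor's actual API: a real deployment would tune thresholds against labelled call data and draw the scores from production detection models.

```python
from dataclasses import dataclass

# Hypothetical thresholds; real systems calibrate these against labelled audio.
LIVENESS_THRESHOLD = 0.8   # minimum confidence the audio comes from a live speaker
DEEPFAKE_THRESHOLD = 0.5   # maximum tolerated synthetic-voice score

@dataclass
class VoiceSignals:
    liveness_score: float   # 0..1, from a liveness detector
    deepfake_score: float   # 0..1, from a per-call deepfake scorer
    caller_id_match: bool   # secondary identity signal (e.g. registered device)

def authentication_decision(signals: VoiceSignals) -> str:
    """Combine the three layers into one decision: allow, review, or block.

    Failed liveness or a high deepfake score blocks outright; a mismatched
    secondary signal alone only escalates to manual review.
    """
    if signals.liveness_score < LIVENESS_THRESHOLD:
        return "block"   # replay or synthesis suspected
    if signals.deepfake_score > DEEPFAKE_THRESHOLD:
        return "block"   # acoustic artefacts of a cloned voice detected
    if not signals.caller_id_match:
        return "review"  # voice sounds genuine, but the cross-check failed
    return "allow"

# A genuine-sounding call whose secondary identity signal does not match
print(authentication_decision(VoiceSignals(0.95, 0.10, False)))  # → review
```

The key design choice is that no single layer grants access on its own: the multimodal cross-check runs even when the voice passes both acoustic tests, which is what makes synthetic impersonation dramatically harder.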

Legislation and technical controls are not either/or choices; a robust security posture needs both. But fraudsters will not wait for Washington to act. The organisations best positioned to weather the current wave of AI voice fraud are those that have already deployed real-time detection, and whose systems are continuously updated as synthesis technology evolves.

Corsound AI's Deepfake Detect platform provides real-time audio and video deepfake detection built for the threat environment of 2026 and beyond. If your organisation is evaluating its exposure to AI voice fraud, speak to our team to understand what genuine real-time protection looks like in practice.
