The AI Fraud Onslaught: Synthetic Voice Attacks Cost Insurers and Customers Billions


Automobile insurance fraud, which increases premiums for all drivers, was once almost entirely limited to staged accidents, forged documents, and identity theft. Many schemes involved large rings, even organized crime families, but the techniques were usually unsophisticated.

Technology and easy access to inexpensive yet powerful software have altered the landscape and fueled explosive growth in insurance fraud. Ironically, the industry’s increasing reliance on automation over people may be partly to blame for the onslaught’s success.

Excluding AI fraud data, the Coalition Against Insurance Fraud estimates total insurance fraud at $308.6 billion annually. The FBI estimates insurance fraud increases premiums by roughly $400–$700 per family per year.

Quest for convenience opens door wide for fraudsters

The biggest surge in computer crime has come from the emergence of synthetic voice (SV) attacks. Fraudsters use AI-generated or cloned speech to impersonate a customer, agent, or employee over the phone and steal money, data, or access to policy information.

Synthetic voice attacks on insurers increased by 475% in 2024, according to Pindrop’s Voice Intelligence & Security Report 2025. Many call centers reported about seven potential deepfake fraud cases a day, an increase of more than 1,300% from previous years.

“Fraud investigators are observing that these types of schemes have now become commonplace,” Christopher Migliaccio, a lawyer and founder of Warren and Migliaccio LLP, told Insurify. “As a result of the use of cloned voice technology and synthetic identity creation, fraudsters can operate at scale, increasing not only the potential exposure for erroneous payouts, but also significantly increasing the compliance burden upon the claims departments of insurers.”

A typical scheme involves someone, or a group, acquiring 10 to 20 seconds of a policyholder’s natural spoken conversation from social media or other sources. They then use AI tools to realistically recreate, or synthesize, a voice that can pass simple biometric voice-print identification tests.

The fraudster can then file a claim for a collision involving a vehicle valued at $35,000, assert $12,000 in damages, and direct the settlement to a repair shop or rental agency, often one in collusion with the scheme.

The real policyholder remains unaware of what occurred until either a subsequent premium increase or a policy review, which may occur six to 12 months later.
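
To illustrate why a 10-to-20-second clip can be enough, here is a minimal, hypothetical sketch of the kind of simple voice-print check such schemes exploit. The embedding size, threshold, and stand-in vectors are assumptions for illustration, not any vendor’s actual system.

# Hypothetical sketch of a naive voice-print check. Real systems use a
# trained speaker-encoder model; random vectors stand in for its
# embeddings here.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

ACCEPT_THRESHOLD = 0.80  # illustrative threshold, not from any vendor

def verify_caller(enrolled: np.ndarray, live: np.ndarray) -> bool:
    """Accept the caller if the live audio sounds 'close enough' to the
    enrolled voice print. The check measures how similar the voice is,
    not whether a live human is producing it."""
    return cosine_similarity(enrolled, live) >= ACCEPT_THRESHOLD

# Demo: a close imitation of the enrolled voice passes the check.
rng = np.random.default_rng(0)
enrolled_print = rng.normal(size=256)
cloned_voice = enrolled_print + rng.normal(scale=0.1, size=256)
print(verify_caller(enrolled_print, cloned_voice))  # True

The flaw the sketch makes visible is that the system scores how similar a voice sounds, not whether a live human is speaking, which is exactly the gap a cloned voice walks through.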

Some industry groups, such as the National Insurance Crime Bureau, say the SV schemes are so new they don’t yet have concrete numbers on the attacks. But they know the threat is out there, and they know it’s big.

Insurers must shore up their defenses, experts warn

“The impact of these breaches is staggering, encompassing both immediate financial losses as well as long-term reputational damage,” said Rahul Powar, CEO at Red Sift, a global cybersecurity company.

AI, Powar said, is a “force multiplier” for fraudsters.

“They no longer need large, sophisticated teams or design skills,” he said. “A lone individual with free tools can now impersonate a CEO, generate realistic deepfake audio or video, and send convincing messages at scale.”

Some industry executives have argued that the insurance industry’s substantial investments in technology and automated systems — which accept policyholders’ voice requests for convenience — may exacerbate the problem of synthetic voice attacks.

“There’s a common misconception that the biometric voice authentication systems they use represent a hardened layer of security against synthetic speech,” Marcus Denning, a criminal attorney and senior lawyer at MK Law, told Insurify. “Most automobile insurers prioritize speed and customer satisfaction over the assurance that the individual making a claim is actually the policyholder.”

For example, he said, a call center that previously employed 50 experienced adjusters to detect fraud may now use automated, scripted systems to handle 80% of all incoming claims. And yet, Denning said, an internal audit of one of his clients revealed that the number of suspected synthetic interactions rose from three during the baseline period to 25 in the 18 months following the introduction of the automated systems.

“In order to protect themselves, insurers must adapt their accreditation protocols and implement tools that allow real-time fraud detection and the ability to confirm an individual’s actual identity beyond just the voice recognition process used today,” he said.
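
One way to read that recommendation is as a layered decision, in which a voice match becomes one signal among several rather than the gatekeeper. The sketch below is illustrative only; the fields and threshold are assumptions, not any insurer’s actual protocol.

# Illustrative sketch of layered caller verification, not any insurer's
# actual system. Every field is a hypothetical stand-in for a real check.
from dataclasses import dataclass

@dataclass
class CallContext:
    voice_match_score: float      # 0.0-1.0 from a speaker-verification model
    liveness_passed: bool         # e.g., caller repeats a random phrase live
    out_of_band_confirmed: bool   # e.g., one-time code sent to phone on file

def allow_sensitive_action(ctx: CallContext) -> bool:
    """A voice match alone never authorizes a payout or policy change.
    Liveness challenges defeat replayed or pre-generated audio;
    out-of-band confirmation defeats real-time voice clones."""
    if ctx.voice_match_score < 0.80:  # illustrative threshold
        return False
    return ctx.liveness_passed and ctx.out_of_band_confirmed

Requiring all three signals means a real-time clone that fools the voice model still fails the out-of-band step, and pre-generated audio fails the liveness challenge.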

What’s next? AI use in fraud will grow

Based on data from January and February 2025, the trend in synthetic voice attacks shows no signs of slowing. Pindrop predicts that the fraud rate will continue to increase by 8% year over year, with 1 in 584 calls being fraudulent.

“At these levels, contact centers in the U.S. could potentially be exposed to a fraud risk of $44.5 billion in 2025,” the report said. “Organizations need to adopt the technologies, processes, and structural changes that could help defend against this new wave of AI-driven fraud.”

© FOX28 Spokane