The Terrifying Way Scammers Clone Your Voice to Defraud Your Family

Artificial intelligence has added a new layer of subterfuge to spoofing, extending its reach and worsening its effect on victims.
Phishing attacks, which as far back as the 1990s used fake emails to scam unsuspecting victims, evolved into smishing attacks, which use fake text messages. Now that evolution has entered a new stage: voice phishing, or vishing, attacks that involve voice cloning.

One method is to send a voicemail message generated by artificial intelligence (AI), seemingly from a panicked child or grandchild, to a family member. The message urges the family member to send money, often through a bogus bank link.

“The everyday vishing script is a high-pressure, ‘urgent problem’ phone call,” Nathan House, CEO of StationX, a UK-based cybersecurity training platform, told The Epoch Times.

“The caller spoofs your bank’s number, claims your account is compromised, and needs you to ‘verify’ a one-time passcode they just texted—actually your real two-factor code.

“Other variants impersonate police, utility companies, or a panicked relative demanding emergency funds.

“The hallmarks are a trusted name on caller ID, an emotional or financial threat, and a demand for immediate action—usually sharing credentials, reading back a code, or wiring money.”

Jurgita Lapienyte, chief editor of Cybernews, a Lithuania-based publication focused on cybersecurity, highlighted the growing prevalence of vishing.

She warned that while current AI voice cloning technology can only stick to a script and cannot react spontaneously to questions or responses in real time, it is only a matter of time before it learns to behave more like a human and can be weaponized against people.

“If I feel like I’m actually talking to a relative of mine, I will be more willing to lend them money, because I’m convinced that this is the actual person that I’m talking to, and this is really dangerous,” Lapienyte said.

A smartphone displays an incoming call from an unknown number in this file photo. As cyberscams spread, scammers increasingly use spoofing—masking their numbers to make calls appear legitimate. Boris023/Shutterstock
The annual cybercrime report from the FBI’s Internet Crime Complaint Center (IC3), released on April 23, states that in 2024 it received 193,407 complaints of phishing or spoofing (a technique in which scammers mask their own phone numbers and trick victims into thinking that they are on a genuine call). That made it the most prevalent type of scam, compared with 86,415 complaints of extortion and 441 complaints of malware.

House said the FBI report shows how prevalent the phishing or spoofing problem has become.

“Phishing, spoofing, and their offshoots—voice phishing or vishing, smishing, QR phishing—have been the workhorses of cybercrime for years because they’re cheap to run and scale to millions of targets with almost no technical skill,” he said.

Lapienyte said it is becoming cheaper to scam people using voice cloning.

“In 2020, if you wanted to clone a voice, you would need probably around 20 minutes of recording,” she said.

“These days, with AI and automation, and other innovations, you just need a couple of seconds of someone’s voice, and you can ... make a recording resemble the person that you are trying to impersonate.”

House said scammers only need a few seconds of audio, such as “a TikTok clip or a brief wrong-number call,” to make a convincing replica using AI voice cloning tools.

An illustration shows a smartphone recording in front of a voice cloning screen in Los Angeles on June 9, 2023. A rising wave of scams involves fraudsters using AI voice cloning tools to impersonate victims' family members or friends. Chris Delmas/AFP via Getty Images

“That lowers the cost and skill barrier dramatically,” he said. “Criminals no longer need studio-quality samples or lengthy recordings. So they can scoop up snippets posted online, feed them into a free cloning engine, and start dialing.”

According to the IC3, people older than 60 suffered losses of nearly $5 billion from cybercrime in 2024 and submitted the most complaints to the agency.

And it could get a lot worse—the Deloitte Center for Financial Services said in a report published in May 2024 that AI scam losses could reach $40 billion in the United States by 2027.

In May 2023, Tejay Fletcher, 35, was jailed for 13 years and four months by a judge in London for his role in running iSpoof, a website that became a “fraud shop” for scammers who defrauded UK residents of about 43 million pounds ($57 million) and took a large, unspecified sum from victims in the United States.

Scammers bought spoofing tools from the site, including interactive voice response software; those sales generated 112.6 bitcoin ($1.08 million) for Fletcher and his associates.

In 2019, The Wall Street Journal reported that a vishing scam tricked a UK energy firm executive into transferring 220,000 euros ($243,000) by convincing him that he was speaking to his boss in Germany.

“Voice deepfake heists remain rare headlines, but they’re almost certainly underreported because companies dread the reputational hit,” House said.

“Only a handful have surfaced publicly, yet business-email-compromise losses—into which these CEO voice scams fit—are already measured in billions each year.

“It’s plausible that deepfake voice fraud has siphoned off many millions collectively.”

House said most incidents have been written off as wire transfer fraud.

Lapienyte agreed that there is underreporting of vishing and other scams—especially among the elderly, who are often ashamed to admit that they have been scammed “because they are feeling lonely and they don’t want to be ridiculed.”

FBI Director Kash Patel said in an April 23 statement: “Reporting is one of the first and most important steps in fighting crime so law enforcement can use this information to combat a variety of frauds and scams.
FBI Director Kash Patel attends an event at the White House on April 21, 2025. Patel urged the public to report cybercrime to help the agency combat growing threats. According to the FBI’s latest Internet Crime Report, 859,532 complaints were filed in 2024, with reported losses exceeding $16 billion—a 33 percent jump from 2023. Chip Somodevilla/Getty Images

“It’s imperative that the public immediately report suspected cyber-enabled criminal activity to the FBI.”

Chinese organized crime syndicates, such as the 14K triad, have in recent years built large-scale cyberscam hubs in Cambodia, Laos, and Burma (also known as Myanmar) that target Americans.

Erin West is a former prosecutor who now runs Operation Shamrock, which seeks to highlight the threat from the cyberscamming industry. She said the triads “should be feared at the level of any evil-doing nation state.”

“They’re that big. They’re that organized. They’re that well-funded,” she told The Epoch Times in a recent interview.

Lapienyte also said the Southeast Asian cyberscamming industry is now more likely to conduct large-scale vishing attacks than to just go for the “big fish.”

She said that while the scammers may target the elderly and those who live alone, the targets are often selected at random.

“It could be anybody,” Lapienyte said. “Scammers also have their favorite times to scam, during holidays or in the mornings, when people are more relaxed or not thinking clearly.”

She noted that elderly people are often targeted by romance fraudsters.

“In those romance scams, they are building a totally new persona,” Lapienyte said. “So they can definitely fake a voice, but I think it’s more dangerous when they manage to fake the voice of someone like a relative, or maybe someone in the company, or someone you know. ... It’s way more personal.”

Seniors shop during special hours for the elderly and disabled at Northgate Gonzalez Market in Los Angeles on March 19, 2020. Experts say older adults remain frequent targets of cyberscams, with underreporting especially common in this age group. Mario Tama/Getty Images

Lapienyte said voice cloning has also presented dangers for the media.

On April 11, the UK’s Press Gazette trade journal reported that several media outlets, including Yahoo News, had deleted articles over fears that the so-called experts they had interviewed had been AI-generated bots.

“Before AI arrived, you would verify that the person is a person,” Lapienyte said. “So you would pick up the phone and call them, just have a little chat, check this is a human being I’m talking to.”

She said AI is now able to fake videos as well as voices, making it harder to verify that an interviewee is a genuine person.

Lapienyte said that while AI is improving every day, voice cloning software cannot yet perfectly mimic humans; it often sounds robotic or lacks a sense of humor.

She said it has none of the pauses, slips of the tongue, or unfinished sentences that humans make when they speak.

“But they will get there, I think, sooner rather than later, and then it’s going to be really scary,” Lapienyte said.

Starling Bank, a UK-based digital challenger bank, produced a YouTube film in September 2024 in which actor James Nesbitt warned of the rise of voice cloning scams.

In the video, Nesbitt says, “AI voice cloning fraud is on the rise, and anyone can become a victim.”

Nesbitt, whose advice is endorsed by Starling, suggested that people choose a safe phrase, such as a “family in-joke,” and deploy it to check that they are actually speaking to the relative or friend they think is on the other end of the phone.
James Nesbitt speaks on stage during an event in London on Oct. 14, 2017. Nesbitt suggests that family members use a safe phrase—such as a “family in-joke”—to confirm that they are really speaking with the person they believe is on the other end of the phone. Tim P. Whitby/Getty Images for BFI

House said a code word between family members “is a simple, effective speed bump.”

“If someone calls sounding like your son begging for bail money, [ask] for the agreed phrase—something no outsider could guess,” he said. “It forces the impostor to break character or hang up.”

He said it is not foolproof but is a low-tech defense that could dramatically weaken voice cloning scams.

Lapienyte pointed to the reality of most calls: “The problem is that when someone close to you is calling, you don’t try to verify their identity. You don’t put up protective shields: ‘Is my mom really calling?’

“And you know, I don’t really want to live in that world where we do that.”

House said banks and other financial organizations could do much more to tackle vishing and spoofing.

“Banks should require out-of-band confirmation—callbacks to a known number—before any high-value transfer and never rely on voice alone,” he said.

He noted that phone companies in the United States need to finish rolling out caller ID authentication such as Secure Telephone Identity Revisited and Signature-based Handling of Asserted Information Using toKENs, collectively known as STIR/SHAKEN.

He said the telecommunications industry should also “aggressively label or block spoofed numbers.”
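
For readers curious how that caller ID authentication works under the hood, the sketch below is a simplified, hypothetical illustration of the signed “PASSporT” token at the heart of STIR/SHAKEN (RFC 8225/8588), written in Python using the PyJWT and cryptography libraries. The telephone numbers, certificate URL, and throwaway key are stand-ins for illustration, not a carrier implementation.

```python
# A simplified, hypothetical sketch of the signed PASSporT token used by
# STIR/SHAKEN. The originating carrier signs the calling and called
# numbers; the terminating carrier verifies the signature before
# trusting the caller ID it displays.
# Requires: pip install pyjwt cryptography
import time
import uuid

import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import ec

# In production the key pair belongs to the originating carrier, and its
# certificate is fetched from the URL in the "x5u" header field. Here we
# generate a throwaway key purely for illustration.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

headers = {
    "typ": "passport",
    "ppt": "shaken",  # the SHAKEN profile of the PASSporT token
    "x5u": "https://cert.example.org/carrier.pem",  # hypothetical cert URL
}
payload = {
    "attest": "A",  # "A" = full attestation: the carrier vouches for this caller
    "orig": {"tn": "12025550123"},    # asserted calling number (illustrative)
    "dest": {"tn": ["12025550199"]},  # called number (illustrative)
    "iat": int(time.time()),
    "origid": str(uuid.uuid4()),      # opaque ID used for traceback
}

# Sign with ES256, the algorithm the STIR/SHAKEN specifications mandate.
token = jwt.encode(payload, private_key, algorithm="ES256", headers=headers)

# The terminating carrier verifies the signature. A spoofer without the
# carrier's private key cannot mint a valid token for a number it does
# not control, so a forged caller ID fails verification.
verified = jwt.decode(token, public_key, algorithms=["ES256"])
print(verified["orig"]["tn"], verified["attest"])
```

The design point is that a scammer can forge what a caller ID display shows but cannot forge the carrier’s cryptographic signature over the originating number—which is why House and others want the rollout finished.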

“Cybersecurity teams can run staff through vishing drills, adopt AI detection that flags synthetic voices, and place explicit warnings like ‘We will never ask for a code over the phone’ into every customer touchpoint,” House said.

“Together, these measures raise the cost for scammers and add friction at the critical moment of decision.”
