One method is to send a family member a voicemail message generated by artificial intelligence (AI), seemingly from a panicked child or grandchild, urging them to send money, often through a bogus bank link.
“The everyday vishing script is a high-pressure, ‘urgent problem’ phone call,” Nathan House, CEO of StationX, a United Kingdom-based cybersecurity training platform, told The Epoch Times.
“The caller spoofs your bank’s number, claims your account is compromised, and needs you to ‘verify’ a one-time passcode they just texted—actually your real two-factor code,” he said.
“Other variants impersonate police, utility companies, or a panicked relative demanding emergency funds.
“The hallmarks are a trusted name on caller ID, an emotional or financial threat, and a demand for immediate action—usually sharing credentials, reading back a code, or wiring money,” House said.
Jurgita Lapienyte, chief editor of Cyber News, a Lithuania-based publication focused on cybersecurity, highlighted the growing prevalence of vishing.
She warned that while current AI voice cloning technology is only able to stick to a script and can’t react spontaneously to questions or responses in real time, “it’s only a matter of time until it actually learns to be more like us and can be weaponized against us.”
“If I feel like I’m actually talking to a relative of mine, I will be more willing to lend them money, because I’m convinced that this is the actual person that I’m talking to, and this is really dangerous,” she said.

House said the FBI report shows how prevalent the phishing or spoofing problem has become.
“Phishing, spoofing and their offshoots—voice-phishing or vishing, smishing, QR-phishing—have been the workhorses of cybercrime for years because they’re cheap to run and scale to millions of targets with almost no technical skill,” he said.
Lapienyte said it is becoming cheaper to scam people using voice cloning. “In 2020, if you wanted to clone a voice, you would need probably around 20 minutes of recording,” she said.
“These days, with AI, automation, and other innovations, you just need a couple of seconds of someone’s voice, and you can fake someone’s voice, make a recording resemble the person that you are trying to impersonate.”
House said scammers only need a few seconds of audio to make a convincing replica using AI voice-cloning tools, “say, a TikTok clip or a brief wrong-number call.”

“That lowers the cost and skill barrier dramatically,” he said. “Criminals no longer need studio-quality samples or lengthy recordings. So they can scoop up snippets posted online, feed them into a free cloning engine and start dialing.”

According to the IC3, people over the age of 60 suffered losses of nearly $5 billion from cybercrime in 2024, and submitted the most complaints to the agency.
Scammers bought spoofing software, such as Interactive Voice Response (IVR), from the site, which generated 112.6 Bitcoin ($1.08 million) for Fletcher and his associates.
“Voice-deepfake heists remain rare headlines, but they’re almost certainly under-reported because companies dread the reputational hit,” House said.
“Only a handful have surfaced publicly, yet business-email-compromise losses—into which these CEO-voice scams fit—are already measured in billions each year.
“It’s plausible that deepfake-voice fraud has siphoned off many millions collectively,” he added.
House said most incidents were written off as wire-transfer fraud.
Lapienyte agreed there is under-reporting of vishing and other scams, especially among the elderly who are often ashamed to admit they have been scammed, “because they are feeling lonely and they don’t want to be ridiculed.”

“It’s imperative that the public immediately report suspected cyber-enabled criminal activity to the FBI,” he added.
“They’re that big; they’re that organized; they’re that well-funded,” she told The Epoch Times in a recent interview.
Lapienyte also said that the Southeast Asian cyber scamming industry is now more likely to conduct large-scale vishing attacks, rather than just go for the “big fish.”
She said that while the scammers may target the elderly and those who live alone, the targets are often selected at random.
“It could be anybody. Scammers also have their favorite times to scam, during holidays or in the mornings, when people are more relaxed or not thinking clearly,” Lapienyte said.
She added that elderly people are often targeted by romance fraudsters.
“In those romance scams, they are building a totally new persona. So they can definitely fake a voice, but I think it’s more dangerous when they manage to fake the voice of someone, like a relative, or maybe someone in the company, or someone you know ... it’s way more personal.”

Lapienyte said voice cloning also presented dangers for the media.
“Before AI arrived, you would verify that the person is a person. So you would pick up the phone and call them, just have a little chat; check this is a human being I’m talking to,” Lapienyte said.
She said AI is now able to fake videos as well as voices, making it harder to verify if an interviewee is a genuine person.
Lapienyte said that while AI is improving every day, at the moment, voice cloning software is unable to perfectly mimic humans, and cloned voices often sound robotic or lack a sense of humor.
She said it has none of the pauses, slips of the tongue, or unfinished sentences that a human makes when they speak.
“But they will get there, I think, sooner rather than later, and then it’s going to be really scary,” said Lapienyte.
In the video, Nesbitt says: “AI voice cloning fraud is on the rise, and anyone can become a victim.”

House said a code word between family members “is a simple, effective speed-bump.”
“If someone calls sounding like your son begging for bail money, ask for the agreed phrase—something no outsider could guess. It forces the impostor to break character, or hang up,” he said.
He said it’s not foolproof, but it’s a low-tech defense that could dramatically weaken voice-cloning scams.
Lapienyte pointed to the reality of most calls: “The problem is that when someone close to you is calling, you don’t try to verify their identity. You don’t put up protective shields and ask, ‘Is my mom really calling?’
“And, you know, I don’t really want to live in that world where we do that,” she added.
House said banks and other financial organizations could do much more to tackle vishing and spoofing.
“Banks should require out-of-band confirmation—call-backs to a known number—before any high-value transfer and never rely on voice alone,” he said.
He said the telecommunications industry should also “aggressively label or block spoofed numbers.”
“Cybersecurity teams can run staff through vishing drills, adopt AI detection that flags synthetic voices, and place explicit warnings like ‘We will never ask for a code over the phone’ into every customer touchpoint,” House said.
“Together, these measures raise the cost for scammers and add friction at the critical moment of decision.”