From CEO Cloning to Fake Kidnapping Calls: Study Finds Awareness of AI’s Realism Cuts Scam Risk


Science & Technology (Commonwealth Union) – New research by Abertay University in Scotland suggests that the best defence against AI-powered voice scams is not standard warning alerts, but raising public awareness about how realistic synthetic voices have become.

The research, published in the Journal of Cybersecurity and supported by the Scottish Institute for Policing Research (SIPR), outlines one of the earliest psychological strategies designed to counter AI voice deception, taking a preventative rather than reactive stance on fraud.

AI-generated audio has grown so lifelike that it is now being deployed in high-profile scams — from imitating company executives to authorise multi-million-pound transfers to posing as relatives in staged kidnapping schemes.

The study found that brief explanations about AI’s capacity to accurately reproduce regional accents and speech patterns can meaningfully lower people’s instinct to assume a voice is genuinely human. The project’s lead researcher, Neil Kirk from the university’s Department of Sociological and Psychological Sciences, said that increasing public understanding in this way could significantly curb fraud as synthetic speech technology continues to advance.

Dr Kirk said that fraudsters frequently exploit emotional triggers, such as urgent calls from ‘family members’ in distress or fabricated delivery problems, to rush victims into acting without thinking. When these pressure tactics are paired with AI-generated voices that sound authentic and replicate local accents, detection becomes far more difficult.

Voice-based scams can be tougher to identify than video deepfakes because they depend on a single sensory signal. This is already a serious issue in the UK. Research by Starling Bank found that 28% of UK adults have been approached by AI voice cloning fraudsters. Yet almost 46% are unaware that such scams exist, and only around one in three recognise the typical warning signs.

The researchers noted that victims of deepfake scam calls lose an average of £595 (roughly US$791) per incident, with some cases exceeding £13,000 (roughly US$17,200), according to last year’s Annual Fraud Report from UK Finance.

“AI voice technology is advancing faster than public awareness. If we don’t update people’s expectations now, we risk leaving entire communities vulnerable to scams. Fraudsters are already exploiting these gaps, and the consequences can be devastating. Education is the most powerful tool we have to close that gap, and it is something we can implement quickly and at scale,” explained Dr Kirk.

The study introduces the concept of MINDSET (Minority, Indigenous, Non-standard, and Dialect-Shaped Expectations of Technology): the belief that voice systems cannot handle local or regional speech. This bias makes speakers of underrepresented dialects particularly vulnerable to scams, as they are more likely to believe an AI voice speaking in this way is a real person.

 

Across two experiments with 300 Scottish participants, the researchers found that capability-based messages, which informed participants that AI can authentically replicate Scottish accents and dialects, significantly reduced the bias toward classifying voices as human.

 

Alerts that only pointed out the dangers of AI voice scams proved largely ineffective unless they also explained what modern AI systems are capable of. Although these warnings did not significantly improve people’s ability to distinguish between human and synthetic voices, they did encourage greater caution and reduced the tendency to automatically trust Scottish-accented AI voices as genuine.

Dr Kirk said the results highlight practical opportunities to strengthen fraud prevention: banks, telecom companies, and public awareness initiatives could embed capability-focused messaging into security checks or fraud alerts to better safeguard consumers. He further pointed out that educating people about what AI can do, rather than simply alarming them, may be the most scalable way to boost vigilance. He added, however, that responsibility cannot rest with industry alone; governments and policymakers must collaborate with businesses to roll out coordinated education campaigns that close the awareness gap and enhance public safety.

The study builds on Dr Kirk’s earlier research, published earlier this year, which revealed how persuasive AI-generated voices can be — particularly when replicating regional accents such as Dundonian Scots. That previous work showed that listeners frequently mistook AI-generated speech for real human voices, especially when delivered in familiar local dialects.
