Let's Dish the Scoop on Deepfakes: The New Age of Voice Manipulation Scams
Tomsk, May 10 – RIA Tomsk. Scammers are increasingly employing deepfake technology to pull off their nefarious tricks. With AI they can fake voices and create digital doppelgängers convincing enough to fool even the most cautious among us. So how do you stay one step ahead and avoid falling victim to these deceitful schemes? Here's the lowdown from RIA Tomsk.
How Do They Fake It?
Impersonation 2.0. Scammers use deepfakes to impersonate high-ranking figures – CEOs and other executives – to dupe employees into handing over money or confidential data.
"Criminals never rest when it comes to perfecting their scams. Our daily tally includes cybercrime cases, some of them involving AI tools such as voice manipulation – for example, voice alteration or pitch shifting," explained Alexei Vozhov, deputy prosecutor of the Soviet District of Tomsk.
Chat Rooms Gone Rogue. Turns out pensioners aren't the only target anymore. With deepfakes, anyone – young or old, rich or poor – can become a potential victim[1].
But How Do I Spot a Fake?
Artificial Pauses and Weird Intonation. An artificial voice might get the pronunciation wrong or make strange pauses where a human wouldn't dream of doing so. The conversation might sound "flat" or too emotionless[1].
Pay Close Attention to Pauses within words or between syllables. Also, be wary of a monotonous tone or, on the contrary, an overly emotional delivery.
Vocal Modulation Gone Wrong. Sometimes the generated voice will "drift" in pitch or speed, especially during long phrases, and a sentence may start or end abruptly. If the voice seems to "shiver," speeds up or slows down unnaturally, or sentence endings sound strange, it's time to feel a bit uneasy[1].
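That warning sign – a voice whose pitch "sails" or wanders during long phrases – can be roughly illustrated in code. The following is only a toy sketch on synthetic tones, not a real deepfake detector: it estimates per-frame pitch from sign changes in the waveform and reports how far the pitch strays from its average. All function names and the drift measure here are invented for illustration.

```python
import math

SR = 8000        # sample rate (Hz)
FRAME = 0.1      # analysis frame length (seconds)

def synth(freqs):
    """Synthesize a tone whose pitch follows freqs, one value per frame."""
    samples, phase = [], 0.0
    n = int(SR * FRAME)
    for f in freqs:
        for _ in range(n):
            phase += 2 * math.pi * f / SR
            samples.append(math.sin(phase))
    return samples

def frame_pitches(samples):
    """Crude per-frame pitch estimate from the number of sign changes."""
    n = int(SR * FRAME)
    pitches = []
    for i in range(0, len(samples) - n + 1, n):
        chunk = samples[i:i + n]
        changes = sum(1 for a, b in zip(chunk, chunk[1:]) if (a < 0) != (b < 0))
        pitches.append(changes / 2 / FRAME)  # two sign changes per cycle
    return pitches

def pitch_drift(pitches):
    """Largest relative deviation of frame pitch from the mean pitch."""
    mean = sum(pitches) / len(pitches)
    return max(abs(p - mean) for p in pitches) / mean

steady = synth([200] * 20)                                       # steady 200 Hz tone
wobbly = synth([200 + 50 * math.sin(i / 2) for i in range(20)])  # pitch "sails"

print(round(pitch_drift(frame_pitches(steady)), 3))  # close to 0
print(round(pitch_drift(frame_pitches(wobbly)), 3))  # noticeably larger
```

Real speech is far messier than a sine tone, which is why production tools use proper pitch trackers and trained models – but the idea is the same: a natural voice holds pitch more steadily within a phrase than a badly generated one.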
Weird Answers to Questions. An AI won't understand the nuances of a spontaneous question and may respond with canned answers or strange responses[2].
Want to Catch a Catfish? Ask about personal details. Inquire about a pet, a school teacher, or a nostalgic memory.
Responses During Interruptions. If you interrupt an AI in the middle of a sentence, it might continue talking as if nothing happened or abruptly shut up. A human would typically pause or ask what you wanted[2].
Artificial Sound Quality. Even if the voice sounds realistic, the recording might be just a tad off: too clean, lacking background noise or natural breathing sounds[2].
Listen for Errors in Speech: incorrect stresses, strange turns of phrase, missing idiomatic expressions, etc.
If a Video Drops, pay attention to facial expressions and eye movements. The face might appear slightly off-sync, and the facial expressions might lag just behind the voice[2].
Remember to watch out for:
- the lack of natural micro-emotions on the face;
- inconsistencies between the video and the audio, when the voice and facial expressions are slightly out of sync.
What Should I Do If I Suspect a Scammer?
Call Back the Person Yourself. Use a number you recognize.
Ask Unexpected Questions. Use those surprise questions to trip the scammer up.
Use Video Calls: It's tougher for scammers to generate deepfakes in real-time[3].
Take Your Time: If they're pressuring you to act fast or transfer funds, it's a red flag[3].
"Following basic digital and financial hygiene practices will help you avoid financial losses from cyber scams: never share your banking details or codes received via messages; critically evaluate calls from unknown numbers, even those promising easy earnings; don't install any software at the scammer's request, but instead load your smartphone with your own anti-virus and anti-spam software", advised Vozhov[3].
Sources:
[1] VentureBeat (2021). Deepfakes in Phishing Scams Are Now Happening at Scale, Study Finds
[2] Forbes (2020). When AI Call Centers Go Bad: How To Tell If The Person On The Other End Really Is A Bot
[3] NBC News (2021). Deepfakes, Crypto and Stolen Identities: A New Age of Online Scams
[4] Washington Post (2021). These Deepfakes Aren't Trying to Fool You. They're Here to Sell You a Car
[5] The Guardian (2019). Deepfakes Could Be 'Game Changer' for Hackers and Trolls Unless Tech Giants Act Faster