Your Digital Shadow Is Someone Else’s Weapon

Your digital life does not belong solely to you. Every photo, post and public moment offers a glimpse into the lives of your friends and family, too. Rather than fearing this, you should take responsibility and be mindful of the digital world you inhabit.
You receive a phone call. It’s your mother. She is crying and hysterical. She has been in an accident and urgently needs money to pay a tow truck driver to come and help her, as the one from her insurance company is too far away. Her passenger has been injured and she needs to get to hospital immediately. The call drops. You phone back. She doesn’t answer, so you transfer the money.
But your mother was never in danger. In fact, that wasn’t your mother at all. Her voice had been cloned using just three seconds of audio taken from a public Facebook video of her Christmas holiday.
This is not hypothetical. It happened to one of my family members, and I’m sure you know someone with a similar story. What makes it more unsettling is not just that it happened, but how little information was needed. There was no data breach. There was no sophisticated infiltration. All it took was a social media account, a publicly available video and a few minutes of someone’s time.
You are already being profiled
Before we delve into technical terms, let’s discuss what a digital footprint actually is. Many people significantly underestimate their own footprint. Put simply, it is all the information you leave behind whenever you browse, post, shop or appear on someone else’s contact list. Your ‘active footprint’ is information that you deliberately post or share in the digital sphere. In contrast, your ‘passive footprint’ comprises information collected automatically, such as app location data, cookie tracking and GPS coordinates embedded in photos.
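To make the passive footprint concrete, here is a minimal self-audit sketch in Python. It assumes the Pillow imaging library (pip install Pillow), and the file name is a placeholder; if the script prints coordinates for one of your photos, anyone who downloads that photo can read the same location data.

```python
# Passive-footprint check: does this photo still carry embedded GPS data?
# Assumes the Pillow library; 'holiday_photo.jpg' is a placeholder file name.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_metadata(path: str) -> dict:
    """Return any GPS tags found in the image's EXIF data, keyed by name."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the EXIF pointer to the GPS sub-IFD
    return {GPSTAGS.get(tag_id, tag_id): value for tag_id, value in gps_ifd.items()}

coords = gps_metadata("holiday_photo.jpg")  # placeholder path
if coords:
    print("This photo reveals where it was taken:", coords)
else:
    print("No GPS metadata found.")
```

Most large social platforms strip this metadata on upload, but files shared directly, by email, cloud link or messaging app, usually keep it.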
This distinction is important because AI does not require a data breach to profile you. AI tools can automatically scan company websites, LinkedIn profiles, and public social media accounts to compile detailed dossiers on any of us. This includes your current projects, communication style, relationships, and routines. This information can then be used to carry out highly personalised attacks. They are personal by design.
In May 2024, for example, WPP’s chief executive, Mark Read, was targeted by this very type of attack. The attackers created a fake WhatsApp account using a publicly available image of Read, arranged a Microsoft Teams meeting, and used an AI voice clone of Read alongside YouTube footage to persuade a senior agency leader to set up a new business and hand over money and personal details. Read himself confirmed the attempt, though it was ultimately unsuccessful. The point is not that it failed. The point is that it could be carried out using publicly available information about a named, visible real person. Most of us are facing this level of exposure and not taking it seriously enough.
The number that should alarm you
Perhaps you’re already familiar with the concept of a digital footprint and consider yourself to be fairly cautious. Perhaps you’re tech-savvy enough to think you could spot a scam. But what does the data say about that confidence?
GenAI-enabled scams increased by 456% between May 2024 and April 2025. Over 82% of phishing emails are now created with the help of AI. Global scam losses reached one trillion dollars in 2024 alone. One in three consumers believes they could identify an AI-generated scam. However, 27% of those targeted were successfully defrauded, marking a 19% increase from 2024 to 2025. Clearly, confidence and capability are not the same thing.
IBM X-Force research provides further concrete evidence of this. In a controlled A/B test, the IBM team sent AI-generated and human-crafted phishing emails to over 800 employees at a global healthcare organisation. Using just five prompts, the team produced phishing emails in five minutes that achieved an 11% click-through rate, compared with 14% for emails crafted by experienced human social engineers who spent sixteen hours on the same task. The click rate, however, is not the main point. What matters is that a task which previously took a skilled team nearly two full working days (sixteen hours, or 960 minutes) now takes AI five minutes: a reduction in attacker effort of roughly 190 times. This is not a marginal efficiency gain. It represents a structural shift in how attacks can be scaled up, personalised and deployed, not just against executives but against ordinary people too.
However, these numbers do not tell the whole story: scale and speed are only part of it. These attacks are not effective purely for technical reasons. They work because your digital footprint gives attackers something far more powerful than data: the raw material to engineer a psychological response in you. Your digital footprint is the input. Your neurobiology is the mechanism. What follows is an explanation of how that mechanism works.
Why your data becomes a psychological weapon
Although it may be tempting to throw away your devices and delete your social accounts, this is not the answer. Attackers do not target your hardware. They target your neurobiology, which is a fundamentally different issue.
Psychologist Daniel Goleman coined the term ‘amygdala hijacking’ to describe the moment when an intense emotional response overrides logical thinking. Research has shown that when people are exposed to fear or distress, the frontal cortex effectively shuts down while the amygdala dominates processing, suppressing critical thought. Oxytocin, sometimes called the ‘moral molecule’, plays a central role in trust and is routinely exploited in social engineering. When trust and rapport are established, oxytocin is released. Combined with dopamine, the brain’s reward chemical, this creates a neurochemical state in which a person becomes willing to take actions they would normally refuse.
This is not abstract. Return for a moment to the opening scenario: the call that sounded exactly like your mother, her familiar voice breaking with distress, giving you the sense that someone you love needs you right now. The resulting response (the racing heart, the impulse to act immediately) was not a failure of intelligence. It was amygdala hijacking in real time, triggered by a voice clone trained on just three seconds of publicly available audio. By fine-tuning large language models with content from a target’s social media and their contacts’ accounts, attackers can create messages specifically designed to trigger that exact response by elevating oxytocin through familiarity and dopamine through urgency. The sense of pressure you feel when a message demands immediate action is no coincidence. It is the mechanism. This is why you never receive a phishing message that says, ‘Click this link tomorrow.’ Between now and then, your brain will return to a calm state and you will resume critical thinking. Urgency eliminates that window entirely, and that is precisely the point.
What AI is actually doing with your data
Attackers using AI are not acting opportunistically. They are being methodical. There are systems and processes in place, and the attacks are targeted rather than random.
As mentioned in the introduction, AI capabilities are used to gather and analyse data from previous breaches, social media profiles and other public sources in order to build a comprehensive digital profile of the target. This allows for a degree of personalised fraud that was previously unachievable.
The voice cloning aspect of this deserves particular attention. McAfee’s research found that just three seconds of audio are enough to create a voice clone with 85% accuracy. By late 2025, voice cloning had crossed what researchers call the ‘indistinguishable threshold’. Research from Queen Mary University of London has confirmed that AI voice technology has now reached a stage where the average listener cannot reliably distinguish between a deepfake voice and a genuine human voice. We conducted an internal test at Cyber Dexterity with the full knowledge and consent of our colleagues, and the results were, frankly, unsettling.
In practice, this means that the voice of someone you have trusted your entire life can now be convincingly replicated from just a few seconds of publicly available audio. The instinctive process of verifying a familiar voice is no longer reliable.
What you can actually do
Understanding the threat is only useful if it prompts a change in behaviour. So, what does this look like in practice?
The most powerful behavioural intervention is the pause. The brain needs only 30 to 60 seconds to calm down after being triggered, and the entire architecture of urgency-based scams depends on denying you that window. Reclaiming it requires no technical skill. It is a decision.
Beyond the pause, apply the following with intention:
- Audit your digital footprint honestly, focusing on what is currently searchable, not what you think is out there. Google yourself; a short script after this list shows one way to make that search systematic. Check what your public social profiles reveal about your routines, relationships and location patterns.
- Agree on a code word with close family or colleagues to be used in emergency situations. A voice clone that is perfect in every other way will fail immediately if it cannot produce an unknown word.
- Treat urgency as a red flag, not a prompt. Any message that evokes fear, obligation or time pressure should prompt you to slow down, not speed up. This friction indicates that your cognitive defence system is functioning correctly.
- Separate your email addresses by purpose: use one for financial and professional accounts, and another for subscriptions and public-facing correspondence. The less cross-contamination there is, the smaller your attack surface.
- Delete any accounts that you no longer use. Dormant accounts are live data sources. They exist, they can be scraped, and they do not work in your favour.
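To make the first point in this list practical, here is a minimal sketch of a systematic self-search. The name, email address and usernames below are placeholders; substitute your own, then run the printed queries to see what an attacker building a dossier would find.

```python
# Footprint self-audit: build the search queries an attacker would run on you.
from urllib.parse import quote_plus

name = "Jane Example"                    # placeholder: your full name
email = "jane@example.com"               # placeholder: your main address
handles = ["jane.example", "jexample"]   # placeholder: usernames you reuse

queries = [
    f'"{name}"',                         # exact-name mentions anywhere
    f'"{email}"',                        # pages that expose your email
    f'"{name}" site:linkedin.com',       # professional profile exposure
    f'"{name}" filetype:pdf',            # documents that name you
] + [f'"{h}"' for h in handles]          # reused handles across sites

for q in queries:
    print(f"https://www.google.com/search?q={quote_plus(q)}")
```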
Finally, stop saying ‘I have nothing to hide’. This is entirely the wrong frame of mind. The issue is not what you are concealing. It’s the information you’re freely handing over that could be used to impersonate or manipulate the people who trust you. Your data represents more than just your own risk. It represents theirs, too.
References
- Goleman, D. (1995). Emotional Intelligence: Why It Can Matter More Than IQ. Bantam Books.
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
- McAfee Labs. (2023). Beware the Artificial Impostor: AI Voice Cloning and the Rise of Voice Scams.
- Queen Mary University of London. (2025). AI-Generated Voices Now Indistinguishable from Real Human Voices. Published in PLOS One.
- IBM X-Force / Stephanie Carruthers. (2023). AI vs. Human Deceit: Unravelling the New Age of Phishing Tactics.
- TRM Labs. (2025). AI-Enabled Fraud: How Scammers Are Exploiting Generative AI. (Source for 456% GenAI scam increase, May 2024 to April 2025.)
- KnowBe4. (2025). Phishing Threat Trends Report. Via Security Magazine. (Source for 82.6% of phishing emails containing AI-generated content.)
- Sift. (2025). Q2 2025 Digital Trust Index: AI Fraud Data and Insights. (Source for global scam losses reaching $1 trillion in 2024 and the 1-in-3 consumer confidence statistic.)
- The Guardian. (2024). Deepfake Scam Targets CEO of World’s Biggest Ad Firm. (WPP / Mark Read incident.)
- Polydorou, B. (2025). Why People Still Fall for Cyber Scams. Cyber Dexterity.
- Potgieter, T. (2025). Understanding Cognitive Overload and Cyber Deception. Cyber Dexterity.
Taryn-Lee Potgieter – Head of Brand Growth | Psychology student at SACAP