
How Social Engineering Attacks Trick Even Experienced Employees

We are seen as the weakest link when we could be the first line of defence.

This sentiment is echoed across countless articles, books, and myriad websites by every cybersecurity specialist who can get airtime in a congested and widely observed space.

Here is the thing though: are they correct?  

In a recent social engineering attack, an experienced treasury accountant at a reputable South African corporation made two separate payments totalling twenty-two million rand on the strength of WhatsApp instructions from someone they believed was their CFO.

The details of this latest social engineering attack reveal an elaborate, well-executed con that leaves me wondering whether we are not, in fact, the main problem in an ever-increasing wave of cybercrime. When it comes to protecting our data, our personal information, our company’s sensitive material, and ultimately our finances, how can we not see ourselves as the ones not only opening the door for cybercriminals and threat actors, but standing guard while they help themselves to the goods?

We are living in an era of synthetic threats, of Hollywood-grade deepfakes where Yoda and Batman can sit on opposite sides of a table on a beach in Havana, drinking rum, smoking cigars, and contemplating the colonisation of Mars, all in 4K detail and Dolby Digital sound. The “Hollywood” label reflects the level of realism we associate with a big-budget film production. The attack above, while successful, was by no means as sophisticated.
 
While it lacked the synthetic authenticity of the deepfakes that most teenagers with a free account and a few hours to kill can now create on the most basic of devices, it was still good enough to convince an experienced senior accounts person. They made two payments of 11 million rand each to an account that, until that moment, had not been verified by any system, bypassing numerous business processes that most accounting professionals, guided by good corporate governance, would recommend others follow.
 
So what went wrong?  

Years of operating within strict, top-down environments where questioning authority is discouraged and delays cost money have conditioned employees to respond to urgency and seniority without hesitation. It’s precisely this conditioning that deepfake attackers exploit. Through emotional manipulation and fabricated authority cues, they trigger the same instinctive compliance that the business culture has reinforced, pressuring individuals to bypass the very processes designed to protect them. The threat isn’t that employees lack awareness of procedures; it’s that they can be psychologically manipulated into believing the situation demands they override them.

The Sword of Damocles hangs heaviest over those who sit at the intersection of finance, technology, and access: the gatekeepers of sensitive transactions and critical tech systems. For our accountant, the weight of that blade was all too real. Comply with what appeared to be a direct instruction from a senior executive, or risk appearing incompetent, hesitant, and unfit for the role. In any serious corporate environment, either outcome carries consequences beyond mere embarrassment; we are talking about the kind of moment that quietly ends careers.
 
The message came through WhatsApp, apparently from the CFO. The profile picture looked right. The language sounded right. A lawyer was copied in requesting that the accountant sign an NDA, lending the request legal weight. There was an acquisition in progress, sensitive, urgent, not for wider circulation. Collectively, these engineered conditions created a scenario that appeared to sit within the accepted boundaries of exceptions management, one where bypassing established business process felt not like a failure of judgement, but like proactive delivery of an urgent and legitimate request. 

Trust heuristics, authority bias, manufactured urgency, and a convincing web of formal process worked in combination to ensure that the victim expedited two payments directly to the threat actors. 
 
This is where our story takes a turn.  

The protagonist of our story, while overwhelmed by emotion and susceptible to the biological wiring that social engineering abuses, had time to review events after the payments and remained unconvinced of their legitimacy. This led them to investigate, using the Truecaller app to verify the CFO’s contact information.

While prevention is far better than cure, in this instance curiosity and our “sixth sense” won the day: a speedy escalation to management and the authorities led the bank to return one of the payments, while the other is being disputed in court.
 
You might think of our protagonist accountant as the guilty party in all of this. However, we must not discount the courage and quick thinking that led to the reporting of the event and the return of half of the funds, with the possibility that the balance will be recovered as well.

If anything, it prompted the question: how do we encourage people to listen to their intuition and gut more often, instead of overriding them with logic and fear?

This case is less about someone being careless and more about how the brain behaves under pressure and how it can be further manipulated by employing the right social engineering tactics.  

 
This is exactly why we teach one simple behavioural sequence: 

When authority, urgency, and secrecy collide, our brains switch into fast processing mode, or as Daniel Kahneman puts it, “fast thinking.” It stops us analysing deeply and starts pattern matching:

  • “It looks like my CFO.”  
  • “It sounds legitimate.” 
  • “I don’t want to look foolish.” 
  • “What will happen if I delay this?” 

We call these mental shortcuts heuristics. We use them every day to move quickly and make decisions without burning mental energy.

The problem is this: 

Attackers understand your shortcuts, and they don’t concern themselves with trying to hack your system first. They hack the moment, creating just enough realism to simulate authenticity, and then they apply pressure. Speed replaces scrutiny, emotion replaces verification, the victim is isolated, and synthetic trust is established.

This little engram is our Cognitive Circuit Breaker. It acts as a behavioural firewall for moments when we are unsure, a little emotional, or bewildered. Here is how it works.
 

S — Slow Down 

Urgency is the accelerant in most fraud cases, and when it enters the conversation, we should automatically see it as a red flag.  

Slowing down is imperative; even a 30-to-90-second pause can restore clarity and offer a fresh perspective.

T — Think Critically

Consider what the request is asking for. Does it involve money, sensitive data, passwords or system access? Ask yourself whether the request makes sense in the given context.

Daniel Kahneman’s work on System 1 and System 2 thinking reminds us that slowing down and engaging our analytical mind, rather than relying on fast, intuitive responses, is exactly what pressure-based manipulation is designed to prevent.

O — Observe the Pattern 

When we are caught up in the wave of events, wondering whether what is happening is real, we should try to spot patterns in the messaging or in the methods the other party is applying against us.

For instance, authority combined with secrecy and urgency is a well-known and often discussed social engineering tactic. Add to this a move away from business processes and verification, and requests that isolate us from our support structure, and we have enough information to start asking better questions and verifying through trusted channels.

P — Prove It 

No one could possibly memorise the countless scams cybercriminals use to commit social engineering and cybercrime. Instead of trying to recognise their methods, we should focus on forcing verification. Using channels outside the existing conversation to verify elements of the request or the people involved, while following your organisation’s approved workflow, will always be in everyone’s best interests, except the attackers’. Verification protects people; we should not see it as a confrontation.
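The “force verification” idea above can be sketched as a simple decision gate. This is a hypothetical illustration, not a system from the case: the names (`PaymentRequest`, `APPROVED_CHANNELS`) and the specific red-flag checks are my own assumptions about how an organisation might encode the authority-urgency-secrecy pattern into its payment workflow.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount_zar: float
    beneficiary_verified: bool   # beneficiary exists in a vetted master list
    requested_via: str           # e.g. "erp", "email", "whatsapp"
    urgency_claimed: bool        # "this must happen today"
    secrecy_claimed: bool        # "don't discuss this with anyone"

# Illustrative assumption: payments may only originate in the approved workflow.
APPROVED_CHANNELS = {"erp"}

def red_flags(req: PaymentRequest) -> list[str]:
    """Return the STOP-style red flags raised by this request."""
    flags = []
    if not req.beneficiary_verified:
        flags.append("unverified beneficiary")
    if req.requested_via not in APPROVED_CHANNELS:
        flags.append(f"out-of-process channel: {req.requested_via}")
    if req.urgency_claimed:
        flags.append("manufactured urgency")
    if req.secrecy_claimed:
        flags.append("secrecy / isolation")
    return flags

def requires_out_of_band_check(req: PaymentRequest) -> bool:
    """Any single red flag forces verification via a trusted, separate channel."""
    return bool(red_flags(req))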

Security culture starts with leaders communicating these principles throughout the organisation and empowering their people to follow business processes and verify when in doubt. We are no longer only defending against malware. We are defending against synthetic threats that challenge how we see the world and engage with it.
 
Isolation and urgency are how the enemy attempts to divide and manipulate us into situations where our biology and our inherent “good nature” fail us, turning us into the weakest link.  

 
However, if leadership empowers us, if we stay informed and connected through clear communication and business processes, and if we learn to rely on verification by default whenever something deviates from those processes or threatens to disconnect and divide us, we will become the first line of defence: adaptive, intuitive, flexible, and more capable than any firewall or security system.
 
And should we ever find ourselves with the Sword of Damocles hanging over our heads, isolated and in a compromising position, remember that our most powerful defence isn’t an update or a system. It is the four letters we should memorise and on which we can always rely whenever a call to action comes our way.

Go on. Try it… S.T.O.P   

References: 

Arnsten, A. F. T. (2009). Stress signalling pathways that impair prefrontal cortex structure and function. Nature Reviews Neuroscience, 10, 410–422. https://www.nature.com/articles/nrn2648

Chaiken, S. (1980). Heuristic versus systematic information processing and the use of source versus message cues in persuasion. Journal of Personality and Social Psychology, 39(5), 752–766. https://doi.org/10.1037/0022-3514.39.5.752

Cialdini, R. B. (2001). Influence: Science and practice. Allyn & Bacon.

Dhamija, R., Tygar, J. D., & Hearst, M. (2006). Why phishing works. In Proceedings of CHI 2006. ACM. https://dl.acm.org/doi/10.1145/1124772.1124861

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Vishwanath, A., Herath, T., Chen, R., Wang, J., & Rao, H. R. (2011). Why do people get phished? Testing individual differences in phishing vulnerability within an integrated, information processing model. Decision Support Systems, 51(3), 576–586. https://doi.org/10.1016/j.dss.2011.03.002


Basil Polydorou, Head of Learning Solutions | BSc Cyber Psychology Candidate
