
Social Engineering – The Oldest Trick in the Book is Still the Best Trick in the Book

By Chris Horner, Information Security Consultant, Rebyc Security

In today’s digital environment, security risks come from all sides, and systems are no longer the only targets. Today’s technical controls, when configured correctly, are generally quite good. As a result, attackers tend to focus on end users directly. Their thinking is – why bother defeating technical controls when you can get someone on the inside to open the front gate for you? These are commonly referred to as social engineering attacks, and they are widespread. In fact, the 2023 Verizon Data Breach Investigations Report highlighted a striking trend – 74% of breaches involved a human element such as social engineering.

This attack vector is not new – it’s actually as old as time. We’re all familiar with the Trojan horse story. This is one of the very first recorded attacks that incorporated social engineering. Deceit and trickery have always been around. It’s just over the last few decades that this tactic has been given a name.

At its most basic level, social engineering means using deceit to get someone to do something they shouldn’t do. This type of attack bypasses technical defenses by exploiting human psychology. While tech stacks change and apps come and go, human psychology stays largely the same, and people tend to operate in predictable ways. Typically, an attacker’s goal is to gain unauthorized system access or obtain valid login credentials. Attacks can arrive through multiple avenues – malicious emails, phone calls, or text messages. They can be cleverly disguised using domains and websites that look (at a quick glance) legitimate. Some even impersonate routine requests from trusted colleagues. A determined attacker can put together a very convincing scenario with little technical skill.
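
To make the "quick glance" problem concrete, here is a minimal Python sketch. The reference domain list and the similarity threshold are illustrative assumptions, not anyone’s production rules; it simply flags domains that sit only a character or two away from one an organization actually uses.

```python
# Minimal sketch: flag lookalike domains that are only a character or two away
# from a domain the organization actually uses. The reference list and the
# similarity threshold here are illustrative assumptions.
from difflib import SequenceMatcher

LEGITIMATE_DOMAINS = {"example.com", "example-bank.com"}  # hypothetical

def looks_like(candidate: str, known: str, threshold: float = 0.9) -> bool:
    """Return True if the candidate is suspiciously similar, but not identical."""
    ratio = SequenceMatcher(None, candidate.lower(), known.lower()).ratio()
    return candidate.lower() != known.lower() and ratio >= threshold

suspects = ["examp1e.com", "example-bank.co", "exampIe.com", "unrelated.org"]
for domain in suspects:
    for legit in LEGITIMATE_DOMAINS:
        if looks_like(domain, legit):
            print(f"{domain} closely resembles {legit}")
```

Even this crude comparison catches the substituted "1" and the capital "I" in the examples above; the point is how little separates a convincing fake from the real thing.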

This type of attack generally takes advantage of one of four things:

  • Fear – The fear of missing out (FOMO) or the fear of facing repercussions.
  • Trust – Generally, people want to be helpful to others.
  • Curiosity – An offer of access to confidential information or something that sounds too good to be true.
  • Greed – Offer of an incentive (usually monetary) to take an action.

To understand why these attacks work, it’s important to understand how the human mind works. On the surface, the human brain operates very much like a computer: it takes an input, analyzes it, and produces a response (output) in the form of an outward action. Simmering underneath all of this is our subconscious mind. Studies have shown that thoughts happen first in our subconscious mind, and it’s our conscious mind that needs time to catch up. If something looks out of place or our subconscious senses danger, our conscious mind is activated to evaluate the situation further. Often, we call this a “gut feeling”.

Here’s the key part where social engineering comes into play – if all the details look or sound ‘right’, an attacker can trigger a subconscious compliant response from their target before they even realize what is happening. The very best social engineers are excellent at intercepting this process and triggering this automatic response.

However, a key differentiator between a human brain and a computer is that human brains also run on emotions. Emotional responses can override logic, even when someone “knows better”. We’ve all done it, right? We’ve all looked back on a situation and told ourselves we knew better than to do what we did. This is also why you’ll see so many scams pop up after natural disasters. Remember, people are generally empathetic and want to be helpful to others. Scammers take advantage of this. Combine an attack that triggers an automatic response with an emotional hook and you have all the makings of a seriously effective attack.

Technical controls like a firewall offer a straightforward defense: they either let something in or they don’t. It can seem overwhelming to think about defending against multiple, unpredictable types of attacks coming at multiple people (who can also be unpredictable) in an organization. Even when a company runs a social engineering test, it is typically very boilerplate.

Here’s the typical playbook: send out a phishing email, measure the number of clicks or the number of credentials captured, and put the offending users through some kind of remedial training. I argue this is no longer enough. Organizations need to demand better from their social engineering tests, and it’s up to pentest companies to deliver better. A thorough test needs to be treated like a practice drill. My kids’ school runs fire drills when there is no real fire. The military runs drills in peacetime. And why? So that when the time comes, people recognize the situation and an automatic response for what to do has already been instilled. (This would be a good time to reread the section above on how the human brain and the subconscious work.) The goal here is to instill the responses in your team that you want to see – don’t let attackers do it for you. Use these tests to find the gaps not only in people but also in processes and procedures.

These tests should not just be about getting credentials. With a big enough pool of targets, that honestly becomes a numbers game. If you get user creds from someone – that’s great. The real question is – what can actually be done with them? Use tools like theHarvester or amass to see what assets are discoverable. Try the captured credentials against things like the email platform, the VPN, the intranet, or any other application the users rely on. Look to see whether things like MFA are in place, or whether conditional access policies detect and block the login request. For more advanced organizations, test whether MFA can be bypassed with tools such as Evilginx. Over dozens and dozens of tests, I’ve seen every result on the spectrum. I’ve collected login credentials but found I couldn’t use them from the outside thanks to strict policies and filtering. I’ve also seen cases where creds were collected, we were on the VPN and into the internal network, and Domain Admin was achieved in under 30 minutes. Looking at these two extreme examples through traditional rating methods, both organizations technically ‘failed’ the test. In reality, the debrief reports to each went very, very differently because of the difference in impact to their business.
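
To illustrate the discovery step described above, here is a rough Python sketch that shells out to theHarvester and amass and collects candidate hostnames worth reviewing before any credential replay is attempted. The specific flags and the output parsing are assumptions and will vary by tool version; treat it as a starting point, not a finished workflow.

```python
# Rough recon sketch: enumerate externally discoverable hosts for a target
# domain before deciding where captured credentials might even be usable.
# Assumes theHarvester and amass are installed and on the PATH; the flags and
# the output parsing shown here are assumptions and vary between tool versions.
import subprocess

def discover_assets(domain: str) -> set[str]:
    """Collect candidate hostnames for the target domain from passive sources."""
    hosts: set[str] = set()
    commands = [
        ["theHarvester", "-d", domain, "-b", "crtsh"],  # -d target, -b data source
        ["amass", "enum", "-d", domain],                # passive subdomain enumeration
    ]
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True, check=False)
        for line in result.stdout.splitlines():
            token = line.strip().split()[0] if line.strip() else ""
            # Keep anything that looks like a hostname under the target domain.
            if token == domain or token.endswith("." + domain):
                hosts.add(token)
    return hosts

if __name__ == "__main__":
    for host in sorted(discover_assets("example.com")):
        print(host)  # e.g., mail.example.com, vpn.example.com, intranet.example.com
```

Hosts such as a VPN portal or webmail endpoint surfaced this way are exactly where the "what can actually be done with them" question gets answered.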

Aside from technical controls, here are some ways you can train your team to identify these types of attacks:

  • Establish clear protocols – ensure your team knows how sensitive requests such as remote system access or password resets will be communicated.
  • Validate unfamiliar callers – if there is any doubt, hang up and call back at a known number.
  • Double-check links in emails – for example, by hovering over them to see where they truly go. If in doubt, manually navigate to a known and verified site (a rough sketch of this kind of mismatch check appears after this list).
  • Watch for “External Email” warning banners and be especially wary when these appear on messages that seem to originate internally. Rotate these so they stand out and don’t become stale.
  • Have a process for people to report suspicious emails or phone calls without repercussion, and let people know how an active attack will be communicated throughout the company.
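
Here is the mismatch check referenced in the link-hovering bullet above: a minimal Python sketch that flags links whose visible text claims one domain while the underlying href points somewhere else. The sample HTML and the simple parsing are illustrative assumptions, not a production mail filter.

```python
# Minimal sketch of the hover check: flag links whose visible text looks like
# one domain while the underlying href points somewhere else. The sample HTML
# and the simple parsing are illustrative assumptions, not a mail filter.
import re
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self) -> None:
        super().__init__()
        self._current_href: str | None = None
        self._current_text: list[str] = []
        self.suspicious: list[tuple[str, str]] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href")
            self._current_text = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._current_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href:
            text = "".join(self._current_text).strip()
            href_domain = urlparse(self._current_href).netloc.lower()
            # If the visible text mentions a domain, compare it to the real target.
            match = re.search(r"([a-z0-9-]+\.)+[a-z]{2,}", text.lower())
            if match and match.group(0) not in href_domain:
                self.suspicious.append((text, self._current_href))
            self._current_href = None

html_body = '<p>Reset your password at <a href="https://login.examp1e-support.com">www.example.com</a></p>'
auditor = LinkAuditor()
auditor.feed(html_body)
for text, href in auditor.suspicious:
    print(f"Display text '{text}' actually points to {href}")
```

Many mail filters perform a similar comparison automatically; the manual hover check is simply the human version of the same idea.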

Social engineering is a problem that is not going away anytime soon. I know it’s fashionable to make layer 8 jokes and call people the weakest link. I’ve never subscribed to that theory. With proper controls and training, people can actually be the biggest ally in spotting and stopping these attacks. Regardless, if all it takes is a phone call or email to topple an organization, the end users there are not the main problem. It starts at the top. Security posture is not tied to one single component. I always say it takes a village. Let’s raise the bar for how we instill a security culture in our people and our processes. Test those processes, repeatedly. All of this combined with redundant technical safeguards can help organizations withstand these attacks, making them stronger and more resilient.
