Recognising and resisting social engineering attacks

Lloyd Webb, VP Solutions Engineering EMEA at SentinelOne, explains how people should defend against social engineering attacks.

The cyber security spotlight tends to shine brightest on the latest malware threats and headline-making data breaches, but there’s a subtler threat that can be just as damaging – social engineering attacks. Unlike their tech-centric counterparts, social engineering attacks exploit human vulnerabilities rather than technical ones.

Bad actors take advantage of people’s innate tendency to trust, comply, and share information, and use psychological manipulation to make their attacks exceptionally effective. Phishing – fraudulent emails, text messages, phone calls or websites – is a type of social engineering attack designed to trick users into downloading malware or sharing sensitive information. Phishing attacks are common, with 79% of businesses reporting a phishing attack in the past twelve months.

As businesses increasingly rely on interconnected systems and digital communication, they are exposed to a heightened risk of social engineering attacks. To mitigate this growing threat, organisations need to understand the psychology and tactics behind social engineering campaigns and recognise which psychological triggers are utilised by attackers. By mapping out the motivations and tactics used by attackers to exploit users’ cognitive biases and emotions, business leaders can learn how to resist attacks and stay one step ahead of cyber criminals.

Social engineering tactics

Social engineering attacks are multifaceted and ever-evolving, making them a constant threat to individuals and businesses. Understanding the fundamental techniques and tactics utilised by adversaries is critical for organisations to recognise and mitigate the threats they pose in the short and long term.

Phishing

Phishing is a common tactic used in social engineering campaigns. Email phishing involves phoney emails crafted to appear to come from a legitimate person or organisation.

A phishing email might purport to be from a person close to the recipient asking for help, or from a bank asking the recipient to verify access details. The goal is for the recipient to reveal sensitive information or click on a malicious link.
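One tell-tale sign of such emails is a link whose visible text names one domain while the underlying href points somewhere else. As a minimal sketch of that idea (all domains below are invented examples, and real email-security tools use far richer signals), the following Python snippet compares the two:

```python
# Sketch: flag links whose visible text shows one domain but whose
# href actually points to a different one - a classic phishing tell.
# Domains below are made-up examples for illustration only.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from an HTML email body."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []  # list of (href, visible text)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None


def mismatched_links(html_body):
    """Return links whose visible text names a different domain
    than the one the href really resolves to."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    suspicious = []
    for href, text in auditor.links:
        real_domain = urlparse(href).netloc.lower()
        # Does the visible text look like a URL for another domain?
        shown = urlparse(text if "://" in text else "https://" + text).netloc.lower()
        if shown and real_domain and shown != real_domain:
            suspicious.append((text, real_domain))
    return suspicious


email_html = (
    '<p>Please <a href="https://evil.example.net/login">'
    "www.yourbank.example.com</a> to verify.</p>"
)
print(mismatched_links(email_html))  # the displayed bank domain hides evil.example.net
```

Hovering over a link before clicking performs the same check manually: the status bar reveals the real destination regardless of what the anchor text claims.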


Spear phishing

This is a highly targeted form of phishing, where attackers personalise fraudulent communications to a specific victim. These personalised messages are much harder to recognise as phishing, and they may also replicate a company’s internal documents or communications quite convincingly, making the fraud even harder to detect.

These attacks often target individuals with privileged administrative access within an organisation or those overseeing financial resources.

Pretexting

Pretexting attacks have a fabricated scenario at their core, typically involving a person of authority or a valid reason for requesting sensitive information. This could include attackers impersonating IT personnel in need of resolving technical issues or posing as a senior leader seeking information to support a valued customer.

Baiting

These types of attacks exploit the innate human desire to get a good deal, whether it is the promise of a lucrative job or an offer of free software, music, or movie downloads.

If the victim bites, the download will deliver malicious software, which compromises the device and potentially spreads through the network.

Multi-channel attacks

As the name suggests, multi-channel attacks use a combination of communication channels to deceive individuals or organisations. Apart from the typical emails, phone calls and texts, malicious actors may include social media and even in-person interactions.

Creating this manipulative illusion of legitimacy and credibility makes it extremely difficult for the victim to recognise the fraud.

Challenging cognitive bias

Regardless of which tactic the actors use, they rely heavily on exploiting the intricacies of the human psyche and cognitive bias – people’s tendency to jump to conclusions based on shortcuts, emotions, or social influences. This can lead to mistakes in judgment and provide attackers with opportunities for manipulation.

Developing an awareness of which biases cyber criminals are playing on can help overcome the pitfalls of bias. Some of these are:

  • Confirmation bias: Social engineers present false evidence that aligns with targets’ existing beliefs, confirming those beliefs and winning trust
  • Recency bias: Attacks are timed to echo recent experiences, which victims instinctively give more weight, making them less vigilant
  • Overconfidence bias: Cyber criminals encourage targets to overestimate their own abilities and trust their judgment, leaving them vulnerable

Spotting common threats and pitfalls

Whatever the type of attack, they all share a common objective: skilfully manipulating targets into giving up the information the attackers are after, while pulling on the strings of human emotions, cognitive biases, and social dynamics.

One helpful approach to overcoming these orchestrated attempts is employee training and awareness programmes. After attending such training sessions, individuals are more likely to recognise the red flags of social engineering schemes and resist an attack.

Here are six of the most common pitfalls to look out for:

  1. Unusual requests: Probably the top warning sign to remember is out-of-the-blue requests for sensitive information, money, or assistance. Catching someone off guard works, and cyber criminals know that all too well.
  2. Pressure points: Creating a sense of urgency and pressure to act quickly is a commonly used tactic. Adversaries may claim there is imminent danger, compromised accounts, or potential financial losses that require immediate attention. Such high-pressure and high-stakes scenarios are designed to override rational thinking and encourage hasty actions.
  3. Unverified sources: Exercise caution if a request or communication comes from an unverified or unfamiliar source. Verifying the sender through secondary means, such as confirming the identity or the nature of the matter with the person or company directly, helps keep impersonators at bay.
  4. Clumsy content: Poor spelling and grammar or unusual language are signs of a fraudulent message. Bad actors often don’t pay close attention to the correct use of spelling and grammar, and their negligence can be used to spot deception.
  5. Exploiting emotions: One of the most commonly used tools in a social engineer’s arsenal is emotional manipulation. Messages that provoke strong emotional reactions – such as excitement, fear, or sympathy – can cloud judgment, making people more susceptible to manipulation.
  6. Requests for sensitive information or credentials: Potentially the most obvious red flag is an unsolicited request for sensitive information or login credentials. Genuine contacts would not ask for sensitive details without a valid reason, so maintaining a healthy suspicion towards unsolicited requests for sensitive data is a key defence against scams.
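Several of the pitfalls above lend themselves to simple pattern checks. As an illustrative toy only (the keyword lists and weights are invented for this sketch; commercial email-security products use far richer signals), a message can be scored against a few of these warning signs in Python:

```python
# Toy heuristic scoring a message against a few of the red flags above.
# Keyword lists and weights are invented for illustration, not a real
# detection model.
URGENCY = {"urgent", "immediately", "act now", "within 24 hours"}
SENSITIVE = {"password", "login credentials", "bank details", "verify your account"}


def red_flag_score(message, sender_known=False):
    """Count simple social-engineering warning signs in a message."""
    text = message.lower()
    score = 0
    score += sum(1 for phrase in URGENCY if phrase in text)    # pressure to act fast
    score += sum(2 for phrase in SENSITIVE if phrase in text)  # asks for credentials
    if not sender_known:                                       # unverified source
        score += 1
    return score


msg = "URGENT: verify your account immediately or lose access."
print(red_flag_score(msg, sender_known=False))  # several flags trip at once
```

The point mirrors the article’s observation: attackers rarely rely on one lever alone, so multiple red flags tend to appear together, and a trained reader (or even a crude filter) can spot the cluster.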

To the trained eye, social engineering campaigns are littered with red flags, as bad actors often use not just one but a combination of methods to increase their likelihood of success. Periodic training helps people recognise and resist social engineering, which in turn helps to bolster cyber security overall.

AI innovation in social engineering

In the context of social engineering schemes, the rapid rise of generative Artificial Intelligence is a significant cause for alarm. Adversaries can leverage AI innovation to elevate social engineering campaigns, introducing more sophisticated messaging and eliminating some of the red flags that currently give away their malicious intent.

Additionally, automated data collection and AI-powered content creation can further personalise and enhance the quality of fraudulent messaging and more successfully manipulate victims.

Furthermore, advancements in deepfake technology have introduced a new avenue for social engineering attackers. By harnessing Machine Learning (ML) algorithms to create highly realistic images, audio, and video, adversaries can more easily deceive viewers into thinking they are authentic. This is particularly concerning given the potential for deepfakes to allow attackers to impersonate high-profile individuals, such as senior leadership or government authorities, for illicit gain.

Defending against social engineering attacks

In the face of social engineers preying on the human element to infiltrate corporate networks, safeguarding against these threats requires a concerted effort from businesses.


Education and vigilance among employees play pivotal roles in mitigating the risk of falling victim to social engineering attacks. Maintaining a healthy dose of scepticism, especially when something appears unusual, is imperative to thwarting potentially critical attacks.

But because people are human and make mistakes, organisations cannot rely solely on training. It’s vital to establish a proactive cyber security posture: one that integrates continuous threat detection, response capabilities, and autonomous threat hunting across all of an organisation’s attack surfaces.

By training personnel to spot social engineering attacks and fortifying cyber security measures, business leaders can actively reduce their cyber vulnerability and protect operations and sensitive data.
