Technology Will Not Save You: Human Psychology and Social Engineering in the Face of AI

Cybercrime is increasing year on year, with attacks becoming ever more sophisticated and bold. News of cybercriminals targeting yet another company, stealing its data and extorting it through ransomware, is now commonplace. In the technological arms race that is cybersecurity, and with AI entering the scene, defenders are working overtime to keep attackers at bay and your data and devices secure. But amid all this technological progress, one major factor still lags behind: human psychology, something cybercriminals exploit through social engineering attacks. Unfortunately, there is no security update or patch on the horizon. But through education, training and testing, you and your organisation can become resilient enough to withstand even the most insidious attacks.

Hacking the human, not the machine

Around 8 out of 10 cyberattacks involve some form of social engineering. Social engineering differs from other cyberattacks in that it relies on human error rather than on vulnerabilities in systems. It involves manipulating individuals into divulging confidential or personal information that may then be used for fraudulent purposes. Social engineering exploits psychological manipulation and natural human tendencies, such as the desire to be helpful or to defer to authority. These attacks often create a sense of urgency, pressuring targets into acting quickly without questioning the legitimacy of the request.

We have all been targeted by social engineering attacks, be it through phishing emails or that suspicious phone call (vishing) promising a money transfer if you only share your bank details. While some of these attacks are easy to see through, others are masterfully put together and can have grave consequences for the target if they succeed.

Take the 2023 attack on MGM Resorts International, for example. Using public information gathered on LinkedIn, the attackers impersonated employees of a supplier, called the IT helpdesk and tricked it into giving them high-level access to the company's systems. The consequences? A data breach and ransomware attack that caused operational disruptions, about $100 million in lost revenue and an additional $10 million in technology consulting services, legal fees and expenses for other third-party advisors.

Going along with, and not questioning, that unsolicited email or phone call can escalate very quickly. And with the rise of AI, things are only set to get worse.

More sophisticated, automated and scalable attacks

The primary concern with AI in social engineering is the heightened sophistication of attacks. AI algorithms can tailor phishing emails with unprecedented precision, using data gathered from social media and other public sources to craft messages that are highly convincing and personalised. This level of customisation makes it increasingly difficult for individuals to discern malicious communications from legitimate ones.

Furthermore, AI-generated deepfakes – realistic synthetic audio and video – pose a significant threat. Deepfakes can be used to impersonate trusted individuals or authority figures, adding a new layer of deception and enabling highly credible, targeted social engineering attacks. Microsoft's AI speech synthesiser VALL-E reportedly needs only three seconds of your voice to clone it convincingly.

AI also enables a greater automation factor, allowing cybercriminals to launch large-scale attacks with minimal effort. Automated systems can send out thousands of phishing emails, adapting and learning from each interaction to improve success rates. This scalability and efficiency of AI-driven attacks present a daunting challenge for individuals and organisations alike.

You cannot rely on technology to save you

On the flip side, AI also empowers the defence against social engineering. Machine learning algorithms can analyse patterns to detect attack attempts more effectively than traditional methods, and AI-driven security systems can provide early warning of potential social engineering breaches.
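To make the pattern-analysis idea concrete, here is a deliberately minimal sketch of heuristic email scoring. The keyword list, domain list and weights are invented for illustration; real AI-driven filters rely on trained models over far richer signals such as headers, sender reputation and link analysis:

```python
import re

# Invented indicator lists, purely for illustration.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "act now"}
SUSPICIOUS_TLDS = {".zip", ".top", ".xyz"}

def phishing_score(subject: str, body: str) -> float:
    """Crude 0..1 risk score based on simple textual heuristics."""
    text = f"{subject} {body}".lower()
    score = 0.0
    # Urgency language is a classic social-engineering pressure tactic.
    score += 0.2 * sum(1 for word in URGENCY_WORDS if word in text)
    # Links to cheap, frequently abused top-level domains raise the score.
    for url in re.findall(r"https?://\S+", text):
        if any(url.rstrip("/.,").endswith(tld) for tld in SUSPICIOUS_TLDS):
            score += 0.3
    return min(score, 1.0)
```

A message like "URGENT: verify your account" linking to a .xyz domain scores high, while an ordinary email scores near zero. The point is not these specific rules but that pressure tactics leave patterns a filter can flag automatically.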

However, the attacking side needs only one win.

So, when a social engineering attack inevitably cuts through the filters, your organisation needs to know what to look for and how to act. It does not matter how sophisticated your firewalls and anti-malware software are if the first thing you do is hand over the keys to the castle when the attackers come knocking.

Train and test your organisation’s resilience

One critical line of defence against social engineering is a well-informed workforce. Organisations should foster a culture of security in which employees are trained and aware, and leadership actively participates in security initiatives to set a tone of vigilance and commitment to cybersecurity.

Regular training sessions should cover how to identify and respond to social engineering attacks. A well-developed awareness program that includes real-world examples keeps security at the forefront of everyone's mind and increases resilience. Awareness training combined with simulated social engineering attacks is particularly effective.

Simulations provide employees with practical experience. By creating relevant scenarios that mimic actual social engineering tactics, employees can learn to recognise and respond appropriately to threats in a controlled environment.


In addition, simulations serve as a test of the effectiveness of current training programs, giving you insight into where to improve and what to focus on in coming trainings.

To get the most out of your simulations, make sure to:

  • Tailor simulations: Customise scenarios to reflect the specific threats your organisation is likely to face. This makes the training more relevant and effective.
  • Give and gather feedback: After a simulation, provide participants with immediate feedback and ask for theirs. This reinforces learning points and reveals what the organisation finds difficult.
  • Track results: Run simulations on a recurring basis and record the outcome of each one to track improvement over time and keep awareness high.
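To make the tracking step concrete, here is a minimal sketch of recording each simulation's outcome and computing click and report rates over time. The campaign names and figures are fabricated for illustration:

```python
from dataclasses import dataclass

@dataclass
class Campaign:
    """One simulated phishing campaign and its outcomes."""
    name: str
    sent: int      # emails sent
    clicked: int   # recipients who clicked the lure
    reported: int  # recipients who reported the email

def click_rate(c: Campaign) -> float:
    return c.clicked / c.sent

def report_rate(c: Campaign) -> float:
    return c.reported / c.sent

# Fabricated example data: falling clicks and rising reports quarter by
# quarter is the trend a recurring awareness program aims for.
history = [
    Campaign("2024-Q1", sent=200, clicked=46, reported=12),
    Campaign("2024-Q2", sent=200, clicked=31, reported=29),
    Campaign("2024-Q3", sent=200, clicked=18, reported=55),
]

for c in history:
    print(f"{c.name}: {click_rate(c):.1%} clicked, {report_rate(c):.1%} reported")
```

Tracking both rates matters: a falling click rate shows people are harder to fool, while a rising report rate shows they are actively helping the defence.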

It is important to ensure that simulations are conducted with respect and sensitivity. The goal is to educate, not to trick or embarrass employees. As part of fostering a culture of security within the organisation you need to create an environment where security is a shared responsibility, and employees feel comfortable and encouraged to report suspicious activities without fear of reprisal.

The future is more uncertain than ever

Looking into the future is never easy, but with the rapid progression of AI it is harder than ever to know what the coming years will look like and what they will mean for society. What we do know is that criminals will continue to do what criminals do, and human psychology will remain prone to manipulation.

Social engineering represents a significant and growing threat in cybersecurity, and its reliance on human psychology makes it particularly challenging to combat. However, through education, training and testing, individuals and organisations can better equip themselves to detect and defend against these attacks. Your overall resilience to cyberattacks will depend heavily on your ability to adapt to and anticipate the evolving tactics of threat actors, with social engineering at the forefront of these challenges.

Read more about our Cyber & Digital Risk offering here.

Victor Rheborg

Senior Associate
