How AI is being weaponised for phishing cybercrime

December 11, 2023 |
Robert Hall

AI phishing is already a thing, and it's only going to get worse.

Cybercriminals have used the rise of AI to fine-tune their phishing operations, developing newer, faster tactics that make attacks far more likely to succeed. Looking at some of the ways cybercriminals use AI shows just how sophisticated phishing attacks have become.

In this blog, we'll take a closer look at generative AI phishing, why hackers are adopting AI for criminal activity, and what organisations and individuals can do to protect themselves.

What is phishing?

As you are probably aware, phishing is not a new concept, but it is worth reviewing what it involves. A phishing attack occurs when a hacker sends a bogus email containing harmful links or attachments, attempting to persuade victims to reveal sensitive financial information, send money, or download malware.

Spear phishing takes this a step further: a hacker sends targeted emails to a specific individual or organisation while impersonating a trusted sender, such as the CEO.

Both are forms of social engineering, designed to persuade or manipulate targets into performing an action.

How do cybercriminals use AI? 

AI is frequently making headlines for a variety of reasons, including its use in cybercrime. Since AI mimics human behaviour and language, it's understandable that cybercriminals would try to exploit it. But how are they doing this?

Fraudulent chatbots 

Hackers are using AI chatbot tools to circumvent traditional firewalls and company security defences, harvesting email addresses and acting as the hacker's right hand by discovering flaws in a network's security armour.

Language analysis 

AI can analyse language and information, impersonate humans, and pass off fraudulent emails as legitimate. It can precisely target individuals and craft tailored content that readily bypasses security measures.

Fake websites 

Hackers are going to great lengths to create authentic-looking emails, and this extends to the destinations they link to. By copying and customising a genuine website, hackers can fool users into thinking they've landed on the real thing, increasing the perceived legitimacy of their phishing scams.
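As a defensive illustration, one common way security tooling flags lookalike phishing domains is by measuring edit distance against a list of known-legitimate domains. The domain list, threshold, and function names below are illustrative assumptions for this sketch, not part of any specific product:

```python
# A minimal sketch: flag domains that are close to, but not equal to,
# a known-good domain. List and threshold are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def looks_like_spoof(domain: str, legitimate: list[str], threshold: int = 2) -> bool:
    """Near-misses of a trusted domain are suspicious; exact matches are not."""
    return any(0 < edit_distance(domain, legit) <= threshold
               for legit in legitimate)

known = ["paypal.com", "microsoft.com"]
print(looks_like_spoof("paypa1.com", known))  # prints True (lookalike)
print(looks_like_spoof("paypal.com", known))  # prints False (exact match)
```

Real products combine checks like this with homoglyph normalisation, domain-age lookups, and reputation feeds, but the edit-distance idea is the core of how a cloned "paypa1.com" gets caught.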

Outsmarting AI-detecting tools

AI detection tools need large samples of text to reliably identify machine-generated language. Hackers know this, and they keep their emails just short enough to slip under that threshold. As a result, AI detectors are prone to reliability issues.

Why do cybercriminals use AI for phishing?

Sophistication

Because AI can draw on all kinds of data and information, it lends emails, attachments, and websites a level of sophistication that adds to the legitimacy of the phishing attempt, making it more targeted and therefore more likely to succeed.

Speed

AI has made it easier to bypass defences, and it is also speeding up attacks by quickly gathering information and seamlessly generating authentic-looking content.

Scale 

AI has lowered the bar for amateur cybercriminals who would previously have produced less accurate or persuasive emails. It has also helped experienced hackers target more victims in a single attack and, thanks to the increased speed, reach more inboxes.

The impact of generative AI phishing attacks – the stats

AI phishing attempts have evolved in both scale and sophistication, and they are now so sophisticated that they are increasingly breaching security protections, leaving businesses vulnerable. Recent industry reports provide a clear picture of the scope of the problem:

  • There was a 25% rise in phishing emails bypassing Microsoft defences in 2022, demonstrating how phishing emails are overcoming traditional perimeter defences.
  • In roughly three out of four cases (71.4%), AI detectors cannot determine whether an email was written by a chatbot or a human, so these emails go undetected.
  • Over half of phishing emails (55%) employ obfuscation tactics such as HTML smuggling.

[Stats taken from the Egress Phishing Threat Trends Report 2023.]
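HTML smuggling, mentioned in the stats above, hides a malicious payload inside a seemingly harmless HTML attachment, typically as an encoded blob that JavaScript reassembles and auto-downloads in the victim's browser. As a defensive illustration, a scanner might look for the tell-tale patterns of that technique. The indicator list and score threshold below are illustrative assumptions for this sketch, not a production detection rule:

```python
import re

# Heuristic indicators often seen together in HTML smuggling:
# JavaScript decodes an embedded blob and auto-triggers a download.
# These patterns and the threshold are illustrative assumptions.
INDICATORS = [
    r"atob\s*\(",             # base64 decoding in JavaScript
    r"new\s+Blob\s*\(",       # reassembling a file in the browser
    r"URL\.createObjectURL",  # turning the blob into a downloadable URL
    r"\.click\s*\(\s*\)",     # auto-clicking the hidden download link
]

def smuggling_score(html: str) -> int:
    """Count how many smuggling indicators appear in an HTML attachment."""
    return sum(1 for pattern in INDICATORS if re.search(pattern, html))

def is_suspicious(html: str, threshold: int = 3) -> bool:
    return smuggling_score(html) >= threshold

sample = """
<script>
  var payload = atob("TVqQAAMA...");
  var blob = new Blob([payload]);
  var a = document.createElement("a");
  a.href = URL.createObjectURL(blob);
  a.download = "invoice.exe";
  a.click();
</script>
"""
print(is_suspicious(sample))  # prints True
```

Because the payload is assembled client-side, gateways that only inspect attachments for known file signatures miss it, which is why behaviour-based indicators like these matter.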

 

AI-generated phishing campaigns are tough to detect, and the figures indicate that this is only going to get worse.

How can I protect myself against AI phishing? 
Emails written with generative AI are increasingly slipping through the cracks, appearing real and trustworthy. As a result, organisations must safeguard their networks and sensitive data with robust solutions, a tight security infrastructure, and sufficient end-user training and education.

Businesses should foster a culture of approachability and trust by encouraging employees to report suspicious emails. It's also critical to keep software up to date, enforce two-factor authentication, and maintain adequate threat detection.

Businesses must rethink their anti-phishing strategy. Our product selection at Brigantia gives MSPs and their clients the best cybersecurity technologies on the market, so contact our team if you'd like to chat with one of our cybersecurity experts about your needs.
