Spear phishing’s new skin: Why AI-powered social engineering is a real threat

May 28, 2025 | Cybersecurity
Chris Speight

Remember the old phishing emails? The ones from a ‘Nigerian prince’ who needed your help moving £10 million out of the country? Bad grammar, laughable formatting, and enough red flags to make a matador nervous. Those days are over. Welcome to today’s world, where phishing attacks are supercharged by AI, and where your users are the most targeted asset and your Achilles’ heel.

How AI phishing campaigns have changed

AI-driven phishing campaigns don’t look like scams anymore; they look like your boss, your supplier, or that colleague you just had a Teams chat with.

Natural language generation models can now churn out perfectly crafted messages that mimic tone, timing, and context. We’re not talking about the old mass phishing emails; this is precise, specific, and terrifyingly convincing.

That invoice reminder from ‘Accounts’? It was generated by an AI trained on your company’s public documents and internal communication styles. That voicemail from your CEO urging you to approve a wire transfer? It’s deepfaked and timestamped to match their actual travel schedule. This is not just social engineering; the threat has moved on. This is synthetic persuasion at scale.

Automated phishing campaigns

In the past, crafting a decent spear-phishing email took time, research, and a human touch. These days, attackers have automated the reconnaissance and delivery phases. AI can:

  • Scan LinkedIn for job titles and reporting lines.
  • Scrape company blogs for lingo, announcements, and culture cues.
  • Analyse email headers and language for patterns.
  • Auto-generate email text in your regional spelling and tone.

Attackers are running phishing-as-a-service platforms powered by large language models. All it takes is an input like “impersonate finance director requesting invoice confirmation” and out pops a bespoke, razor-sharp phishing attack.

Why we can no longer look for traditional red flags

Put differently, your staff may be doing everything right and still getting caught out. If the phishing email matches the way your CEO actually writes (down to emoji use and sarcasm), the traditional red flags just aren’t there.

Security training that only teaches users to spot poor grammar and generic greetings isn’t just outdated - it’s dangerously misleading. We need to start thinking in terms of contextual awareness, not just pattern recognition.

What we can do about it

  • ‘Sandbox’ unusual requests - especially financial ones. Set a policy that requires multiple independent checks, such as a call-back on a known number, for anything outside the day-to-day norm.
  • Get better-quality training - teach people to question context and verify requests, not just to spot bad grammar and generic greetings.
  • Limit public data exposure. Clean up what your company shares online. Every organisational chart, press release, and employee birthday post is recon material.

AI-powered spear phishing

The age of AI-powered spear phishing isn’t coming; it’s already here, and it’s scaling faster than most organisations can adapt. This isn’t about paranoia; it’s about pragmatism. You’re not just protecting data anymore; you’re defending trust, credibility, and the mental bandwidth of your workforce.

Cybersecurity these days means acknowledging that your users aren’t stupid - they’re just outgunned by criminals with tireless machines that lie really, really well.

And if you're still waiting for that Nigerian prince, maybe it’s time for you to go back to pen and paper. ;-)
