AI-Generated Phishing Attacks Have Increased 126%—Are You Prepared?


Intro: A 126% Increase Isn’t Just a Spike—It’s a Signal

Phishing has always been one of the top cyber threats. But now, it’s evolving faster than most organizations can keep up with. According to The 2024 ESRA State of Phishing report, AI-generated phishing attacks have surged by 126% in just the past year.

That’s not a blip—it’s a structural shift. This stat represents millions of new phishing messages being generated, personalized, and deployed with lightning speed. And traditional defenses—like spam filters, blacklists, or even standard awareness training—aren’t enough to keep up.

Let’s break it down: a 126% increase means that if your employees saw 100 phishing attempts last year, they’re likely seeing around 226 now (100 × 2.26), more than double. And many of those messages will look and feel more real than ever before.

 

What AI-Generated Phishing Really Looks Like

Today’s AI-driven phishing campaigns don’t rely on typos and clumsy formatting. Instead, they:

  • Use large language models (LLMs) to create fluent, personalized messaging

  • Mimic tone and language based on scraped data from social media and company websites

  • Replicate branding and internal language to avoid raising suspicion

  • Launch thousands of reworded variants at once, bypassing pattern-matching filters (illustrated in the sketch below)

This isn’t phishing as usual. It’s smarter, faster, more adaptive—and highly scalable.
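
To make that last bullet concrete, here is a minimal Python sketch, using invented phrases and messages, of why pattern matching struggles: a lure that reuses a known phrase is caught, while a reworded variant with the same intent slips straight through. Real secure email gateways are far more sophisticated than this toy example, but the underlying weakness of matching known patterns is the same.

```python
# Toy illustration only: a naive keyword filter and two lures with the same
# intent but different wording. The phrases and messages are invented.

KNOWN_PHISHING_PHRASES = [
    "verify your account immediately",
    "your password has expired",
    "click here to avoid suspension",
]

def naive_filter(message: str) -> bool:
    """Return True if the message matches a known phishing phrase."""
    text = message.lower()
    return any(phrase in text for phrase in KNOWN_PHISHING_PHRASES)

# Variant 1: reuses a known phrase, so the filter flags it.
original_lure = "Your password has expired. Verify your account immediately."

# Variant 2: same intent, reworded the way an LLM trivially can.
paraphrased_lure = (
    "We noticed your credentials are out of date. Please confirm your "
    "details today to keep access to the finance portal."
)

print(naive_filter(original_lure))     # True  -> caught
print(naive_filter(paraphrased_lure))  # False -> slips through
```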

 

More Simulations? Not Exactly. More Strategy? Yes.

The default response might be: run more phishing simulations. But frequency isn’t the answer. Relevance is.

AI-enhanced phishing requires a different approach to human risk management:

  • Contextual Simulations: Use real-world AI-generated examples based on emerging tactics. For example, simulate spear phishing attempts that include actual project names or internal terminology, mirroring how AI tools are scraping public data to craft believable lures.

  • Behavioral Nudges: Encourage positive micro-behaviors like hovering over links, reading URLs aloud, or forwarding suspicious emails to IT with one click. Use in-context nudges, such as brief reminders in email footers or lightweight banners in internal messaging platforms, to prompt caution when users interact with potentially suspicious content (see the sketch after this list). These gentle cues reinforce secure behaviors without disrupting workflows or creating anxiety.

  • Critical Thinking Training: Move beyond click/no-click metrics by teaching employees how to recognize emotional manipulation. Run interactive workshops or use real phishing examples to decode urgency language, praise bait, or fear-based prompts. Include peer-based reflection for real insight.

  • Cultural Signals: Reinforce the norm that security mindfulness is a strength, not a slowdown. Have senior leaders model asking questions when unsure. Recognize teams that report the most suspicious messages, and make it clear that flagging a false positive is better than missing a real threat.
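
As a rough illustration of the in-context nudge idea above, the Python sketch below prepends a short caution banner to mail from external senders. The domain, reporting mailbox, and wording are assumptions made up for this example; many mail platforms offer equivalent external-sender tagging natively, and a real deployment would apply this at the gateway.

```python
# Minimal sketch of an in-context nudge: prepend a caution banner to messages
# that arrive from outside the organization. The domain, mailbox, and wording
# below are assumptions for illustration only.

INTERNAL_DOMAIN = "example.com"  # assumed internal domain

NUDGE_BANNER = (
    "[External sender] Pause before you click: check the sender's address "
    "and hover over links. Unsure? Forward to phishing@example.com.\n\n"
)

def add_nudge(sender: str, body: str) -> str:
    """Prepend a lightweight caution banner to mail from external senders."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain != INTERNAL_DOMAIN:
        return NUDGE_BANNER + body
    return body

print(add_nudge("updates@vendor-news.io", "Your invoice is attached..."))
```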

This is about shaping mindset—not just monitoring click rates.

Learning isn’t a task—it’s a mindset 
 

How to Prepare Your Workforce for What’s Next

If 126% growth tells us anything, it’s that AI-powered phishing isn’t slowing down. So what can you do right now?

  • Build Reporting Confidence: Make reporting easy, fast, and judgment-free. Reward engagement, not perfection.

  • Train the Eye, Empower the Brain: Teach how to spot suspicious patterns and how to think about digital communication.

  • Segment Your Risk: Focus training efforts on high-risk roles (finance, HR, executive support) where targeted phishing is more likely.

  • Share Emerging Threats: Use newsletters, quick briefings, or micro-content to highlight real examples of AI-generated scams.

  • Measure Engagement, Not Just Errors: Who’s reporting? Who’s asking questions? Who’s modeling smart digital behaviors?
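
That last point lends itself to a simple illustration. The Python sketch below uses invented sample data to score a phishing simulation on report rate alongside click rate, so positive engagement shows up in the numbers instead of clicks alone.

```python
# Illustrative only: score a phishing simulation on engagement (report rate)
# as well as errors (click rate). The teams and records are invented.

from collections import defaultdict

# Each record: (team, clicked_link, reported_message)
results = [
    ("finance", True,  False),
    ("finance", False, True),
    ("finance", False, True),
    ("hr",      False, False),
    ("hr",      True,  True),   # clicked, but also reported: a valuable signal
    ("hr",      False, True),
]

by_team = defaultdict(lambda: {"total": 0, "clicked": 0, "reported": 0})
for team, clicked, reported in results:
    by_team[team]["total"] += 1
    by_team[team]["clicked"] += clicked
    by_team[team]["reported"] += reported

for team, counts in by_team.items():
    total = counts["total"]
    print(
        f"{team}: click rate {counts['clicked'] / total:.0%}, "
        f"report rate {counts['reported'] / total:.0%}"
    )
```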

 

Final Thought: Prepare for the Volume—Focus on the Mindset

AI-generated phishing is a force multiplier for cybercriminals. But your greatest multiplier can be your people—if they’re given the right tools, understanding, and support.

Don’t just fight AI with AI. Fight AI with insight.

We help companies prepare their people to detect and defend against the next wave of phishing attacks—through culture, microtraining, simulations, and strategy.

Because when the volume goes up, your clarity has to go up too.
