Intro: The New Insider Risk Isn’t Coming—It’s Already Here
Insider threats aren’t new. But AI and automation are changing the game.
Traditionally, insider threats were defined as malicious or negligent actions by individuals within an organization: rogue employees stealing data, or someone accidentally emailing sensitive files to the wrong recipient. Controls were built around user access, role-based permissions, and monitoring known bad behavior.
But in the AI era, insider risk has evolved.
Today, the scope and scale of insider threats have grown significantly. According to the Ponemon Institute's 2022 Cost of Insider Threats report, insider-related incidents increased by 44% over the past two years, with the average annual cost of these threats now exceeding $15 million per organization. The rise of generative AI has introduced more opportunity for both intentional abuse and accidental exposure.
One high-profile case from the past year involved Samsung engineers who accidentally leaked confidential source code by pasting it into ChatGPT while debugging. In another case, a UK-based law firm discovered an intern had used a publicly accessible LLM to draft and store sensitive client documents, unknowingly exposing protected information. These incidents show how even well-intentioned employees, under pressure or without sufficient guidance, can create massive risks.
This new reality demands a more dynamic approach to insider risk, one that combines policy, technology, behavior, and culture to anticipate and address the threat at its source.
How Insider Threats Are Changing
AI doesn’t just increase productivity—it increases the potential for unintentional or undetected harm. And it’s shifting the insider threat landscape in key ways:
Unintentional Leaks at Scale: Employees are pasting sensitive data into generative AI tools without realizing the risks. This isn't sabotage; it's workflow optimization gone wrong (see the sketch after this list).
Malicious Actors with More Tools: Disgruntled insiders can now use automation and AI to exfiltrate data, impersonate colleagues, or bypass internal controls with minimal effort.
AI Agents Gone Rogue: AutoGPT-style tools operating with loose parameters or too much access can unintentionally trigger compliance or security incidents without malicious intent.
Harder-to-Detect Behaviors: AI can mimic normal behavior patterns, making traditional detection methods less reliable and increasing the burden on behavioral analysis.
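To make the unintentional-leaks point concrete, here is a minimal, hypothetical sketch of the kind of guardrail some organizations place in front of external generative AI tools: a simple pattern check on outgoing prompt text. The patterns and the check_prompt helper are illustrative assumptions, not a production DLP rule set.

```python
# Minimal sketch: flag obvious sensitive patterns in text before it is sent
# to an external generative AI tool. Patterns are illustrative assumptions,
# not a complete or recommended rule set.
import re

SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"(?i)\b(api[_-]?key|secret)\s*[:=]\s*\S+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal use only)\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in prompt text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    draft = "Here is our internal use only config: api_key = sk-12345"
    hits = check_prompt(draft)
    if hits:
        # Warn the user and point them to approved tools, rather than punish.
        print(f"Hold on: this prompt appears to contain {', '.join(hits)}")
    else:
        print("OK to send")
```

A check like this catches only the obvious cases, and it says nothing about why the data ended up in a prompt in the first place.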
This shift calls for a deeper kind of vigilance—one grounded in human context, not just technical red flags.
Where Human Risk Management Needs to Focus
Human Risk Management (HRM) programs must adapt to this new threat model by expanding beyond rules and roles. The focus should be on understanding intention, pressure, environment, and emotional signals.
Key areas to prioritize:
Culture of Trust and Transparency: Build a high-reporting, low-blame environment. Make it psychologically safe to flag suspicious behavior—even your own.
Early Detection through Behavior Mapping: Monitor friction points, stress indicators, and cultural signals that suggest someone might be struggling—or slipping.
Targeted Awareness for High-Risk Roles: Equip departments like finance, legal, and IT with education on how AI misuse or policy violations could manifest internally.
Personal AI Use Guidelines: Create safe spaces for experimentation and dialogue about how people are using AI tools—and what the risks might be.
User and Entity Behavior Analytics (UEBA) and other SOC tools are useful—but they only tell you what happened. They don’t explain why it happened.
To build a true safety culture, you need early signals based on context—not just after-the-fact alerts. That means connecting:
Training data
Policy understanding
Access patterns
Employee sentiment
Digital habits
When woven together, this creates a behavioral risk map that helps HRM leaders identify trends before they become incidents. It also helps CISOs and security leaders move from reaction to prevention.
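As a rough illustration of what weaving these signals together can look like, here is a minimal sketch that blends a few normalized signals into a single per-person risk indicator. The signal names, weights, and threshold are hypothetical assumptions for illustration; a real program would calibrate them against its own data and handle sentiment and training records with appropriate privacy controls.

```python
# Minimal sketch: combine several normalized signals (0.0 = low concern,
# 1.0 = high concern) into a simple behavioral risk indicator per person.
# Signal names, weights, and the review threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RiskSignals:
    training_gaps: float          # e.g. overdue or failed security training
    policy_understanding: float   # e.g. weak results on policy comprehension checks
    unusual_access: float         # e.g. access patterns outside the person's normal role
    negative_sentiment: float     # e.g. survey or check-in signals suggesting strain
    risky_digital_habits: float   # e.g. frequent use of unsanctioned tools

# Hypothetical weights reflecting how much each signal contributes.
WEIGHTS = {
    "training_gaps": 0.15,
    "policy_understanding": 0.20,
    "unusual_access": 0.30,
    "negative_sentiment": 0.15,
    "risky_digital_habits": 0.20,
}

def risk_score(signals: RiskSignals) -> float:
    """Weighted blend of signals; higher means earlier, more human follow-up."""
    return sum(WEIGHTS[name] * getattr(signals, name) for name in WEIGHTS)

if __name__ == "__main__":
    person = RiskSignals(0.8, 0.6, 0.3, 0.7, 0.5)
    score = risk_score(person)
    # A threshold triggers a supportive check-in, not an accusation.
    print(f"risk score: {score:.2f}", "-> schedule a check-in" if score > 0.5 else "-> no action")
```

The point of the sketch is the shape of the data rather than the math: each signal comes from a different part of the organization (training, HR, identity and access management, communications), and the value lies in connecting them early enough for a conversation rather than an investigation.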
Final Thought: True Safety Culture Starts With the Why
AI-driven insider threats are rising not because people are bad—but because the systems, expectations, and pressures they operate in are changing faster than most organizations can adapt.
If you want to build a resilient, responsible digital workplace, don’t just watch for what people do. Get curious about why.
That’s where prevention lives. That’s how risk becomes resilience.
And that’s what we help organizations design, measure, and scale every day.