Good People in Bad Systems: Why Your Employees Aren’t the Problem

A Familiar Mistake

An employee falls for a phishing email. Another shares sensitive data in an unsecured tool. A third skips required training. The immediate reaction? Frustration. Maybe even blame.

But what if we stopped asking, "Why did this person mess up?" and started asking, "What made this mistake possible?"

Cybersecurity failures aren’t always the result of carelessness or malice. In many cases, they’re the byproduct of broken systems, unclear expectations, cognitive overload, or misaligned priorities. In short: good people operating inside bad systems.

When Good People Make Risky Choices

Human error is inevitable—but it’s rarely random. Behavioral science tells us that most decisions are made under the influence of environment, context, pressure, and habit. People make trade-offs all day long, often subconsciously, in response to system cues and workplace dynamics.

Consider some of the most common trade-offs employees face:

  • Convenience vs. Security: Using a personal email to bypass slow approval chains.
  • Comfort vs. Change: Avoiding new tools or secure platforms simply because they're unfamiliar.
  • Productivity vs. Policy: Sharing credentials to keep a project moving.
  • Speed vs. Accuracy: Rushing through a task under deadline pressure, skipping security steps.
  • Avoidance vs. Accountability: Choosing not to report a mistake out of fear of blame.

These aren’t decisions made in a vacuum. They’re shaped by the systems, cultures, and constraints that surround people every day.

When security policies create friction, when training is irrelevant or overly generic, and when tools are clunky or confusing, people adapt. They find workarounds. They make choices that feel reasonable in the moment, even if those choices introduce risk.

But let’s acknowledge something important: many cybersecurity programs are underfunded and understaffed. Security leaders are navigating legacy systems, internal blockers, and overloaded IT pipelines. They're competing with countless other priorities for employee attention. Engagement is hard—especially when you're not only asking people to think differently, but to change embedded habits within a high-pressure environment.

Friction, distance, misunderstanding, misdirection, and cultural clashes all compound risk. Yet people are usually trying to do the right thing—or at least the easiest thing. The key isn’t to demand perfection, but to design systems that reduce ambiguity and guide behavior toward safer outcomes.

This isn’t about bad employees. It’s about environments that don’t support secure behavior.

Systems Shape Behavior

We all operate inside systems—the formal and informal structures of our workplace that tell us what’s expected, rewarded, and tolerated. These include:

  • Technology stacks
  • Communication flows
  • Leadership signals
  • Process design
  • Social norms

When those systems are misaligned with security goals, humans will behave accordingly. Not because they don’t care, but because they’re responding rationally to the environment they’re in.

But how well do you understand those systems? How many have been formally mapped, assessed, or linked to actual behavior, policy, or training outcomes? Have you identified where the points of friction exist—or where incentives are quietly nudging people away from secure actions?

Do you know which departments, roles, or personas are at higher risk based on their environment or operational pressure? Are frontline teams operating under different assumptions than leadership? This is the present—and future—of human risk management: understanding how environment shapes action, and how system misalignments become vulnerabilities.

It’s not just about awareness. It’s about insight, diagnostics, and strategic remediation. And it starts by asking where, why, and how the systems you rely on may be sending the wrong signals.

If the path of least resistance bypasses the secure route, people will take it. If reporting a mistake leads to punishment, people will stay silent. If the fastest way to meet a deadline is emailing a spreadsheet to a personal account, people will do it.

Great cultures build great security

Culture and Design Beat Blame and Training

Too often, organizations default to training or punishment in the wake of an incident. But awareness without redesign is like teaching someone to drive and then giving them a broken car.

Instead, start here:

  1. Assess the Environment

    • Map out systems, workflows, and behaviors with security relevance
    • Identify where human factors interact with technical and process layers
  2. Find the Friction and Misalignment

    • Locate challenges, blockers, and risk-inducing workarounds
    • Understand where misunderstandings or misdirection arise
  3. Create a Strategic Remediation Plan

    • Prioritize high-impact misalignments
    • Design responses that adjust systems, not just messaging or rules
  4. Operationalize and Embed into Culture

    • Make redesigned systems part of daily work
    • Reinforce secure behaviors through leadership signals, team rituals, and integrated learning
  5. Address the System, Not Just the Symptom

    • Treat incidents as signals for improvement
    • Involve cross-functional teams to co-design sustainable change

Reframing Risk: From Blame to Design Thinking

In a world shaped by accelerating AI risk, growing complexity, and constant change, we need to move beyond the idea that more training is the fix and toward system-level thinking, where risk is managed not just through awareness but through purposeful design.

This isn’t about excusing risky behavior. It’s about understanding it, anticipating it, and engineering for it.

When you start treating employees like part of the solution, not the problem, everything changes.

Listen. Learn. Act.

Final Thought: Trust the Human, Fix the System

Your people are capable, creative, adaptive, and often doing their best under suboptimal conditions. So rather than asking how to "patch the human," ask how to patch the system around them.

Because if good people are making risky decisions, the question isn’t "What’s wrong with them?" The question is:

What kind of system made that risk seem like the best option?

This is where human risk management becomes strategic. If you want your culture to work for you, not against you, it starts with insight and action. Our team can help you assess your current environment, map your systems and human risk factors, and build a prioritized roadmap for change. We design campaigns, engagement programs, and strategic content that resonate—because they’re rooted in how people actually behave, learn, and make decisions under pressure.

You don’t have to do it all alone. By partnering with specialists in behavior, communication, and cybersecurity, you can accelerate each step: from assessment to remediation to cultural embedding. Do more with less, and build a system that supports your people—rather than leaving them to fend for themselves.
