What is Human OS and Why Humans Are the New Endpoints
TL;DR — If devices are patched, your people need a plan too. Humans are now effective endpoints: they hold tokens, make access decisions, route...
Team CM
Dec 18, 2025 5:22:36 PM
I’ve travelled through Heathrow more times than I care to remember. The queues. The repeated checks. The belt removals. The very thorough screening.
So when news broke that a man boarded a British Airways flight from Heathrow without a ticket, boarding pass, or passport — simply by walking through behind other passengers — I had the same reaction as many others:
How on earth did no one notice?
Because this wasn’t a failure of scanners, systems, or technology.
It was a failure of human behavior.
Tailgating (sometimes called piggybacking) is a well‑known security risk where an unauthorized person gains access to a restricted area by following someone who is authorized.
In physical security, that might look like:
Walking through a secure door behind someone else
Following a crowd through a checkpoint without being checked
Blending in and relying on assumptions rather than verification
In cybersecurity, tailgating is classified as a social engineering technique — because it exploits human trust, habits, and behavior, not technical weaknesses.
The key point:
Tailgating doesn’t break systems — it bypasses them by exploiting people.
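That's also why the countermeasures that do exist are less about stronger locks and more about reconciliation: comparing who a sensor saw pass through with who actually badged in. Here's a minimal sketch of that idea, assuming entirely hypothetical badge and occupancy-sensor logs (the field layouts, door IDs, and time window are all illustrative, not any vendor's schema):

```python
# Minimal sketch: flag possible tailgating by reconciling badge swipes
# against an occupancy sensor's person count at the same door.
# All events, field layouts, and thresholds here are hypothetical.

WINDOW = 10  # seconds within which a swipe "covers" a sensor reading

# (timestamp_sec, door_id) per badge swipe
badge_swipes = [(100, "D1"), (102, "D1"), (300, "D2")]
# (timestamp_sec, door_id, people_detected) per sensor reading
sensor_events = [(101, "D1", 3), (301, "D2", 1)]

def swipes_near(door, ts):
    """Count badge swipes at `door` within WINDOW seconds of `ts`."""
    return sum(1 for t, d in badge_swipes if d == door and abs(t - ts) <= WINDOW)

for ts, door, people in sensor_events:
    badged = swipes_near(door, ts)
    if people > badged:
        print(f"ALERT door={door} t={ts}: {people} entered, {badged} badged "
              f"-> {people - badged} possible tailgater(s)")
```

Real deployments (mantraps, video analytics) are far more sophisticated, but the principle is the same: verification has to happen per person, not per group.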
In this most recent Heathrow incident, what we know so far reads less like a technical breach and more like a slow‑burn mystery.
Reports indicate that an individual entered Heathrow, passed through multiple controlled points, and ultimately boarded a British Airways flight — without a valid ticket, boarding pass, or passport — by closely following legitimate passengers through checkpoints.
What’s striking is not just that it happened, but how little we still know.
Public reporting states the man walked onto the 7:20am British Airways flight to Oslo, Norway, on Saturday by tailgating other passengers through security and evading checks at the gate. But none of that reporting pinpoints the precise moment when verification broke down. Instead, what emerges is a picture of something more unsettling: a person moving calmly, confidently, and unnoticed through environments designed to stop exactly this scenario.
No fake badge. No forged documents. No hacking.
Just proximity, timing, and the quiet power of assumption — the belief that someone else must have already checked him.
This wasn’t a single missed scan or a broken machine. It was a chain of small human assumptions, unfolding step by step across layers of security, each one depending on the last to have done its job.
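It's worth doing the arithmetic on that chain. If security layers fail independently, their miss rates multiply away to almost nothing; if each layer quietly assumes "someone already checked", they don't. A back-of-envelope sketch, with purely illustrative numbers:

```python
# Back-of-envelope sketch of how layered checks fail when each layer
# assumes the previous one "already checked". Numbers are illustrative.

def p_slip_through(miss_rates):
    """Probability an intruder passes every checkpoint in sequence."""
    p = 1.0
    for m in miss_rates:
        p *= m
    return p

# Independent, attentive layers: each misses only 5% of the time.
attentive = [0.05] * 4
# Habituated layers: each later check trusts the earlier ones more.
habituated = [0.05, 0.30, 0.60, 0.90]

print(f"attentive chain:  {p_slip_through(attentive):.2e}")   # 6.25e-06, ~1 in 160,000
print(f"habituated chain: {p_slip_through(habituated):.2e}")  # 8.10e-03, ~1 in 123
```

Four attentive layers that each miss 5% of attempts let roughly 1 in 160,000 through; the same four layers, each trusting the ones before it a little more, let nearly 1 in 120 through. Defense in depth only works if the layers stay independent.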
As uncomfortable as this story is, it’s not unprecedented.
In late 2023, another man boarded a transatlantic flight from Heathrow to New York by tailgating through checkpoints — again without proper documentation — and was only stopped after arrival.
Different year. Different destination. Same behavior.
That’s important, because when the same failure mode appears more than once, it stops being a fluke and starts being a systemic human risk issue.
This is where human risk management really comes into focus — because when something goes wrong, the question isn’t just what failed, but how many human factors lined up before, during, and after the event.
Several well‑documented behavioral factors may have played a role:
Habituation. When people perform the same task hundreds or thousands of times, attention drops. Checks become gestures. Humans go on autopilot.
Assumed legitimacy. We are socially conditioned to assume others belong, especially in structured environments like airports or offices.
Reluctance to challenge. People don't like to challenge others. "It's probably fine" feels easier than "Can I see your pass?"
Fatigue and overload. Holiday travel, long shifts, and crowded environments reduce situational awareness, exactly when social engineering works best.
These are human factors, not technical ones.
And when serious incidents are investigated properly, they don’t stop at the moment of failure. They look at errors, slips, lapses, pressure, context, hand‑offs, recovery, and what happened before and after the event.
Crucially, they also ask a harder question:
Did people feel safe enough to speak up when something didn’t feel right?
Because in cultures where mistakes are punished, reporting is risky, or blame comes first, near‑misses stay hidden. Weak signals go unshared. And learning never happens.
This is why under‑reporting is one of the biggest hidden risks in cybersecurity — not because people don’t see problems, but because they don’t feel safe raising them.
Strong security cultures are not perfect. They are open, curious, and learning‑oriented. They treat incidents as opportunities to understand human behavior — not just to assign fault.
In cybersecurity, physical security is often treated as someone else’s problem — facilities, operations, compliance.
But attackers don’t care about org charts.
Physical access has always been a gateway to far more than just a physical space. Once someone is inside, it can quickly lead to network access, credential theft, device compromise, data exfiltration, and even insider‑style attacks — often without needing to bypass a single technical control.
If someone can walk into a secure space, they can:
Plug in hardware
Shoulder‑surf credentials
Access unattended devices
Observe processes and behaviors
⚠️ From a human risk perspective, physical and cyber security are inseparable — because humans sit at the center of both.
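Of the paths listed above, "plug in hardware" is one of the few with a cheap technical backstop. As a minimal sketch (Linux-only, and the approved device IDs below are made up), you could periodically diff connected USB devices against an allowlist:

```python
# Minimal sketch (Linux): flag USB devices that aren't on an approved list,
# a small technical backstop against "plug in hardware" once someone is
# physically inside. The allowlisted IDs here are made up.

import re
import subprocess

ALLOWLIST = {"8087:0024", "046d:c077"}  # hypothetical vendor:product IDs

def connected_usb_ids():
    """Parse `lsusb` output into a set of vendor:product ID strings."""
    out = subprocess.run(["lsusb"], capture_output=True, text=True).stdout
    return set(re.findall(r"ID ([0-9a-f]{4}:[0-9a-f]{4})", out))

for dev in sorted(connected_usb_ids() - ALLOWLIST):
    print(f"Unapproved USB device present: {dev}")
```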
This Heathrow incident is a textbook example of social engineering — the same category of attack behind:
Phishing emails
Impersonation scams
MFA fatigue attacks
Helpdesk manipulation
Business email compromise
In every case, the attacker isn’t defeating technology — they’re exploiting behavior, assumptions, and culture.
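Take MFA fatigue as one concrete case: the technology works exactly as designed, and often the only machine-visible symptom is an unusual burst of push prompts. A minimal detection heuristic might look like this (the event shape, names, and thresholds are assumptions, not any vendor's API):

```python
# Minimal sketch of an MFA-fatigue heuristic: too many push prompts for one
# user in a short window suggests an attacker spamming approval requests.
# Event shape, names, and thresholds are assumptions, not a vendor's API.

from collections import defaultdict, deque

WINDOW = 300   # seconds
THRESHOLD = 5  # prompts per user per window before alerting

recent = defaultdict(deque)  # user -> timestamps of recent push prompts

def on_push_prompt(user, ts):
    q = recent[user]
    q.append(ts)
    while q and ts - q[0] > WINDOW:  # drop prompts outside the window
        q.popleft()
    if len(q) >= THRESHOLD:
        print(f"ALERT: {len(q)} MFA prompts for {user} in {WINDOW}s "
              f"-- possible MFA fatigue attack")

# Hypothetical burst of prompts:
for t in [0, 30, 60, 90, 120, 150]:
    on_push_prompt("alice", t)
```

Even then, the heuristic only buys time; the durable fix is a user who knows an unexpected prompt is a signal to report, not a nuisance to dismiss.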
And that’s why awareness alone isn’t enough.
Traditional cyber awareness programs often focus on:
“Don’t click links”
“Check your password”
“Report suspicious emails”
But real resilience comes from understanding:
Why people behave the way they do
Where attention breaks down
How culture, pressure, and habit shape risk
That’s the difference between training and human risk management.
At Cybermaniacs, we focus on:
Assessing and understanding security culture — not just controls
Creating environments where people can report mistakes, near‑misses, and concerns without fear
Applying human‑factors thinking to cybersecurity, physical security, and digital risk
Helping organizations move from awareness to measurable human risk insight
If you haven’t assessed your security or cyber culture — or don’t know how to — give us a call. After all, you can’t fix what people are afraid to talk about. 🫣
Everything we do centers on:
Human behavior and decision‑making
Social engineering across physical and digital environments
Culture, mindset, and real‑world risk scenarios
Moving beyond tick‑box awareness to measurable behavior change
Because whether it’s an airport checkpoint or a corporate network, the pattern is the same:
The strongest security layers fail when humans are treated as passive controls instead of active risk variables.
If someone can board a plane without a ticket by simply walking confidently behind others, it’s worth asking:
Where else are we assuming “someone already checked”?
Where has repetition replaced attention?
And where are we relying on tools to solve what is fundamentally a human problem?
Security isn’t just built in systems.
It’s lived — or lost — in behavior.