A Cascade of Avoidable Errors: The Microsoft Breach & Human Risk in Modern Security Practice


Key Considerations for CISOs in the wake of the CSRB’s Report on the Microsoft Breach

As we all know, the need for cybersecurity is still on the rise, and I’d argue it’s become increasingly clear that our traditional, tech-centric defenses aren’t enough. The recent report from the Cyber Safety Review Board on the Microsoft breach brought this into sharp focus. This wasn't just about technological failures; it was also about human errors and organizational culture—factors that traditional cybersecurity strategies often overlook, push to the side, or discount as ‘not as important to the mission of information security’.


I encourage every leader in the cybersecurity field to take a moment in the next few months to re-evaluate the very foundation of your human risk management, cyber culture, and cyber awareness programs. It’s time to shift the perspective from viewing the human aspect of digital protection as an add-on to recognizing it as a cornerstone of our security posture. The insights from recent reports make it clear that enhancing our understanding of human factors isn't just beneficial—it's essential. As budgets are prepared for 2025, now is the time to consider what resources and changes are necessary to elevate your approach.


The report contained several key points regarding organizational and cybersecurity culture, human risk, psychology, decision-making, and cybersecurity policy, especially highlighting the broader implications of Microsoft's security practices and transparency. But we’re also talking about a major tech company which, I guarantee, dedicates more time and money to cybersecurity than most other companies out there. It’s a pickle.

Here are some key takeaways to consider: 

  1. Corporate Security and Transparency: The Cyber Safety Review Board's report is a stark critique of Microsoft's cybersecurity practices and corporate transparency, specifically pointing out the company's failure to prevent a significant breach by Chinese cyber operators. This incident underlines the critical importance of robust security practices and honest communication in maintaining trust and safety in the digital ecosystem.
  2. Cybersecurity Culture and Corporate Responsibility: The report emphasizes that Microsoft's security culture was found wanting and calls for an overhaul. This aspect is crucial, considering Microsoft's central role in global technology infrastructure, affecting national security, economic foundations, and public health and safety. The need for rapid cultural change within Microsoft, including a public commitment to security-focused reforms, suggests a broader industry-wide conversation on the responsibility of tech giants in ensuring cyber safety.
  3. Recommendations for Improvement: The board’s recommendation for Microsoft to pause the addition of new features to its cloud computing environment until significant security improvements are made underscores a preference for a security-first approach over rapid feature development. This recommendation suggests a shift towards prioritizing the security and integrity of existing services over the expansion or introduction of new features.
  4. Impact and Scope of the Breach: The detailed account of the breach's impact, including the intrusion into senior U.S. officials' email accounts and its extensive duration and scope, highlights the tangible risks and consequences of cybersecurity failures. It also brings to light the importance of continuous vigilance, robust detection mechanisms, and the rapid response to threats.
  5. Historical Context and Persistent Threats: The mention of the hacking group Storm-0558’s long history of similar intrusions since at least 2009 provides context for the persistent and evolving nature of cyber threats. It illustrates the challenge of defending against well-resourced nation-state actors and underscores the necessity for ongoing adaptation and enhancement of cybersecurity defenses.
  6. Microsoft's Response and Future Directions: Microsoft’s acknowledgment of the need for a new culture of engineering security within its networks and its commitment to identifying and mitigating legacy infrastructure, improving processes, and enforcing security benchmarks reflect an awareness of the need for systemic change. However, the effectiveness of these measures and the company's adherence to its commitments remain critical for rebuilding trust and ensuring a safer digital environment.

This all brings to light the complex interplay between cybersecurity, corporate culture, and the broader implications of security breaches in the modern digital landscape. But to frame this in the context of human risk, human factors and human error – let’s go one layer deeper. 

Schedule A Demo

Unpacking a few key phrases from the CSRB Report on the Microsoft Breach:

  • A “cascade of Microsoft’s avoidable errors”
  • “The Board finds that this intrusion was preventable and should never have occurred”
  • Criticism of a Microsoft corporate culture “at odds with the company’s centrality in the technology ecosystem and the level of trust customers place in the company”
  • A flat statement that Microsoft must drive rapid cultural change, and that it would benefit from its CEO and Board of Directors directly focusing on the company’s security culture

This leads to a few particular questions: 

  1. What, specifically in the context of human security, are ‘avoidable errors’? 
  2. What does ‘preventable’ really mean?
  3. How did they measure this? 
  4. Would your organization be measured the same way post-breach?

So where to begin? First, understand that these questions can and should be answered, and that the answers can be quantified, qualified, and analyzed for key risk factors within your organization.

The study of human risk and error is built upon a foundation laid by influential thinkers in psychology and safety science. James Reason's Swiss Cheese Model revolutionized our understanding of systemic failures, illustrating how human errors pass through multiple layers of defenses, ultimately leading to accidents. Daniel Kahneman’s exploration of cognitive biases through his Dual-Process Theory shed light on how quick, intuitive decisions (System 1) and slower, analytical thinking (System 2) can lead to errors in high-stakes environments. Meanwhile, Charles Perrow's Normal Accident Theory and Sidney Dekker's work on human factors have challenged and expanded our notions of error and blame, suggesting that what often appears as human error is actually a consequence of complex system designs and interactions.

While these frameworks provide invaluable insights, they also reflect a tension between understanding human behavior as a variable we can control and as an intrinsic, often unpredictable part of complex systems. This historical perspective shows that while the science behind human risk and error is robust, it’s not infallible, urging us to continually challenge and refine our approaches to human risk thinking and change in the organizational context. 

In the landscape of cybersecurity, understanding the nature of errors—whether intentional or unintentional—is crucial for developing effective security measures. The distinction between these types of errors is not just academic but has significant practical implications for how we design our insider threat programs and overall security posture.

For instance, consider the distinction between intentional and unintentional actions:

  • Intentional Actions: These involve deliberate breaches of security protocols or malicious activities by insiders. Such actions are motivated by various factors, including financial gain, espionage, or personal vendettas.
  • Unintentional Actions: These occur without malicious intent and often result from human error, such as misconfigurations, poor security hygiene, or simply misunderstanding protocol. These errors can be slips or lapses (momentary losses of attention), mistakes (errors in planning or decision-making), or procedural violations (deviations from established processes due to perceived inefficiency or ineffectiveness); a classification sketch follows this list.
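
To make the taxonomy concrete, here is a minimal Python sketch of how an insider-risk team might tag incidents against these categories. The Incident fields and the decision order are illustrative assumptions for this post, not a standard schema.

```python
# A minimal sketch of tagging incidents with Reason's error taxonomy.
# The categories mirror the list above; the incident fields and the
# decision order are illustrative assumptions, not a standard schema.
from dataclasses import dataclass
from enum import Enum, auto


class ErrorType(Enum):
    INTENTIONAL = auto()           # deliberate breach or malicious act
    SLIP_OR_LAPSE = auto()         # momentary loss of attention
    MISTAKE = auto()               # flawed plan or decision
    PROCEDURAL_VIOLATION = auto()  # knowing deviation from process


@dataclass
class Incident:
    description: str
    deliberate: bool        # did the actor intend the outcome?
    followed_process: bool  # was the documented procedure followed?
    attentive: bool         # was the actor paying attention at the time?


def classify(incident: Incident) -> ErrorType:
    """Map an incident onto the intentional/unintentional taxonomy."""
    if incident.deliberate:
        return ErrorType.INTENTIONAL
    if not incident.followed_process:
        return ErrorType.PROCEDURAL_VIOLATION
    if not incident.attentive:
        return ErrorType.SLIP_OR_LAPSE
    return ErrorType.MISTAKE


if __name__ == "__main__":
    incident = Incident(
        description="Storage bucket left world-readable during migration",
        deliberate=False,
        followed_process=True,
        attentive=False,
    )
    print(classify(incident))  # ErrorType.SLIP_OR_LAPSE
```

Even a rough classification like this matters: the right response to a slip (better checklists, less fatigue) is very different from the right response to a procedural violation (fix the process people are working around).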

When the Cyber Safety Review Board mentions "avoidable errors," it raises questions about the nature of the errors and the measures that could have been in place to prevent them. In a legal context, the term often implies negligence or a failure to act with the prudence that a reasonable person would exercise under similar circumstances. This is critical for CISOs to understand as it directly relates to the accountability and liability of the organization.

Preventable Errors:

  • The notion of preventable errors suggests that with the right processes, awareness, and technologies, these errors could have been anticipated and mitigated. From a cybersecurity perspective, this means that CISOs should not only focus on hardening systems against external threats but also on enhancing the resilience of systems to internal human errors.

Insider Threat Programs:

  • Modern insider threat programs need to extend beyond monitoring and controlling to understanding and mitigating the root causes of risky behaviors. This involves:

    • Behavioral Analytics: Leveraging data analytics to detect patterns of behavior that may indicate potential security risks (see the sketch after this list).
    • Psychological Safety: Creating an environment where employees feel safe to report mistakes and security lapses without fear of retribution.
    • Training and Awareness: Tailoring programs to address specific types of unintentional errors and teaching how to avoid them through realistic scenarios and regular refreshers.
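
As an illustration of the behavioral-analytics point above, here is a minimal sketch that flags a user whose recent activity deviates sharply from their own baseline. The metric (daily off-hours logins) and the three-sigma threshold are assumptions chosen for clarity; a real program would combine many signals and tune thresholds per population.

```python
# A minimal behavioral-analytics sketch: flag users whose recent activity
# deviates sharply from their own historical baseline. The metric and the
# 3-sigma threshold are illustrative assumptions, not recommendations.
from statistics import mean, stdev


def flag_anomaly(baseline: list[int], recent: int, threshold: float = 3.0) -> bool:
    """Return True if `recent` is more than `threshold` standard
    deviations above the user's own historical baseline."""
    if len(baseline) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return recent > mu  # flat baseline: any increase is notable
    return (recent - mu) / sigma > threshold


# Example: 30 days of off-hours login counts for one user, then a spike.
history = [0, 1, 0, 0, 2, 1, 0, 0, 1, 0] * 3
print(flag_anomaly(history, recent=9))  # True: investigate, don't accuse
```

Note the comment on the last line: in a psychologically safe program, an anomaly triggers a conversation, not an accusation.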

Measuring and Understanding Deeper Factors

To effectively manage human risks, CISOs should adopt a more nuanced approach that considers psychological, organizational, and cultural factors. This involves:

  • Cultural Assessments: Evaluating the organizational culture to understand how it might predispose employees to commit errors or violate policies.
  • Regular Audits and Feedback Loops: Implementing continuous feedback mechanisms to learn from incidents and near-misses, thereby improving policies and training.
  • Risk Modeling: Using advanced models to predict where errors are most likely to occur and implementing targeted interventions to prevent them (a simple example follows below).
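
To make the risk-modeling idea concrete, here is a small hypothetical sketch: a logistic model that scores how likely a team is to commit a security-relevant error in a given period, based on a few normalized human-risk factors. The factor names, weights, and baseline are invented for illustration; in practice they would be fit to your own incident and near-miss data, not hand-picked.

```python
# A minimal sketch of a human-risk scoring model. The factors, weights,
# and baseline are illustrative assumptions; in practice the weights
# would be fit to incident and near-miss data, not hand-picked.
import math

# Hypothetical risk factors, each normalized to [0, 1].
WEIGHTS = {
    "time_pressure": 1.2,       # chronic deadline pressure invites shortcuts
    "policy_friction": 1.0,     # hard-to-follow policies get worked around
    "training_staleness": 0.8,  # time since last relevant training, scaled
    "access_breadth": 1.5,      # how much the role can touch if it errs
}
BIAS = -2.5  # baseline log-odds of an error in a given period


def error_likelihood(factors: dict[str, float]) -> float:
    """Logistic model: probability of a security-relevant error in the
    next period, given a team's normalized risk factors."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in factors.items())
    return 1.0 / (1.0 + math.exp(-z))


team = {"time_pressure": 0.9, "policy_friction": 0.7,
        "training_staleness": 0.4, "access_breadth": 0.8}
print(f"{error_likelihood(team):.0%}")  # ~69%: prioritize this team
```

The output is not a verdict on any individual; it is a prioritization signal telling you where cultural assessments, training, and process fixes will pay off first.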

By viewing "avoidable" and "preventable" errors through this comprehensive lens, CISOs can better prepare their organizations to handle the complex human factors that influence cybersecurity. This approach not only addresses the immediate risks but also contributes to building a more robust and resilient security culture.

As we reflect on the evolution of cybersecurity, it's clear that our traditional approaches have primarily focused on technological defenses—firewalls, anti-virus software, encryption. This perspective often attributes cybersecurity failures predominantly to human error, and therefore emphasizes controlling employee behavior through stringent rules and punitive measures. It's a reactive approach that treats symptoms rather than causes, assuming that tighter control and harsher penalties will minimize risks.

However, this approach is inadequate for addressing the complexities and realities of modern digital environments. We’re here to advocate for a more holistic one. By recognizing human behavior not just as a potential liability but as a pivotal element in the cybersecurity ecosystem, we acknowledge that behaviors are often symptoms of broader systemic issues—be they poorly designed systems that invite error, unrealistic policy expectations, or a workplace culture that prioritizes speed over security.

Addressing Commonplace Biases and the Non-Prioritization of the Human Element

The overwhelming focus in cybersecurity has traditionally been on the technical layers of defense. The vast majority of professionals in the field come from technical backgrounds, often concentrating on the seven layers of the technology stack to defend against cyber adversaries. This technical bias overlooks the crucial eighth layer: the human layer. Here, human decisions and actions play a significant role in both creating vulnerabilities and mitigating them.

Why a New Approach Is Critical

Understanding the 'why' behind actions is essential for developing more effective and sustainable security strategies. By investigating the root causes of behaviors, organizations can design systems and processes that naturally encourage secure practices. This might include more intuitive system interfaces that make secure choices easy and natural for users, or policy frameworks that are adaptable to the real-world pressures employees face.
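
As a concrete illustration of what "making secure choices easy and natural" can mean in code, here is a hypothetical sketch in which the default path is the safe path and the risky choice requires an explicit, auditable step. The ShareService API is invented for this example and is not a real library.

```python
# A minimal sketch of a secure-by-default interface: the easy path is the
# safe path, and risky choices require an explicit, auditable step. This
# ShareService API is hypothetical, purely for illustration.
from dataclasses import dataclass


@dataclass
class ShareLink:
    path: str
    public: bool = False      # private unless explicitly widened
    expires_in_days: int = 7  # links expire by default


class ShareService:
    def __init__(self) -> None:
        self.audit_log: list[str] = []

    def share(self, path: str) -> ShareLink:
        """The one-line happy path produces a private, expiring link."""
        return ShareLink(path)

    def share_publicly(self, path: str, justification: str) -> ShareLink:
        """Public sharing is possible, but the method name and required
        justification make the risk visible and auditable."""
        self.audit_log.append(f"PUBLIC share of {path}: {justification}")
        return ShareLink(path, public=True)


svc = ShareService()
link = svc.share("/reports/q3.pdf")       # secure default, zero friction
print(link.public, link.expires_in_days)  # False 7
```

The design choice here is the point: employees under deadline pressure take the shortest path, so the shortest path should be the secure one.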


Fostering a Positive Security Culture

Cultivating a positive security culture is central to this new approach. Instead of relying solely on compliance and fear of retribution, a positive culture encourages engagement with security practices through understanding, commitment, and shared responsibility. 

The Path Forward

As we move forward, it’s crucial for CISOs and cybersecurity professionals to integrate these new perspectives into their strategic planning. This means prioritizing investments not only in technological defenses but also in understanding and shaping human behavior within digital systems. By aligning our cybersecurity strategies with the complexities of human behavior, we can build more resilient organizations that are better equipped to face the challenges of the digital age.

Transitioning from old to new thinking in cybersecurity isn't just a shift in tactics; it's a fundamental change in philosophy. By embracing this change, we can transform our approach from one that merely reacts to breaches to one that proactively strengthens the human foundations of our digital defenses.

