The AI Risk Reckoning: Lessons from the Walmart AI Case

The Case That Shook Legal Circles: AI-Generated Lies in Court

In a striking example of AI risk in the workforce, three lawyers were recently fined a total of $5,000 for submitting fictitious, AI-generated legal precedents in a lawsuit against Walmart without adequately verifying the filing's accuracy. A federal judge, fact-checking the filing, found that the cited cases did not exist in any legal database. The discrepancies were flagged when opposing counsel and the court's research team failed to locate the referenced precedents, prompting an investigation that confirmed the citations were AI-generated fabrications. The ruling reaffirmed a fundamental principle: attorneys have an ethical duty to verify the authenticity of their references.

Beyond financial penalties, the lawyers faced professional embarrassment and reputation damage, sparking discussions within legal circles about the responsible use of AI in legal research. This case underscores the growing concern over uncritical reliance on generative AI tools like ChatGPT in high-stakes professions.

This incident is not an isolated one. It is part of a broader trend: organizations across industries rushed to implement AI in 2023, and many employees turned to shadow AI, without fully understanding the risks. While the efficiency gains were undeniable, the “move fast and break things” mentality has resulted in real-world consequences. Now, in 2025, is the bill starting to come due?

Companies are just beginning to grapple with the fallout from poorly implemented early-stage AI strategies, and in this case, legal teams—traditionally risk-averse—are learning the hard way what happens when AI is used irresponsibly.


Where is the Line? Legal AI Use Cases Gone Wrong

This case is just one of several where legal professionals have faced repercussions for over-reliance on AI.

Consider Exhibit A: Pras Michél of the Fugees, who argues that his lawyer used an unapproved AI tool to build closing arguments in his criminal trial. Maybe instead of “killing me softly with his song” we should be talking about “AI hallucinating all day long.”

Other law firms have faced scrutiny for submitting AI-generated briefs riddled with hallucinations: false or misleading information confidently stated as fact by AI systems.

Beyond this, other cases have surfaced that further highlight the dangers of AI misuse in the legal profession. In Missouri, a self-represented litigant was fined $10,000 after submitting nearly two dozen AI-generated citations that turned out to be completely fictitious. Similarly, in a high-profile deepfake-related case, an expert’s affidavit supporting Minnesota’s AI law was dismissed after it was discovered that some of its references were hallucinations produced by ChatGPT.

The question now is: where do we draw the line? AI-assisted research and drafting are already a reality, but legal professionals must implement strong validation and risk management processes. Unlike other industries, where AI errors may result in minor corrections, legal missteps can lead to case dismissals, financial penalties, and even malpractice claims.


So why are lawyers—some of the most risk-averse professionals—falling into this trap? The reasons are likely twofold:

  1. Lack of AI risk awareness – Many professionals assume AI tools are infallible or are unaware of their limitations.
  2. Increasing workloads – AI presents a tempting shortcut for overburdened legal teams, leading some to forgo essential verification steps.

Companies must recognize that generative AI is not a “set it and forget it” tool. It requires oversight, policy alignment, and targeted training to ensure it is used ethically and effectively.

Fighting AI threats starts with empowering your workforce

How to Get Ahead of AI Workforce Risk: A Human Risk Management Approach

Understandably, most organizations are behind the curve on AI competency training, AI safety protocols, and governance policies; it is a lot of change in a very short amount of time. The good news is that a mature human risk management program addresses these challenges by integrating AI risk awareness into workforce training, policies, and daily operations.

Here’s how companies can proactively mitigate AI-related legal risks:

1. Train Your Legal and Compliance Teams on AI Risks

  • Provide AI competency training tailored for legal professionals.
  • Ensure teams understand AI’s strengths, weaknesses, and risks of hallucination.
  • Implement case studies of AI failures to drive home the real-world consequences.

2. Establish AI Usage Policies & Governance

  • Define clear guidelines on where AI can and cannot be used in legal workflows.
  • Require human verification of AI-generated content before submission.
  • Align AI policies with ethical and regulatory standards.
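The human-verification requirement above can even be enforced mechanically. Below is a minimal, hypothetical sketch of a pre-filing gate in Python: any citation in a draft that is not found in a human-verified index blocks the filing. The `verified_index` data and function names are illustrative assumptions, not a real legal research API.

```python
# Hypothetical pre-filing gate: every AI-generated citation must be
# confirmed against a trusted, human-verified index before submission.
# Data and names here are illustrative only, not a real legal API.

def verify_citations(draft_citations, verified_index):
    """Split draft citations into confirmed and unverified lists."""
    confirmed, unverified = [], []
    for cite in draft_citations:
        (confirmed if cite in verified_index else unverified).append(cite)
    return confirmed, unverified

def may_file(draft_citations, verified_index):
    """Allow filing only if every citation has been verified."""
    _, unverified = verify_citations(draft_citations, verified_index)
    return len(unverified) == 0

# Example: one fabricated (hallucinated) citation blocks the filing.
verified_index = {"Smith v. Jones, 123 F.3d 456 (2020)"}
draft = [
    "Smith v. Jones, 123 F.3d 456 (2020)",
    "Acme v. Walmart, 999 F.4th 1 (2024)",  # hallucinated case
]
print(may_file(draft, verified_index))  # False
```

The point is not the code itself but the workflow it encodes: AI-generated output never reaches a court filing without passing through a verification step that a human owns.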

3. Implement Multi-Channel AI Risk Training for All Employees

  • Go beyond one-time AI awareness training—make AI safety an ongoing initiative.
  • Use microlearning, simulations, and interactive content to reinforce best practices.
  • Target training based on job function—what a lawyer needs to know about AI differs from what an engineer or a marketer needs to know.

4. Update Acceptable Use Policies (AUPs) & Ensure Employee Understanding

  • AUPs should explicitly address AI-generated content and verification requirements.
  • Conduct company-wide awareness campaigns to ensure policies are understood, not just tick-box acknowledgments.
  • Implement tracking and feedback loops to measure policy adoption and gaps.

5. Partner with Experts in AI Risk and Human Risk Management

  • A strong content partner (like us) ensures training and awareness programs stay current with evolving AI risks.
  • Provide organizations with targeted, engaging, and role-specific content to mitigate AI risk at scale.


The Bottom Line: AI is Here to Stay—Manage the Risk Before It Manages You

AI’s integration into legal work and other high-stakes professions is inevitable. The question is whether organizations will take a proactive or reactive approach. Companies that establish AI competency, governance, and risk mitigation strategies now will be the ones that thrive in the era of AI-driven work.

Human risk management is about getting ahead of emerging threats before they become costly mistakes. Has your organization taken the necessary steps to train its legal staff and build AI risk resilience? If not, now is the time to act.

Want to ensure your company is AI-ready? Let’s talk about how we can help you scale AI training and governance for your workforce today.
