In a striking recent example of AI risk in the workforce, three lawyers were fined a total of $5,000 after citing fictitious, AI-generated legal precedents in a lawsuit against Walmart without adequately verifying their filing's accuracy. The fabrications came to light when opposing counsel and the court's research team were unable to locate the referenced cases; a federal judge then confirmed that the citations did not exist in any legal database and had been generated by AI. The episode reaffirmed a fundamental principle: attorneys have an ethical duty to verify the authenticity of their references.
Beyond the financial penalties, the lawyers faced professional embarrassment and reputational damage, sparking discussion within legal circles about the responsible use of AI in legal research. The case underscores growing concern over uncritical reliance on generative AI tools like ChatGPT in high-stakes professions.
This incident is not an isolated one. It is part of a broader trend: organizations across industries rushed to implement AI in 2023, and many employees turned to shadow AI, without fully understanding the risks. While the efficiency gains were undeniable, the “move fast and break things” mentality has resulted in real-world consequences. Now, in 2025, is the bill starting to come due?
Companies are just beginning to grapple with the fallout from poorly implemented, early-stage AI strategies, and legal teams—traditionally risk-averse—are learning the hard way what happens when AI is used irresponsibly.
This case is just one of several where legal professionals have faced repercussions for over-reliance on AI.
See Exhibit A: Pras from the Fugees, who argues that his lawyer used unapproved AI to build closing arguments. Maybe instead of 'Killing Me Softly with His Song' we should be talking about 'AI hallucinating all day long'.
Other law firms have faced scrutiny for submitting AI-generated briefs riddled with hallucinations—false or misleading information confidently stated as fact by AI systems.
Beyond this, other cases have surfaced that further highlight the dangers of AI misuse in the legal profession. In Missouri, a self-represented litigant was fined $10,000 after submitting nearly two dozen AI-generated citations that turned out to be completely fictitious. Similarly, in a high-profile deepfake-related case, an expert’s affidavit supporting Minnesota’s AI law was dismissed after it was discovered that some of its references were hallucinations produced by ChatGPT.
The question now is: where do we draw the line? AI-assisted research and drafting are already a reality, but legal professionals must implement strong validation and risk management processes. Unlike other industries, where AI errors may result in minor corrections, legal missteps can lead to case dismissals, financial penalties, and even malpractice claims.
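To make “validation” concrete, here is a minimal sketch of a pre-filing checkpoint in Python. The lookup_citation() helper is a hypothetical placeholder for whichever legal research database your firm licenses; the point is the workflow (every citation must resolve before a brief goes out the door), not any specific API.

```python
# Minimal sketch of a pre-filing citation checkpoint (illustrative only).
# lookup_citation() is a hypothetical stub; wire it to your firm's
# licensed research database (Westlaw, LexisNexis, CourtListener, etc.).

def lookup_citation(citation: str) -> bool:
    """Return True if the citation resolves in a trusted legal database."""
    raise NotImplementedError("Connect to your firm's research database")

def _safe_lookup(citation: str) -> bool:
    try:
        return lookup_citation(citation)
    except Exception:
        # Treat lookup failures as unverified, never as verified.
        return False

def validate_brief(citations: list[str]) -> list[str]:
    """Return the citations that could NOT be verified.

    A non-empty result should block filing until a human reviews
    each flagged citation against a primary source.
    """
    return [c for c in citations if not _safe_lookup(c)]

if __name__ == "__main__":
    draft_citations = ["Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)"]
    unverified = validate_brief(draft_citations)
    if unverified:
        print("BLOCK FILING - unverified citations:", unverified)
    else:
        print("All citations verified.")
```

The design choice that matters is the failure mode: anything the lookup cannot confirm is treated as unverified and routed to a human reviewer, rather than silently passed through.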
So why are lawyers—some of the most risk-averse professionals—falling into this trap? The reasons are likely twofold: a lack of AI competency training, and a lack of clear governance over when and how these tools may be used.
Companies must recognize that generative AI is not a “set it and forget it” tool. It requires oversight, policy alignment, and targeted training to ensure it is used ethically and effectively.
Understandably, most organizations are behind the curve when it comes to AI competency training, AI safety protocols, and governance policies; it is a lot of change in a very short amount of time. The good news is that a mature human risk management program addresses these challenges by integrating AI risk awareness into workforce training, policies, and daily operations.
Here’s how companies can proactively mitigate AI-related legal risks: define clear policies on which AI tools are approved and for what purposes, train employees to verify AI-generated output before relying on it, and build validation checkpoints into high-stakes workflows such as court filings. A sketch of what a policy gate can look like follows below.
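As an illustration of “policy alignment” in practice, here is a hypothetical acceptable-use gate: a small Python check that an AI tool is approved for the data classification involved before an employee uses it. The tool names, classifications, and the APPROVED_USES table are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical AI acceptable-use gate: illustrative values only.
# Real policy entries should come from your governance and legal teams.

APPROVED_USES = {
    # (tool, data classification) -> allowed?
    ("enterprise-llm", "public"): True,
    ("enterprise-llm", "internal"): True,
    ("enterprise-llm", "client-confidential"): False,  # requires counsel sign-off
    ("public-chatbot", "public"): True,
    ("public-chatbot", "internal"): False,
    ("public-chatbot", "client-confidential"): False,
}

def is_use_approved(tool: str, data_class: str) -> bool:
    """Default-deny: unknown tools or classifications are not approved."""
    return APPROVED_USES.get((tool, data_class), False)

if __name__ == "__main__":
    print(is_use_approved("public-chatbot", "client-confidential"))  # False
    print(is_use_approved("enterprise-llm", "internal"))             # True
```

The default-deny lookup is the key point: tools and data types that nobody has reviewed are blocked until governance catches up, which is exactly where shadow AI otherwise slips in.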
AI’s integration into legal work and other high-stakes professions is inevitable. The question is whether organizations will take a proactive or reactive approach. Companies that establish AI competency, governance, and risk mitigation strategies now will be the ones that thrive in the era of AI-driven work.
Human risk management is about getting ahead of emerging threats before they become costly mistakes. Has your organization taken the necessary steps to train its legal staff and build AI risk resilience? If not, now is the time to act.
Want to ensure your company is AI-ready? Let’s talk about how we can help you scale AI training and governance for your workforce today.