How Attorneys Can Responsibly Use AI in Legal Drafting: Avoiding Sanctions for ChatGPT “Hallucinations”

Artificial intelligence tools like ChatGPT are increasingly attractive to attorneys facing time pressures and high client expectations. Drafting briefs, motions, or discovery responses can be faster when supported by a generative AI system. But in recent months, courts have penalized attorneys who relied on AI-generated content without proper vetting. A New York federal court fined lawyers who filed a brief containing fabricated case citations, and more recently, a California attorney was sanctioned for filing a response with false authorities generated by ChatGPT, as detailed in a CalMatters report.
The lesson is clear: AI can be a powerful drafting assistant, but attorneys must put safeguards in place to avoid ethical missteps and professional sanctions. Below, we’ll explore how lawyers can use ChatGPT and similar tools responsibly for legal writing, the practical safeguards that keep hallucinations out of filings, and the ethical frameworks guiding AI in legal practice.
The Allure and Risk of AI in Legal Practice
The appeal of AI is obvious. ChatGPT can summarize case law, draft arguments, and restructure complex documents in seconds. For small firms, solo practitioners, or overburdened litigation teams, this promises efficiency gains that rival costly research platforms.
But generative AI systems are not databases of precedent. They generate language based on probability, not legal authority. This means fabricated cases, incorrect quotations, and misleading interpretations can easily slip into drafts. Courts have shown little patience for attorneys who delegate legal judgment to a machine without rigorous review.
A software expert witness familiar with AI systems can explain that these hallucinations stem from how the technology works. Unlike Westlaw or Lexis, which provide verifiable sources, ChatGPT predicts text based on patterns. It may “cite” cases that sound plausible but do not exist. Without verification, filing such content can amount to a violation of Rule 11 or equivalent standards requiring factual accuracy.
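To make that concrete, here is a toy sketch in ordinary Python (emphatically not how ChatGPT is built, just the same pattern-completion failure in miniature): a tiny word-level Markov chain trained on a few citation-shaped strings will cheerfully generate a citation that appears in no reporter.

```python
import random

# Toy illustration of pattern completion: a word-level Markov chain, far
# simpler than a large language model but failing the same way. It learns
# which words tend to follow which; it has no database of real cases to
# check against. The training citations below are invented for illustration.
training_text = (
    "Smith v. Jones , 100 U.S. 200 . "
    "Jones v. Board of Education , 300 U.S. 400 . "
    "Board of Education v. Smith , 500 U.S. 600 . "
)

# Record, for each word, every word observed immediately after it.
words = training_text.split()
transitions = {}
for current, nxt in zip(words, words[1:]):
    transitions.setdefault(current, []).append(nxt)

# Generate a "citation" by repeatedly sampling a plausible next word.
random.seed(4)
token = "Smith"
output = [token]
while token in transitions and len(output) < 12:
    token = random.choice(transitions[token])
    output.append(token)

# The result splices fragments of the training text into something
# citation-shaped that corresponds to no real case: a hallucination.
print(" ".join(output))
```

The point of the toy is that nothing in the generation step ever consults an authority; a fluent, confident output is exactly what the mechanism produces whether or not the underlying case exists.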
Practical Safeguards for Attorneys Using AI
Attorneys do not need to abandon AI altogether. Instead, they should treat ChatGPT as a drafting assistant rather than a research authority. Here are safeguards every firm can implement:
1. Always Verify Citations
If AI outputs a case name, statute, or quotation, verify it against a trusted legal research database. The New York State Bar Association and other organizations emphasize that attorneys must independently confirm all authorities.
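One lightweight way to operationalize this, sketched below in Python, is to mechanically surface everything citation-shaped in a draft and turn it into a checklist for human verification. The regex and sample text are our own illustrations, not any bar’s standard, and the pattern is deliberately rough: it supplements attorney review, never replaces it.

```python
import re

# Minimal sketch: flag citation-shaped strings so a human verifies each one
# in a trusted database before filing. The pattern catches common
# "Party v. Party, 123 F.3d 456" forms and knowingly misses statutes and
# short-form cites.
CITATION_RE = re.compile(
    r"(?:[A-Z][A-Za-z.'\-]+ )+v\. (?:[A-Z][A-Za-z.'\-]+,? )+"
    r"\d+ (?:U\.S\.|F\.\dd|F\. Supp\.(?: \dd)?|S\. Ct\.) \d+"
)

def citation_checklist(draft: str) -> list[str]:
    """Return every citation-like string found in the draft, deduplicated."""
    return sorted(set(CITATION_RE.findall(draft)))

# "Varghese" below is the fabricated cite from the widely reported New York
# sanctions case; a checklist like this forces someone to look it up.
draft = (
    "As held in Smith v. Jones, 100 U.S. 200, sanctions are proper. "
    "See also Varghese v. China Southern Airlines, 925 F.3d 1339."
)
for cite in citation_checklist(draft):
    print(f"[ ] verify in Westlaw/Lexis: {cite}")
```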
2. Use AI for Structure, Not Substance
AI is most effective for outlining arguments, rephrasing text for clarity, or generating alternative organizational structures. Substantive content—particularly citations and precedent—should come from established legal research tools.
3. Maintain Human-in-the-Loop Review
Firms should designate attorneys or paralegals to review every AI-assisted draft before filing. This human-in-the-loop approach aligns with the ABA’s Model Rules of Professional Conduct on attorney competence.
4. Keep Client Confidentiality Secure
Never paste sensitive client data directly into a public AI platform. Platforms may retain inputs, creating risks of inadvertent disclosure. Many bar associations, including the California State Bar, caution against exposing confidential material.
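A cheap first guardrail, sketched below, is a scrubbing pass that strips obvious identifiers before any text leaves the firm. The patterns and placeholder labels are our illustrations; a regex pass misses context-dependent identifiers such as client names, so it complements vetted redaction tools and firm policy rather than replacing them.

```python
import re

# Minimal sketch, not a complete anonymizer: replace obvious identifiers
# with labeled placeholders before text is sent to any external service.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

# The name "Jane Doe" survives untouched -- exactly the kind of identifier
# a regex pass cannot reliably catch.
print(redact("Call Jane Doe at 415-555-0137 or jane@example.com."))
```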
5. Consider Enterprise AI Solutions
Larger firms may adopt secure, enterprise-level AI tools that integrate with firm document management systems. These solutions provide guardrails, logging, and data protection features absent from consumer-facing versions of ChatGPT.
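As a sketch of what guardrails and logging can look like in practice, the wrapper below writes an audit record before any prompt reaches a model endpoint. Everything here is hypothetical: `call_model` stands in for whatever vetted enterprise endpoint a firm actually uses, and the record fields are illustrative, not a compliance standard.

```python
import hashlib
import json
import time

AUDIT_LOG = "ai_audit_log.jsonl"

def call_model(prompt: str) -> str:
    # Placeholder: substitute the firm's approved, access-controlled endpoint.
    return "<model response>"

def logged_completion(prompt: str, user: str, matter_id: str) -> str:
    """Append an audit record, then call the model."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "user": user,
        "matter_id": matter_id,
        # Hash rather than store the prompt verbatim so the log does not
        # become a second repository of confidential text.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return call_model(prompt)
```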
Ethical Considerations: AI and the Duty of Competence
Attorneys have ethical duties that intersect directly with AI use. Rule 1.1 of the ABA Model Rules requires attorneys to maintain competence, which includes understanding the benefits and risks of relevant technology. This “duty of technological competence” has been adopted in some 40 states.
Submitting a brief with fabricated authorities is not only embarrassing—it may constitute a breach of this duty. Judges have stressed that lawyers cannot outsource their professional responsibilities to software. A software expert witness can help courts evaluate whether an attorney took reasonable steps to validate AI outputs, but ultimately, responsibility lies with the filing attorney.
Similarly, Rule 1.6 imposes obligations to protect client confidentiality. Feeding sensitive case details into ChatGPT without safeguards may risk violating this rule. Attorneys must therefore choose platforms and workflows that minimize disclosure risks.
Recent Sanctions and Regulatory Developments
Cases of AI misuse in legal drafting are no longer isolated anecdotes. Beyond the CalMatters coverage, courts in Texas, Illinois, and New York have issued fines and sanctions for improper reliance on AI-generated filings.
Bar associations are responding. The Florida Bar has issued draft guidance warning attorneys against citing unverified AI-generated authorities. The North Carolina State Bar is exploring rule changes to address AI. And the ABA recently convened a task force on artificial intelligence in the law, signaling that nationwide guidelines are on the horizon.
Courts themselves are also adapting. Some judges have issued standing orders requiring attorneys to disclose whether AI tools were used in drafting. This transparency trend is expected to expand, making it critical for attorneys to implement consistent disclosure practices.
Responsible Use: A Path Forward
AI is not going away. Clients will increasingly expect attorneys to leverage technology for efficiency. The challenge is balancing innovation with professional responsibility. Attorneys who develop internal firm policies for AI use will be better positioned than those who react only when sanctions arise.
Practical steps include:
- Drafting firmwide AI usage guidelines.
- Training staff on the limits of ChatGPT and similar systems.
- Documenting verification steps in case a filing is later challenged (a minimal record-keeping sketch follows this list).
- Consulting with technical professionals, such as a software expert witness, when adopting new AI tools.
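For the documentation step, even a shared log beats nothing. The sketch below appends one row per verified authority; the field names are our own, not a bar-mandated format, but a record like this gives the firm something concrete to point to if a filing is questioned.

```python
import csv
import time

def log_verification(citation: str, verified_by: str, source: str,
                     path: str = "verification_log.csv") -> None:
    """Append one record of who verified which authority, where, and when."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [time.strftime("%Y-%m-%d"), citation, verified_by, source]
        )

# Hypothetical example entry.
log_verification("Smith v. Jones, 100 U.S. 200", "A. Paralegal", "Westlaw")
```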
By approaching AI with caution and discipline, attorneys can harness its benefits without falling into ethical traps.
Conclusion
Generative AI tools like ChatGPT offer powerful opportunities for legal drafting but carry equally serious risks. Attorneys who rely on AI outputs without verification risk sanctions, client harm, and reputational damage. By implementing safeguards—verification of authorities, human review, confidentiality protections, and adherence to ethical duties—lawyers can responsibly integrate AI into their practice.
For firms navigating this landscape, outside guidance can be invaluable. Sidespin Group helps businesses develop AI strategies and provides software expert witness services for litigation matters. With the right approach, attorneys can embrace AI’s efficiency while upholding the professional standards that define the practice of law.