Well-Known US Law Firm Apologizes for AI. “We Regret the Situation”


Renowned Wall Street Law Firm Faces Backlash Over AI-Generated Court Documents

The need to use artificial intelligence responsibly is a frequent topic of debate. Unfortunately, not everyone follows these crucial guidelines. The latest cautionary tale involves a highly respected New York law firm that recently admitted to breaching procedures governing the use of AI in court filings.

Unethical Practices at a Premier Legal Firm

Sullivan & Cromwell, a prestigious law firm headquartered on Wall Street, recently found itself at the center of a high-profile legal proceeding. The case involved the British Virgin Islands bringing an action against the Prince Group, a conglomerate alleged to be a front for a cyber-fraud syndicate operating in Cambodia.

During the proceedings, it emerged that the firm had submitted court documents containing “AI hallucinations”—false or fabricated information generated by an artificial intelligence model. The lawyers admitted to the court that the AI-assisted documents included critical errors.

These mistakes involved:

  • Inaccuracies in cited statements and testimonies.
  • Incorrect interpretations of established legal precedents.
  • Fabricated legal citations that did not correspond to actual laws.

The Apology and Unanswered Questions

In an official statement, the firm emphasized that it “deeply regrets this situation.” It assured the public and the courts that it generally enforces strict rules on AI usage and characterized the incident as an isolated exception to its otherwise rigorous standards.

However, the statement notably omitted several key details. The firm did not disclose which individuals were responsible for the improper use of the technology, nor did it identify which large language model (LLM) was used. The dangers of unverified AI output are becoming glaringly evident across many sectors, from legal missteps to serious risks to mental health and personal safety.

Global Courts Say “Enough” to Unregulated AI

The problem of using generative AI in litigation is far more pervasive than initially anticipated. The Australian justice system, for example, has encountered these issues repeatedly. The situation escalated to the point where the Australian Supreme Court decided to intervene, issuing stringent new directives that all legal practitioners must follow.

Rather than imposing an outright ban on artificial intelligence, these new guidelines establish a framework for safe usage:

  • Mandatory Verification: Lawyers must manually verify all AI-generated data against actual facts, current legal statuses, and authentic source quotes.
  • Approved Use Cases: AI remains permitted for drafting summaries, conducting preliminary legal analysis, and generating illustrative images.
  • Ultimate Responsibility: Human attorneys remain entirely accountable for any document submitted to the court, regardless of the tools used to draft it.

Just as in the legal field, medical professionals are facing similar scrutiny. This was highlighted recently when a hospital had to respond to allegations regarding an AI-generated emergency room discharge summary. Both industries clearly require rigorous human oversight to prevent disastrous consequences.

Frequently Asked Questions (FAQ)


What are “AI hallucinations” in a legal context?

AI hallucinations occur when an artificial intelligence model generates confident but entirely false information. In a legal context, this often manifests as fabricated court cases, fake judicial quotes, or incorrect interpretations of the law, which can severely compromise a legal proceeding if not caught by human review.


Can lawyers still use AI under the new court guidelines?

Yes, most modern court guidelines, such as those issued by the Australian Supreme Court, do not ban AI outright. Lawyers can use AI for administrative tasks, summarizing large volumes of text, and initial analysis. However, they are strictly required to fact-check and verify any AI-generated legal citations or quotes before submitting them to the court.


Why didn’t the law firm disclose which AI model they used?

The law firm likely withheld the name of the specific AI model to avoid shifting the blame onto the tech provider and to protect internal operational details. Ultimately, the responsibility lies with the human attorneys for failing to verify the output, regardless of the underlying tool used.

Source: The Guardian. Opening photo: Gemini.
