1 June 2023 | Copyright | Sarah Speight

Judge orders attorneys to declare non-use of AI

Texas judge issues order requiring attorneys to certify that briefs were not drafted using generative AI tools | Platforms such as ChatGPT ‘make stuff up’ and are ‘unbound by any sense of duty, honour, or justice’, he said | EU hints at AI code of conduct ‘within weeks’.

Attorneys will now be legally obliged to certify either that they have not relied upon generative AI tools such as ChatGPT, Harvey or Google Bard to draft any part of a filing submitted at a Texas court, or that any AI-drafted material has been checked for accuracy by a human.

Judge Brantley Starr of the US District Court for the Northern District of Texas issued a standing order on Tuesday, May 30, requiring all attorneys appearing before his court to make declarations to that effect.

The order requires all attorneys to file a certificate, ‘Mandatory Certification Regarding Generative Artificial Intelligence’, attesting that “no portion of the filing was drafted by” generative AI, and that any portion drafted by generative [AI] was “checked for accuracy, using print reporters or traditional legal databases, by a human being.”

In the order, 44-year-old Starr wrote: “These [AI] platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them.”

He went on to explain why. “These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations.”

The other issue, he said, is reliability and bias.

“While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath.

“As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honour, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle.

“Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.”

He added that the court will “strike any filing from an attorney” who fails to file the certificate on the docket.

Higher-than-human standards

Franklin Graves, a Tennessee-based tech IP lawyer and in-house attorney for HCA Healthcare, said he appreciated Judge Starr's statement that such platforms “are incredibly powerful and have many uses in the law…”

But he disagreed with Judge Starr’s point that “legal briefing is not one of them.”

“We are going to see a point at which a generative AI system is perfectly capable of briefing, arguably to a higher standard than a human,” wrote Graves on Twitter.

However, he went on: “I'm excited to see a judicial system recognising the importance of AI ethics and explicit call outs to hallucinations, bias and prejudice, transparency, and reliability.

“This MUST continue to be a focus for [the] #LegalTech community. We must also include privacy, sustainability, and accountability.”

He added that he hopes the American Bar Association “or similar organisations can adopt standard or model practices that can bring uniformity across jurisdictions as more certification requirements and similar orders are released by courts and legal systems.”

Attorney sanctioned over use of ChatGPT

The mandate comes after Steven Schwartz, an attorney in New York who has more than 30 years’ legal experience, was found to have submitted a brief to a Manhattan court that included bogus citations and quotes from six fictional cases.

Schwartz, an attorney at Levidow, Levidow and Oberman, admitted that ChatGPT had generated the information, but countered that he had been “unaware of the possibility that its content could be false.”

A hearing is scheduled for June 8 to decide whether Schwartz and a colleague, Peter LoDuca, should be sanctioned.

Generative AI tools have exploded in recent months and are already being used and developed by law firms. For example, Hogan Lovells recently launched Eltemate, which it said unifies the firm’s legal and AI software tools.

UK firm EIP announced the launch of its new AI software, Codiphy, on May 9, which provides specialist advice on commercial law and IP to clients in relation to software, data, and AI.

And in February this year, Allen & Overy announced the integration of Harvey into its global practice.

Harvey—built using OpenAI models based on GPT-4, and modified for law firms—will be used by more than 3,500 of A&O’s lawyers across 43 offices, said the firm.

Last month, Sam Altman, the CEO of OpenAI—which created the generative AI tools DALL-E 2, ChatGPT and GPT-4—testified before the US Congress on efforts to oversee and establish safeguards for AI.

Altman said at the hearing on May 16: “My worst fears are that we [the AI industry] cause significant harm to the world.

“I think if this technology goes wrong, it can go quite wrong.”

EU: AI code of conduct 'within weeks'

Meanwhile, in the EU, Margrethe Vestager—executive vice-president of the European Commission—said that she believed a draft code of conduct on AI could be drawn up within weeks, allowing industry to commit to a final proposal "very, very soon", reported Reuters.

Speaking at a news conference yesterday, May 31, Vestager said that the US and EU should push a voluntary code to provide safeguards while new laws, such as the EU’s AI Act, are developed.

"Generative AI is a complete game-changer," she said. "Everyone knows this is the next powerful thing. So within the next weeks we will advance a draft of an AI code of conduct.”

Leaders of the G7 at the Hiroshima Summit also called for global regulation of AI to make the technology “trustworthy”. In a meeting last week, May 26, they agreed to create a forum called the ‘Hiroshima AI process’ to discuss the issues presented by generative AI, such as copyright and disinformation, by the end of 2023.
