29 June 2023 | Marisa Woutersen

‘Beware hallucinations’: INTA panel shares risks of AI tools

Panellists discussed the benefits, pitfalls and ethical concerns of tools such as ChatGPT for legal professionals at the association’s virtual meeting, says Marisa Woutersen.

Day one of the International Trademark Association’s (INTA) 2023 Annual Virtual Meeting opened with a session exploring the use of generative AI technology in the IP sector.

Running from June 27-29, the event’s first seminar brought together a panel consisting of Peter Van Dyck, partner at Allen & Overy, Matt Hervey, head of artificial intelligence at Gowling WLG, Susan Kennedy, assistant professor of philosophy at Santa Clara University, and Stephen Lee, chief IP counsel at Target.

Lee highlighted the benefits that AI tools bring to large retailers.

“Anything you can envision and make things more efficient and productive… AI can improve upon and definitely get some more efficiency out,” said Lee, emphasising how AI tools can boost productivity, efficiency, and innovation across a range of tasks such as document drafting, idea generation, artwork, coding, and customer service.

Legal traps and pitfalls

When discussing the legal pitfalls that IP professionals and their clients may face when using AI systems, Van Dyck underscored two major risks: data leakage and the potential infringement of IP rights.

The unintentional disclosure of sensitive information entered as prompts into AI systems can compromise confidentiality, while training AI systems on copyrighted or trademarked data may lead to infringement, said Van Dyck.

Hervey expanded on the risk of IP infringement, particularly with the shift from narrow AI to generative AI. Generative AI models, such as large language models, are “trained on vast amounts of often unstructured data”, including copyrighted materials, and therefore raise concerns about potential infringement both in the training process and in the outputs they generate, he said.

Hallucinations, inaccuracy, fake citations

The panel then addressed a specific pitfall: AI ‘hallucinations’, in which a model produces plausible-sounding but incorrect output.

“We now need to realise that if [legal professionals] have used a large language model, it might produce incorrect information, which is incredibly plausible, well-written, maybe even includes fake citations,” said Hervey.

“It does require a different mindset as to where these tools should and shouldn't be used. I'd also say there's increasing research on tech ventures, to reduce the risk of hallucinations, which include refining large language models on a corpus of more relevant information to your subject matter, to require it to search across specific trusted datasets to give external citations, not its own fake citations,” he continued.

Human review, refining training on more relevant information, and searching trusted datasets were suggested as strategies to mitigate the risks.

Van Dyck shared his personal experience with hallucinations from Harvey, the AI tool developed in partnership with his firm, Allen & Overy.

“I asked the tool to help me with legal research to give me some case law on certain trademark-related topics. It came up with three cases, two of which were entirely spot on, did exist, were not hallucinated. But then a third one, that was exactly the same format, was hallucinated,” he recalled, highlighting the importance of human review.

“The main way to deal with hallucinations today is by training our people, having policies in place, and emphasising the need to double-check the outputs.

“We’re also working on technical solutions to make the verification process easier,” he added.

Kennedy noted that hallucinating AI tools can confidently misrepresent facts and fabricate sources.

She emphasised the importance for companies to have a policy in place and make sure that people “understand the limitations, what’s an appropriate use of this, and how important it is to have that element of human review”.

“There’s no substitute yet for human judgement and expertise,” she added.

Ethical concerns

Beyond the legal risks, the panel said, AI raises ethical concerns.

“There could be bias in the data sets themselves… the data they’re pulling from is the internet. And we’ve all seen the reliability and sometimes the unreliability of the internet,” said Lee.

Privacy risks, bias in training data, and socio-technical concerns such as automation bias were some of the issues raised by the panel.

The speakers stressed the importance of transparency, disclosure, and context-specific risk mitigation strategies.

The speakers also explored the challenges surrounding copyright and authorship in relation to AI-generated content.

On IP rights and AI, Lee stressed the need for clarity on ownership and indemnity, recognising that the role of IP professionals will evolve as AI tools become more prevalent in law firms and companies.

Van Dyck responded: “I think the short-term I see is becoming more efficient. But I don’t see a fundamental change.”

“There will be certain simpler jobs that may disappear. But they’ll be replaced by new jobs,” he continued.

The speakers agreed that AI tools are currently more of a co-pilot and not capable of operating independently.

However, challenges related to different jurisdictions and regulations in using AI were also acknowledged, particularly with large models trained on diverse data sources.

Kennedy explained: “Figuring out ways that we can actually coordinate this, now that we have these really large models that have huge amounts of training data is becoming a hard-pressed issue.

“Trying to come up with a regulation for bias is going to be really difficult when we don’t yet have a consensus on [whether a] model is biased or not,” she continued.

AI legislation

Van Dyck highlighted positive aspects of the EU AI Act, such as its risk-based approach and emphasis on transparency, but argued that legal certainty is the more pressing need.

“To me, we don’t necessarily need a whole lot more legislation. I think what we need today is legal certainty… We are reaching the stage that… AI-specific legislation is good and can work. But we also need legal certainty on the existing legal frameworks,” said Van Dyck.
