8 August 2023 | Features | Influential Women in IP | Rose Place

Generative AI tools: An agent of bias?

In the legal sector, the use of artificial intelligence (AI) is increasing. Some law firms have adopted AI tools that process documents and review contracts, while others are going as far as generating legal recommendations and predictions.

Meanwhile, huge strides are being made in the wider industry to encourage equality and diversity initiatives: the theme of this year’s World Intellectual Property Day, for instance, was Women and IP: Accelerating Innovation and Creativity.

However, by using these tools, might law firms and IP services companies be unwittingly holding back efforts to improve diversity?

Legal’s love of AI

Generative AI uses existing data to create new content such as text, audio, or video. Under this umbrella fall large language models (LLMs), such as ChatGPT, Claude, and Bard: learning algorithms that generate text based on patterns in large datasets, whether compiled by humans or scraped from the internet.
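How closely a language model's output tracks its training data can be seen even in a toy example. The sketch below is a minimal bigram generator in Python, not any real product: it learns only which word follows which in its corpus, so whatever skew the corpus contains is exactly what it reproduces.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Record, for each word, the words that follow it in the corpus."""
    words = corpus.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model: dict, start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    output = [start]
    for _ in range(length):
        followers = model.get(output[-1])
        if not followers:
            break
        # Sampling mirrors corpus frequencies: a skewed corpus gives skewed output.
        output.append(random.choice(followers))
    return " ".join(output)

model = train_bigram_model("the inventor filed a patent and the inventor won the case")
print(generate(model, "the"))
```

Real LLMs replace the bigram table with billions of learned parameters, but the principle is the same: the model can only echo patterns present in its data.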

Parminder Lally, partner at Appleyard Lees, believes that IP lawyers can benefit from the technology.

“Generative AI tools could help to speed up the process to draft a patent application or a response to a patent office objection, which could potentially impact cost and efficiency. It could also impact the billable hour model that many firms use,” says Lally.

“Large firms, particularly multinational law firms with IP departments, may have the resources to start using these AI tools or developing their own custom ones to streamline their activities.”

Automating routine tasks allows for more client-facing time and, in the competitive field of law, those offering the most efficient and cost-effective solutions may gain an advantage.

Adjacent to AI’s rapid development, however, are its risks. “We’re really enthusiastic about generative large language models but recognise that there are issues that have to be navigated very carefully,” says Nina O’Sullivan, partner at Mishcon de Reya.

Corrupted data

As companies increase their AI use and develop custom tools, there is growing concern that the outputs of these tools contain biases. Lally explains: “AI tools have been trained using huge volumes of data [but] they do not know anything about the quality of that data.”

The assumption is that machines are neutral and will therefore produce neutral outputs. On the contrary, the data that informs AI is human-made, whether curated for a specific purpose or scraped from the internet.

“AI is taught on human information, which may contain historic human bias, so there is certainly a risk in AI systems that this bias is repeated,” says Daniel Gray, associate at Mishcon de Reya.

“The increased risk… is that where human bias may sometimes appear more sporadic or random, implicit bias in AI systems may be more systematic and consistent.”

Biased results have real-life consequences, according to Lally. “AI tools could potentially pick up on patterns that human reviewers do not spot as clearly, such as differences in language or style that could be more prominent among one group of people than others.

“If an AI tool is used for recruitment or promotion, it could reject some candidates for reasons a human might not even spot.”

In 2014, Amazon developed an experimental AI hiring tool, trained on the company’s past recruitment data, to review job applications and highlight the top candidates. Because those historical hires were predominantly male, the system learned to favour male candidates, penalising applications that included the word ‘women’s’.
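A minimal sketch of how this can happen, using scikit-learn and invented toy data (nothing here reflects Amazon’s actual system): a text classifier trained on skewed historical hiring decisions assigns a negative weight to gendered words, and will then silently downgrade any new application containing them.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented toy data: past CVs and whether each was hired (1) or not (0).
# The skew is in the historical labels, not in the candidates' ability.
cvs = [
    "captain of chess club, software engineer",
    "software engineer, hackathon winner",
    "captain of women's chess club, software engineer",
    "women's coding society lead, software engineer",
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(cvs), hired)

# Inspect the learned weights: 'women' receives a negative coefficient,
# meaning the model has encoded the historical bias as a scoring rule.
weights = dict(zip(vectorizer.get_feature_names_out(), clf.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```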

The problem has manifested itself in other ways. A Black MIT researcher found that facial recognition software could not detect her face until she wore a white mask, likely because the software had been trained mainly on images of white faces.

WIPO reports that in 2022, only 16.2% of inventors named in international patent applications were women. If AI perpetuates historical human biases, the IP industry risks repeating its mistakes, for example if patent offices rely on AI to examine applications.

Flavia Murad Schaal, founding partner of Mansur Murad, is concerned that training data may have gaps where information from diverse groups is less accessible and harder to obtain, and that this information “will be excluded from the learning process and create a bias in the examination procedure.”

The ‘black box’ problem

Tackling such discrimination is difficult because of a lack of transparency about how AI systems reach their conclusions.

Known as the “black box” problem, this opacity means that creators struggle to trace the specific pieces of data behind an AI’s decision. To mitigate the challenge, some of the EU’s newly drafted AI regulations focus on ‘explainability’ and transparency, addressing the bias issue either directly or indirectly.

‘Explainability’ requires an understanding of how an AI conclusion was reached, and forms a large part of the EU’s AI Act, currently being developed.

Blazing a trail in AI legislation, the Act expects creators to have a full understanding of the data or algorithm that is feeding their AI system.

Transparency is a common theme within the Act, and AI creators will be expected to keep full records of the datasets and processes that led to their AI’s development.

Importantly, AI systems must be consistently monitored and reviewed, particularly as learning systems are fed new information and start producing different outputs.

Gray explains: “The AI Act is so important in requiring companies to go back to the very foundations and explain how they’ve taken steps to avoid the inherent risks of the systems.”
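For simple models, explainability of the kind the Act envisages is tractable. Continuing the hypothetical hiring classifier sketched earlier (the data and names are invented), each token’s contribution to a decision is just its learned weight multiplied by its count, so the reason for a rejection can be read off directly; for deep, opaque models this is far harder, which is precisely the black-box concern.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Same invented toy data as the earlier sketch.
cvs = [
    "captain of chess club, software engineer",
    "software engineer, hackathon winner",
    "captain of women's chess club, software engineer",
    "women's coding society lead, software engineer",
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(cvs), hired)

def explain_decision(text: str) -> list:
    """Break one prediction into per-token contributions (weight x count)."""
    counts = vectorizer.transform([text]).toarray()[0]
    names = vectorizer.get_feature_names_out()
    scored = [(n, clf.coef_[0][i] * counts[i]) for i, n in enumerate(names) if counts[i]]
    return sorted(scored, key=lambda item: item[1])  # most negative first

# The tokens that dragged the score down can be read off directly.
print(explain_decision("captain of women's chess club"))
```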

The New York City Department of Consumer and Worker Protection (DCWP) has taken a firmer approach, tackling AI bias specifically.

The law, which the DCWP began enforcing on July 5, 2023, bars employers from using AI for recruitment unless the system has undergone an independent bias audit, with the results made publicly available. Additionally, job applicants must be given an opportunity to request a different, non-AI selection process.
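The audits the law mandates centre on selection rates: under the DCWP’s rules, the rate at which each demographic category is selected is divided by the rate for the most-selected category to give an impact ratio. A minimal sketch with invented numbers (the categories and counts are illustrative only):

```python
def impact_ratios(selected: dict, applicants: dict) -> dict:
    """Selection rate per category divided by the highest category's rate."""
    rates = {cat: selected[cat] / applicants[cat] for cat in applicants}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Invented audit data: applicants and AI-selected candidates per category.
applicants = {"men": 200, "women": 200}
selected = {"men": 60, "women": 30}

for category, ratio in impact_ratios(selected, applicants).items():
    # Ratios well below 1.0 flag adverse impact (0.8 is a common rule of thumb).
    print(f"{category}: impact ratio = {ratio:.2f}")
```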

The UK’s AI white paper, published in March 2023, echoes the EU Act’s emphasis on explainability and transparency. However, it does not yet enforce any regulations through legislation, an omission that drew criticism during a July debate in the House of Lords.

O’Sullivan says these issues present a problem for lawmakers.

“IP is all about encouraging creativity. The [UK] government wants to encourage the use of AI but they have to drive very carefully between doing that and at the same time not damaging the interests of creators.”

The future of AI in law

The potential impact of AI on diversity has been flagged but largely remains unknown territory.

Whether other jurisdictions will follow the EU’s AI Act, which has been likened to the union’s widely adopted General Data Protection Regulation, is an open question. For multinational companies, however, complying with the highest regulatory standard is probably the safest bet.

Further recommendations to avoid discriminatory outcomes include involving diverse voices at the developmental stages of AI to ensure that the data reflects all groups.

The recurring theme is the continuous need for human oversight throughout the entire AI life-cycle, beginning with high-quality and diversity-informed data.
