30 June 2023 | Features | Influential Women in IP | Marisa Woutersen

Unmasking AI bias

The increased use of artificial intelligence (AI) could be reinforcing many biases, according to Capital B, a leading Black-led, nonprofit US organisation.

In a report published in Vox magazine last month, the organisation shared concerns that such revolutionary technology could exacerbate existing discrimination.

The report comes after EU lawmakers backed the introduction of more stringent AI rules on June 14.

The landmark AI Act, which has been in development since 2021, is thought to be the first of its kind in the world.

Consequently, the act could herald a ban on AI systems used for biometric surveillance, emotion recognition, and predictive policing.

For many, this regulation is critically important. Transparency in the decision-making processes of AI systems is limited, often making it challenging to trust the fairness and objectivity of AI-generated outcomes.

Facial recognition issues

As Capital B’s report notes, while facial recognition software has been celebrated for its potential, it has been shown to fall short when recognising Black faces accurately.

AI applications in areas such as hiring, loans, and healthcare have been found to harbour discriminatory biases.

Further, social media filters tend to work less effectively on darker skin tones, and voice-activated assistants such as Siri and Alexa have shown discriminatory behaviour in their speech recognition technologies.

AI bias has also seeped into educational institutions. During the COVID-19 pandemic, anti-cheating software that used facial recognition and video analysis to monitor virtual test-taking was found to penalise students with disabilities, as well as those who struggled to secure stable internet connections or private spaces in which to take tests.

This has, unsurprisingly, adversely affected students with disabilities and increased anxiety levels for test-takers with mental health conditions, according to Capital B.

The root of AI bias can be traced back to inherent biases in the training data, assigned tasks, and the algorithms governing learning processes.

After all, human beings are the forces behind AI and their input data can lack diversity or lean towards certain demographics—cementing prevailing prejudices.

Problematic inputs

Speaking at the International Trademark Association (INTA) virtual meeting held in late June, Stephen Lee, chief IP counsel at Target, shared his thoughts about the problems posed by generative AI models.

During the session “Can a trademark professional use ChatGPT legally and ethically?” he pointed out that there's “all sorts of bias that could be in the models’ inputs”, including within the data sets used to train the model, that can then lead to problematic outputs.

Fellow panellist Susan Kennedy, assistant professor of philosophy at Santa Clara University, echoed these concerns, describing it as a “garbage in, garbage out” problem.

In this way, she said, “societal prejudices or issues can be perpetuated and reflected back in those outputs”, especially if the data has been gleaned from the internet rather than scholarly sources.

“This affects end users who are using these models, when they start to over-rely on or overestimate the ability of them. I'm concerned that people may be starting to trust these systems a little too much or are a little too confident in their outputs. We need to ensure that humans are always going to be reviewing the outputs, because I really do think that's essential,” explained Kennedy.

But, she added, it can be hard to uncover some of these biases when there is a dearth of transparency about how these models are built.

“What exactly is the training data, and how is that going to look different in different jurisdictions?”, she queried.

“It can be difficult to be an informed user of some of these systems and to try and understand what kinds of biases it might have: you have to be vigilant.”

Issues for IP

Such problems pose dilemmas for the IP industry as well, as it increasingly embraces the potential of AI.

As Parminder Lally, partner at Appleyard Lees, points out, many in IP are exploring how generative AI tools can be used to make aspects of their services more efficient.

“For example, there are a number of tools available to potentially accelerate patent drafting or to review prior art. Such tools could be used by patent offices and courts too,” she tells WIPR.

But how can the sector ensure that it avoids the bias pitfalls presented by AI? In a series of reports, AI in the workplace, law firm Mishcon de Reya highlights the critical importance of addressing AI bias.

And as WIPR’s 2022 Diversity report illustrates, the sector’s progress towards a more visibly inclusive profession is still tentative at best.

Fewer than half of respondents felt that the sector was doing well in improving diversity, or were unsure about its progress. And despite the focus on diversity and inclusion (D&I) in recent years, a quarter of respondents doubted, or were unsure whether, their senior management was fully committed to D&I—the same figure as the year before.

Amid D&I efforts, could the increased use of AI in the sector compound a prevailing challenge when it comes to fostering inclusion in IP?

Despite such legitimate concerns, preventative measures, if enforced, could go a long way in quelling this disquiet.

If the AI Act is enforced, organisations using AI systems—including those in IP—will be required to provide evidence of the steps taken to prevent bias and its associated risks throughout the AI cycle.

The act further suggests practices such as identifying and analysing “known and foreseeable risks”, and accounting for design choices and biases in data sets.

Other stipulations include maintaining high quality management systems, reviewing a system’s performance, and establishing human oversight to fully understand AI capabilities and limitations.

The countdown to the realisation of the AI Act has begun. And as the proposal inches towards its final stages before becoming law, many D&I advocates will be hoping that happens sooner, rather than later.
