12 March 2024 | Features | Future of IP | Sarah Speight

‘Wishy-washy, behind the curve’: UK struggles to manage IP rights and AI

Off the back of a failed ‘bizarre experiment’ to mediate between Big Tech and creatives, the government’s failure to lead has left copyright holders in limbo, finds Sarah Speight.

The relationship between copyright owners and AI developers is in a state of limbo—and nowhere is this more keenly felt than in the UK.

Yesterday, March 11, members of the House of Lords met with senior representatives from the creative industries to discuss the Artificial Intelligence (Regulation) Bill, introduced last November by Lord Chris Holmes.

Citing generative AI models such as OpenAI’s DALL-E 3, and Getty Images’ litigation against Stability AI in the UK and US, Lord Holmes wrote on LinkedIn after the meeting: “Yet again we heard that these processes [data scraping by AI tools] are stealing the souls of artists…we have to think hard about creativity and the value we attach to it.

“Not that creatives are luddites—far from it—many are keen to incorporate AI [into] their own work, but on the other side of this lack of clarity in the law is the issue of liability and ethical use.”

It’s fair to say that the situation has prompted some strong feelings.

Lack of consensus

Last month, on February 6, working group talks between rights owners and AI developers collapsed after the parties tried and failed to reach agreement on the UK government’s proposed voluntary code on AI and copyright.

This lack of consensus was acknowledged in an announcement by Michelle Donelan, Secretary of State for Science, Innovation and Technology, who said: “Unfortunately, it is now clear that the working group will not be able to agree an effective voluntary code.”

She added that ministers from her department (DSIT) and the Department for Culture, Media and Sport (DCMS) “will now lead a period of engagement with the AI and rights holder sectors, seeking to ensure the workability and effectiveness of an approach that allows the AI and creative sectors to grow together in partnership.”

(Interestingly, the DCMS select committee chair, Dame Caroline Dinenage MP, condemned this as “woolly words” that “will not provide rights holders with certainty that their rights will be respected”.)

Donelan said the government would now “explore mechanisms for providing greater transparency so that rights holders can better understand whether content they produce is used as an input into AI models.”

But Lord Tim Clement-Jones, a Member of the House of Lords and co-chair of the All-Party Parliamentary Group on Artificial Intelligence, is staunchly critical of how the government is handling the matter.

He told WIPR that there's “no guidance at all from government—they just thought that everybody with competing interests could sort it all out.

“The IPO [UK Intellectual Property Office] has barely given us a steer as to whether or not they think that ingestion of copyrighted material by large language models [LLMs] is actually copyright infringement. So it's hardly surprising that the talks have broken down.”

A ‘bizarre experiment’

Reema Selhi, head of policy at the Design and Artists Copyright Society (DACS), was part of the working group tasked with devising the code.

Donelan’s announcement came as no surprise, she tells WIPR; her own experience of the roundtables was that progress had not been made.

For Selhi, this just shows that voluntary arrangements and quasi-interventions are not helpful. “I would have expected, for example, a proposal towards greater regulation.”

She acknowledges Donelan’s mention of transparency, adding that it was something everyone could agree on during a debate held on February 6.

“That was the first time we’d heard her say that, and it was the first time I think the constituents of these working groups really felt transparency had been acknowledged by the government as something that needs addressing.”

But for the government to become a mediator between different entities was a “bizarre experiment”.

“For the government to take this kind of quasi-mediation position, as opposed to looking at which policy areas it could put forward legislative proposals on—that’s where they've missed the trick. It was the wrong thing for the government to become a mediator.”

Exception clause U-turn

At least the government got one thing right, adds Clement-Jones.

“The one bit of credit they should get is that relevant ministers—George Freeman, then Viscount Camrose—have drawn back from trying to impose a copyright exception so that LLMs can take material on board,” he notes.

But “what the government should have absolutely laid out in the first instance is that this is copyright infringement, and that licensing is necessary.

“They should have then proposed a transparency clause, a bit like the one in the EU AI Act, saying that when LLM developers use copyrighted material for training, they should declare what material they have used. And then it's up to the parties to decide whether or not they can reach an agreement of licensing.”

However, he asks whether there was much good faith involved on the part of the LLM developers during the roundtable discussions. “It's not in their interests to come to an agreement. I'm afraid we're still in the 'move fast and break things' era.”

Charles Courtenay, the Earl of Devon, is a partner and barrister at Michelmores, as well as a Member of the House of Lords. He is more optimistic.

“I think it's fairly obvious that the negotiation wasn't taking place on common ground, [and that] licensing is probably required,” he tells WIPR. “But I understand they were quite helpful discussions, nonetheless…and I think there is progress in the industry around ideas of good conduct.”

And he believes that the roundtable was an appropriate way to begin.

Lords committee directive

Donelan’s announcement following the collapsed talks came just days after the House of Lords Communications and Digital Committee published its report, Large language models and generative AI, on February 2.

The committee received feedback from an impressive list of 41 witnesses during the consultation period, including academics, lawyers, government heads of departments, organisations, AI firms and media companies.

Those witnesses included big AI players such as OpenAI, Meta, IBM and Google.

The most talked-about of those, OpenAI (which has said that training LLMs without using copyrighted material is impossible), said in its evidence: “We respect the rights of content creators and owners, and look forward to continuing to work with them to expand their creative opportunities.

“Creative professionals around the world use ChatGPT as a part of their creative process…By democratising the capacity to create, AI tools will expand the quantity, diversity, and quality of creative works, in both the commercial and noncommercial spheres.”

Current framework ‘failing’

And yet, while the Lords committee acknowledged that LLMs “may offer immense value to society”, it said this “does not warrant the violation of copyright law or its underpinning principles.”

The committee largely came down in favour of rights owners, concluding that the government “has recently pivoted too far towards a narrow focus on high‑stakes AI safety,” expressing “even deeper concerns about the government’s commitment to fair play around copyright.”

It reported that “some tech firms are using copyrighted material without permission, reaping vast financial rewards”.

“The current legal framework is failing to ensure these outcomes occur and the government has a duty to act. It cannot sit on its hands for the next decade until sufficient case law has emerged.”

By spring this year, it said, the government must “prepare to resolve the dispute definitively, including legislative changes if necessary.”

The committee also said that creators must be “fully empowered to exercise their rights, whether on an opt‑in or opt‑out basis”, and developers “should make it clear whether their web crawlers are being used to acquire data for generative AI training or for other purposes.”

‘No obligation’ on AI companies to be open and honest

Chris Fotheringham, a solicitor in the commercial and intellectual property team at Ashfords, believes the situation is “very wishy-washy”.

“There's no hard and fast rule, which we need. Why would AI companies agree on how something's going to work, when they are there to commercialise the technology?

“Frankly, if no one's going to bring a claim against them, it's not in their interest. There's no obligation on them to be open and honest. The way it works at the moment is that [the onus is] on claimants to prove infringement.”

And while the cost for claimants can be prohibitive, he says, this is also a question of access to justice.

“Some would say, why would a writer or publisher bring a claim if they can use that time to actually publish more works? That's what they're giving up,” adds Fotheringham.

UK ‘behind the curve’

Speaking in particular from visual rights holders’ perspective, Selhi would like to see regulation akin to the transparency obligations in the EU AI Act considered and adopted.

“That would mean that AI companies have to declare what works have been used for training.”

Secondly, she suggests adopting the kinds of remuneration systems seen in the EU, such as blanket licensing, extended collective licensing schemes, and even levies. DACS, for instance, has campaigned for a private copy levy for several years via the Smart Fund.

“I think the UK is behind the curve [on this],” Selhi says. “We don't have, for example, a private copy levy in the UK, [or] legislation that supports that.”

A lot of European countries are considering how they would use their private copy levy as a way to get remuneration from AI companies, she adds.

“If we had a private copy levy in the UK, we could then create a blanket licence for AI.”

The ‘joy’ of common law

It is in the developers’ interests to license copyrighted material, Courtenay adds, “because the worst thing would be to build a business model on the back of unlicensed data, and then suddenly find that you've got an injunction preventing you from doing that.”

Indeed, he says that a licensing regime is necessary, noting that “there are enough collective licensing societies to do that”. For example, he cites PRS for Music and the Authors’ Licensing and Collecting Society.

“There's been lots of hand-wringing that AI is breaking our model for rights protection. Well it doesn't—I think our legal system here, and certainly in the US, is robust enough. It just takes too long, perhaps, for a satisfactory outcome.

“The joy of a common law is it can accommodate inventions and novel developments and novel applications, but it does take time.”

Guidance on appropriate conduct would be helpful, he indicates. “But I just don't think we need to throw our hands up in horror and say, help, this is all completely uncharted territory.

“Of course, copyright can adapt—it has adapted to all new technologies, but it will need the courts to rule upon it. And then there is this limbo period, where everyone thinks that they're right.”

This, he adds, will necessarily involve a break in the pace of development. “But I'd much rather be in this position than in a position where individual creators have no protection. So I do think this tension is necessary.

“Legislating off the cuff or stopping the development of LLMs entirely or otherwise are all too damaging.

“Sitting round a table and trying to negotiate is probably the best means of resolving something. And hopefully the government can grasp the nettle and come up with some decent approaches in the next couple of months.”

A ‘free pass’

Clement-Jones, who was formerly the chair of the House of Lords Select Committee on Artificial Intelligence, takes a harsher view.

He believes that LLM creators “have had a free pass; we can't let that continue because that is going to totally impoverish the creative industries, if authors, musicians and other creators and artists are not recognised for the work they create.

“What then happens is that you can basically summon up [a] painting in the style of David Hockney or whatever. And again, you'll set off down a trail where every artist and writer and musician can have their work ripped off.”

Instead, he suggests, the government “should be in agreement on what constitutes a proper code of practice. But you have to accept that in the first instance, developers have to ask for a licence. There's no way around this.”

And it doesn’t even need to be in the shape of legislation.

“A clear statement by the government and the UKIPO that in their view, ingestion of material by LLMs does constitute copyright infringement in the UK, would be immensely helpful.

“Don't forget that if you're an AI developer in the UK, you can get protection for AI-created work as long as there's some human element involved—unlike almost every other jurisdiction.

“So actually, developers have a better regime in the UK than almost anywhere else.”

‘A necessary evil’

With or without guidance, regulation or legislation, how can rights holders realistically monitor any infringement from data mining performed by LLMs?

Only some rights holders will be tech-savvy enough to use the newly available software that corrupts AI training data to prevent copying, suggests Fotheringham, who says the government should regulate the industry.

The question is: if we're going to use AI properly, to what degree do you allow LLMs to scrape data from copyrighted material?

“If we don't let that happen, then we don't adopt AI,” he says, suggesting that while this means that rights owners will lose out on some compensation, it’s a necessary evil.

“If we're going to innovate, and quickly—and if the government continues trying to collaborate like this, rather than take a hard stance—rights holders will suffer.”

Finally, with it being an election year in the UK, Fotheringham wonders “just how important it is to the government at the moment.”
