26 January 2023 | Features | Copyright | Muireann Bolger

Artistic licence? Stability AI v Getty Images

The past year has seen an unprecedented explosion of AI technology with the release of products such as DALL-E, the art tool Stable Diffusion, and ChatGPT.

But this emerging technology has also stoked controversy over the question of IP ownership and fair use/dealing, paving the way for two landmark cases on either side of the pond.

This month, Getty Images sued Stability AI, the London-based company behind the art tool Stable Diffusion, for allegedly using copyrighted images to train its model and creating images in artists’ styles without credit, compensation or permission.

In response to a request from WIPR, a spokesperson from Stability AI said: “Please know that we take these matters seriously. We are reviewing the documents and will respond accordingly."

A landmark in the UK

Notably, the action is the first major IP dispute in the UK concerning AI that has been trained on unlicensed creative works.

Meanwhile, Stability AI is facing another lawsuit in the US, after a trio of artists announced a class action against the platform for the alleged misuse of their unlicensed work (look out for part II for more).

The crux of the disputes is that generative AI learns from a large training dataset. From this data, it creates a model or set of principles, and it relies on that set of principles to output whatever a user requests.

For example, Stable Diffusion was trained on billions of publicly accessible images obtained from the internet via the ‘Common Crawl’ project.
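The train-once, generate-on-demand pattern described above can be illustrated with a deliberately simplified sketch. The dataset, labels and "model" below are toy stand-ins (nothing here reflects Stable Diffusion's actual architecture); the point is that the model retains only parameters derived from the training works, not the works themselves, which is precisely why the disputes turn on the act of training.

```python
from collections import defaultdict
from statistics import mean

# Toy training set: (caption keyword, brightness value) pairs standing in
# for billions of captioned web images.
training_data = [
    ("sunset", 0.9), ("sunset", 0.8),
    ("cave", 0.1), ("cave", 0.2),
]

def train(samples):
    """Distil the dataset into a compact set of learned parameters."""
    by_label = defaultdict(list)
    for label, value in samples:
        by_label[label].append(value)
    # The "model" is just a mean per keyword: the original works are not
    # stored, only statistics derived from them.
    return {label: mean(values) for label, values in by_label.items()}

def generate(model, prompt):
    """Produce an output from the learned parameters, not the originals."""
    return model.get(prompt)

model = train(training_data)
print(generate(model, "sunset"))  # output derived from the training works
```

A real diffusion model learns vastly more complex statistics, but the legal question is the same: the outputs are produced from parameters extracted from copyrighted inputs.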

John Collins, a consultant on the effects of disruptive technology and the founder of The Innovation Foundry, believes these models, and the disputes that have arisen around them, have highlighted a bigger issue: accountability.

He argues vehemently that the revolution in AI technologies will open a ‘Pandora’s box’ of IP problems.

“The vast swathes of data available for AI use without any recourse to asking and getting permission for its use from its owner brings into question the very nature of copyright, ownership, plagiarism and, at the heart, identification of the ownership and provenance of data,” he says.

In his view, the defendant’s key strategy in such cases will be to say: “The AI is doing what it does; I don't control it. I don't command it to do anything. I might ask it to write something because you've asked it to write something. But I have no influence over what it writes.”

The Facebook defence

Mark Nichols, senior associate at Potter Clarkson, agrees that the owners and users of an AI could feasibly argue that they simply put inputs into a black box and are not responsible for what comes out.

“The question of accountability for the output of an AI reflects the accountability of social media platforms for user generated content which they host. Such platforms have, historically, denied accountability (for anything from IP infringement to hate speech) on the basis that they merely offer a blank canvas,” says Nichols.

However, he emphasises one critical point in the Getty case that may differentiate it from similar, future lawsuits.

“While we have not yet seen pleadings, in all likelihood, Stable Diffusion has used Getty images in its training data set, and this is probably an infringement of copyright. This is evidenced because, unusually, the training dataset for Stable Diffusion is open, and some of the images output by the AI include analogues of Getty’s well-known watermark.

“At present there is no exception to infringement broad enough to cover such use. Getty’s position, however, is also unusual, in that not many rights owners will own so many works that a watermark (or other identification) will appear on an AI’s output.”

This means that most rights owners will not be able to rely on the arguments available to Getty, leaving them in uncharted territory when it comes to safeguarding their rights.

Collins concurs with this assessment, arguing that many rights owners face an impossible task.

“There is currently far too little attention paid to the IP wild west approach to using someone else’s images, writings, music etc. and rightful compensation for their use at the very least, even acknowledgement of its use in the first place,” argues Collins.

“In fact, this use of AI brings into question the very nature of what IP is and how it needs to be treated legally in the future—starting now.”

Wider AI debate

What’s more, these questions come amid heated debate over the IP issues created by AI technology, and rumblings over whether to give AI developers greater access to unlicensed works.

Most notably, the UK Government’s June 2022 response to its consultation on Artificial Intelligence and Intellectual Property put forward plans “to introduce a new copyright and database exception which allows [text and data mining] for any purpose”.

As Sarah Chittock, associate at Marks & Clerk, observes, the proposal was recently rejected by the House of Lords, which branded it “misguided” and warned that the changes “take insufficient account of the potential harm to the creative industries”.

She notes, however, that as these AI image generators will be competing against artists and creators for work going forwards, “it will be increasingly important to establish whether the businesses behind AI generators must pay for use of the original creative works that their AI has learnt from.”

Both Collins and Nichols agree that it is crucial that better legislation around AI emerges.

“The important question, which the government must grapple with, is how favourable exceptions to infringement should be to the AI industry,” says Nichols.

“On the one hand, this is an industry which is booming and will continue to boom, and governments will want to attract AI startups. On the other hand, the creative industries are, and have been for many years, hugely important to countries around the world.”

Generative AI will, he forecasts, “compete directly with human-generated creative content, and governments will need to ensure that any boom in AI does not drive creators out of business”.

He further predicts that as technologies become more sophisticated, AI will be able to be trained from smaller datasets, making licensed training data much more feasible.

But then, he concludes wryly, “the big question will become whether to afford the output from an AI the same protection as something created by a human.”

Read early next week: Part II of AI and art—the US perspective.


