21 July 2023 | Copyright | Muireann Bolger

'An uphill battle': Artists fail to assert case against AI firms

Dispute hinges on whether the output from an AI infringes as a derivative work | Stability AI says suit lacks merit, Midjourney argues that generative outputs do not bear similarity to artists’ works | Host of defects identified in original complaint.

In a boon for generative artificial intelligence (AI) developers, a US judge has cast a sceptical eye over copyright infringement claims brought by a trio of artists.

The much publicised case emerged when Sarah Andersen, Kelly McKernan and Karla Ortiz filed a class action suit at the US District Court for the Northern District of California against Midjourney, Stability AI, and DeviantArt in January.

In the complaint, the artists alleged that the developers’ AI models were trained on billions of images and infringed artistic works on a gargantuan scale.

At the time, Stability AI told WIPR that the suit was devoid of merit and demonstrated “a flawed vision of the purpose of generative AI”.

In a counter motion filed in April at a federal court in California, Stability AI insisted that its AI tool Stable Diffusion was trained on billions of images that were publicly available on the internet, and that training a model “does not mean copying or memorising images for later distribution”.

The hearings in the case took place in California on July 17, as the AI arena is beset by a slew of copyright claims, with new lawsuits filed against ChatGPT developer OpenAI and Meta this month alone.

This week also saw novelists Margaret Atwood and Viet Thanh Nguyen join 8,000 authors in signing an open letter asking that permission be obtained and compensation given when a writer’s work is used by AI.

Meanwhile, the US Federal Trade Commission has launched an investigation that may compel OpenAI to disclose its methodology for developing ChatGPT and the data sources used to build its AI systems.

Flawed arguments

But fortune seemed to favour AI developers when Judge William Orrick delivered a tentative ruling on Wednesday, dismissing nearly every claim in the artists’ class action lawsuit due to the complaint’s flawed arguments and insufficient details.

However, he did allow the artists leave to amend their complaint to address some of the defects he identified.

According to Aaron Moss, chair of the litigation department at Greenberg Glusker and founder of Copyright Lately, Orrick’s stance presents critical takeaways for those pursuing cases involving generative AI—namely that the same old rules still apply when challenging this revolutionary technology.

“The fact that Judge Orrick was willing to grant leave to amend suggests that he believes that at least some of the plaintiffs’ claims may be plausible,” he told WIPR.

“At the same time, his approach to the motion suggests that he is not going to overlook well-established copyright standards in deciding this case—notwithstanding the fact that it involves new technology.”

Outputs and similarity

The dispute hinges on whether the output from an AI infringes as a derivative work.

In a marked setback for the plaintiffs, Orrick homed in on the implausibility of the copyright claims over output images, given that the suit failed to allege substantial similarity between the artists’ works and those outputs.

During the proceedings, Midjourney's counsel argued that, under Ninth Circuit law, a substantial similarity of output is required in order to properly allege an infringing derivative work.

Orrick seemed inclined to agree, remarking that: “I don't think the claim regarding output images is plausible at the moment, because there's no substantial similarity.”

The hearings attracted some lively debate on social media platforms, with Mark Humphrey, partner at Mitchell Silberberg & Knupp, noting on LinkedIn that the artists have been left with an “uphill battle on some of these issues”, as he cast doubt on how many claims would eventually pass muster.

“Ultimately it will be interesting to see how this case develops, and to what extent the plaintiffs’ claims can survive the pleadings stage,” he wrote.

Essentially, the artists’ main challenge now lies in proving that their works were actually used by the AI tool.

But they argue that they are unable to determine whether their work was actually used without reviewing the copyright management information (CMI)—which, incidentally, they accuse the developers of removing.

The plaintiffs also fell short of differentiating between the defendants and of alleging what role each played in the allegedly infringing conduct.

Notably, Judge Orrick seemed unconvinced by the plaintiffs’ claim that the Stable Diffusion model incorporates copies of their works, given that it was trained on more than five billion images.

One of the few claims to evade the judge’s censure was Andersen’s direct infringement claim against Stability AI: Orrick found that the artist—a registered copyright holder—had likely asserted a legitimate claim of direct infringement against Stability AI for copying her work at the ‘input’ stage, when the training set was created.

A second chance

However, he found that it was unlikely that such claims could plausibly extend to the other defendants, who used Stability's model after it was trained.

The plaintiffs' claims for secondary liability against Stability AI also drew unfavourable scrutiny as the judge pointed out that it remained unclear whether the developer had any control over the allegedly infringing conduct of the other defendants.

Among the other identified defects were the plaintiffs’ failure to satisfactorily identify the CMI that they claim the defendants’ tools had stripped or altered, as well as imprecise pleading regarding the right of publicity claim.

In response to Orrick’s criticism, the artists agreed that they would provide more specific details regarding their allegations in an amended complaint.

But Orrick queried whether the plaintiffs would, ultimately, be able to provide more facts about the way that the Stability AI source code operates—given that it is publicly available and, theoretically, the plaintiffs could have reviewed it before filing the initial complaint.

It wasn’t a complete defeat for the artists, however: while the AI developers argued that certain claims, or portions of the claims, should be dismissed with prejudice, the judge rejected this request.

“I've never really allowed one side to make arguments without the other side responding to it,” he said before later concluding: “I’ll get an order out when I get an order out.”

While judgment is pending, Orrick’s findings could finally provide some answers to one pivotal question: does an AI output infringe, and, critically, how can this be determined?
