18 August 2023 | Features | Copyright | Matt Savare and Bryan Sterba

How to handle the ‘wild west’ of generative AI: part 2

Courts have yet to articulate how copyright authorship standards should adapt to works partially created with generative artificial intelligence (GAI). Yet that has not stopped technologists from testing the legal boundaries.

Litigation is pending between physicist Stephen Thaler and the US Register of Copyrights over whether he can claim copyright in a GAI-generated work.

In his registration application, Thaler stated that the machine was the author, but purported to transfer ownership to himself under the work made for hire doctrine.

Notably, Thaler does not contend that he is the author of the artwork. Rather, he argues for the machine itself to be deemed the author by the US Copyright Office—in direct contravention of the office’s policy.

According to the registration guidance, Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16190 (2023), “[t]o qualify as a work of ‘authorship’, a work must be created by a human being”. It further outlines how the office “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author”.

Authorship standards

Meanwhile, artist Kris Kashtanova argues that she is the author of images initially generated by GAI. In September 2022, Kashtanova sought registration for a comic for which she generated images from a GAI tool. The Copyright Office, while allowing her copyright in the comic as a compilation, refused her copyright to the individual images.

Despite Kashtanova’s somewhat laborious involvement in selecting and editing the images (she tested “hundreds or thousands of descriptive prompts” in Midjourney before landing on “as perfect a rendition of her vision as possible”), the office decided her efforts did not meet copyright law’s authorship standards.

It rested its position on legislative history, language in the US Constitution, and case law such as Burrow-Giles Lithographic v Sarony (1884), which suggest that Kashtanova’s involvement in ideating and executing the images falls short of authorship. See Burrow-Giles, 111 U.S. 53, 58, 61 (1884), holding that an ‘author’ is the one “who has actually formed the picture”, and that a copyrightable work is “representative[] of [the author’s] original intellectual conceptions”.

The office’s positions on authorship are supported in existing case law, leading copyright treatises, and legislative history. However, all could change if the US Congress decides to amend the US Copyright Act to address artificial intelligence (AI) and authorship.

AI outputs—infringement, transformative, or derivative?

The Andersen et al v Stability AI et al (2023) class action also alleged copyright infringement in the works generated by defendants’ GAI tools. While this allegation may not be litigated due to pleading deficiencies, both sides’ arguments are worth noting.

The plaintiffs’ first argument for copyright infringement in the GAI-produced images is likely to fail, because they cite no particular AI-generated work bearing “substantial similarity” to their own works—a fundamental component of any copyright infringement claim (together with showing the defendant had “access” to such copyrighted work).

Their alternative argument, however, is more interesting: that the GAI-generated images “are based entirely on the training Images and [therefore] are derivative works,” thereby infringing their exclusive rights to create derivative works. But that too most likely fails.

If the plaintiffs’ characterisation of GAI as a “collage tool” were true, GAI outputs would often risk infringing artists’ reproduction and derivative rights. However, this analogy contradicts most reputable accounts of how GAI functions—ie, through learning what features correspond to certain inputs rather than memorising aspects of any particular work.

Substantial similarities

Even if a particular image is used to train the algorithm, a similar output is not per se an infringing, derivative work. For example, if an algorithm was trained on an artist’s depiction of a cat, a cat produced by that algorithm would not necessarily share ‘substantial similarity’ with the artist’s cat, such that it is an infringing copy or derivative work.

Rather, the GAI may have learned what public-domain qualities a cat has from looking at millions of other cats, such that it was able to produce a cat autonomously—much like if I asked you to draw a cartoon cat right now.

And even if it did somehow ‘mess up’ and directly copy a part of the artist’s original cat, if that part is de minimis, a court could still find no infringement. So, while a GAI developer may have had ‘access’ to copyrightable materials in the training process, a plaintiff may have difficulty showing ‘substantial similarity’ between the GAI output and copyrightable elements of the plaintiff’s work.

Publicity law

A concern shared by most participants in the Senate IP subcommittee’s recent hearing, and one that does not cleanly fit within the current copyright regime, is that of deepfakes: the ability to copy the style or likeness of a particular artist. Copyright law does not traditionally protect ‘style’ because its constituent elements tend to lie in the public domain—eg, nobody can own lightweight pen strokes and a proclivity for drawing cartoon animals.

Right of publicity laws are promulgated at the state level and can afford protection to those whose likeness or characteristic artistic style is co-opted. The Copyright Office has suggested that Congress implement a “narrowly-tailored” federal right of publicity law, in light of improvement in deepfake technology and issues related to federal copyright preemption. Congress, however, has not yet addressed federal publicity rights.

Despite the overwhelming amount of media attention GAI has received in the last year, we are still in the very early stages of this technological revolution. As we witness with the emergence of any new technology (eg, the Internet; virtual, augmented, and mixed reality; blockchain), the law struggles to keep pace with such transformations.

The same is true of GAI, particularly with respect to privacy, autonomy, and IP. New statutes, rules, and regulations (and interpretations of them) are being created every day. It will be interesting to see how existing IP laws evolve to address this profound and pervasive transformation in the ways in which content is created, and by whom—or what.

Matt Savare is a partner at Lowenstein Sandler, and can be contacted at: msavare@lowenstein.com

Bryan Sterba is counsel at Lowenstein Sandler, and can be contacted at: bsterba@lowenstein.com

