6 October 2023 | Features | Copyright Channel | William (Bill) Honaker

Pressing questions about AI and how to tackle them

There are three major questions that arise from the recent fascination with AI. What should be the focus of policies? Can AI-generated creations be protected? And will AI take my job?

On November 30, 2022, ChatGPT exploded onto the world stage. Pundits view it with awe or alarm: either an amazing new tool that will revolutionise the world, or a potential evil that will change everything for the worse.

Governments are grappling with regulations, businesses are developing policies, and people are worried it will take their jobs. But a basic problem in all this hysteria is defining the object of all this concern.

1950s origins

AI has been around longer than you may realise. Its origin is attributed to a paper written in 1950 by Alan Turing, titled “Computing Machinery and Intelligence”.

But the term ‘artificial intelligence’ was first coined by Stanford Emeritus Professor John McCarthy in 1955. He defined it as: “The science and engineering of making intelligent machines.”

After that, interest in AI wavered for a while. Its most recent resurgence can be attributed to IBM’s chess-playing supercomputer, Deep Blue, in 1996; and its question-answering machine, Watson, in 2011.

Today, AI is part of our everyday lives: from facial recognition technology to smart assistants to self-driving vehicles.

ChatGPT, a ‘generative AI’ tool, has fuelled the most recent debate. It is built on large language models (LLMs), which are trained on massive data sets to generate human-like content.

The platform is the fastest-growing consumer software application in history, with more than 180 million users to date.

What should be the focus of regulations and policies?

But focusing on AI without a good definition creates confusion and uncertainty. Definitions are important.

Eighteen years ago, when I started at Dickinson Wright, one of my partners, a former golf club president, asked about my golfing skills. I said, “On a good day I can shoot in the mid-70s.” He excitedly said he would get us a tee time. I clarified, “We’re talking about nine holes, right?” We never discussed golf again.

The US Copyright Office (USCO) has issued guidelines regarding AI-generated works. But there’s a big difference between ‘AI-generated’ and ‘generative AI’.

The problem is that AI is baked into a lot of digital products. For example, digital cameras have been using AI for facial recognition, red-eye correction, subject recognition, zoom and enhancement, and other functions for years.

A step too far

I believe that the current blanket prohibitions restricting the use of AI-generated works go too far. Many copyright applicants will have to declare that AI was used and identify the AI in question, and existing registrants will need to file supplemental applications or risk losing their copyrights.

For example, based on the Copyright Office’s definition, it appears that if a digital camera is used, the resulting images are subject to this requirement.

What most people are concerned about is generative AI, like ChatGPT. This means a program that ‘creates’ something that is normally the province of humans, such as inventions, images, and articles.

So, in answer to the first question: regulations and policies need to specify the particular AI involved.

Can AI-generated works be protected?

Physicist Stephen Thaler is credited with bringing this question to the fore. He has also muddied the real question: can creations that result from the use of AI be protected?

In 2019, Thaler created an AI computer named DABUS (Device for the Autonomous Bootstrapping of Unified Sentience).

DABUS was named as the sole inventor on two patent applications filed in several countries. One was for a food container; the other for an emergency LED alert light related to the container.

The United States Patent and Trademark Office (USPTO) refused consideration of Thaler’s inventions in a decision on petition because he didn’t name a human as an inventor.

Thaler appealed this decision to the US District Court for the Eastern District of Virginia and the Court of Appeals for the Federal Circuit, but both upheld the USPTO’s decision.

He then filed a petition for a writ of certiorari, which was rejected by the US Supreme Court.

Thaler also attempted to copyright an AI-generated image titled A Recent Entrance to Paradise. The USCO refused registration and issued an opinion letter stating its reasons, which can be summarised as follows: copyright registration is unavailable for AI-generated works in the United States.

In June 2022, Thaler filed a complaint in the US District Court in Washington, DC, asking the court to order the Copyright Office to register his work. Both parties moved for summary judgment, and the court upheld the Copyright Office’s refusal.

Shift of focus

It's essential to shift the focus from AI as an inventor to its role as a powerful tool in the creation process.

AI, as defined by IBM, “combines computer science and robust datasets to enable problem-solving. It encompasses sub-fields such as machine learning and deep learning, which utilise algorithms to create expert systems that make predictions or classifications based on input data.”

In essence, AI is another tool that inventors can use during the invention process. Consider the example of engineers developing a new car part. They use computer simulations to test and refine the design, identifying weaknesses before prototyping.

The input from these simulations is widely accepted and integrated into the invention process without questioning the inventor's role. Similarly, AI can be viewed as an advanced computer simulation tool that aids inventors in finding innovative solutions.

During the USPTO East Coast Listening Session on AI inventorship, Corey Salsberg of Novartis shared an example involving the use of an AI tool called JAEGER designed by the company.

JAEGER was trained on a library of 21,000 molecules tested for anti-malarial properties. It generated 282 virtual molecules, of which two showed promising results.

However, it is important to note that this process involved human input throughout. Humans created the initial database, reviewed the AI-generated results, selected the most promising options, and conducted further analysis.

Therefore, the outcome can be seen as a collaborative effort between AI and human inventors.

AI as assistant

It's crucial to recognise that AI serves as a tool that assists people in the invention process. It still requires human input to supply relevant data, provide instructions on how to handle that data, and evaluate the results.

If an AI computer were to produce inventions autonomously, without human interaction, those inventions would not be patentable unless Congress decided to intervene and establish relevant laws. But if AI systems serve as aids to humans, the human contribution could be protected by patents.

This consideration of AI as a tool in the creative process was recognised by the USCO’s guidelines, which state:

“This policy does not mean that technological tools cannot be part of the creative process. Authors have long used such tools to create their works, or to recast, transform, or adapt their expressive authorship…In each case, what matters is the extent to which the human had creative control over the work’s expression and ‘actually formed’ the traditional elements of authorship.”

The Copyright Office will register the portion of a work that results from the selection, arrangement, or modification of material, provided those contributions meet the standard for copyright protection.

In these cases, copyright would protect the human-authored aspects of the work but exclude the AI-generated material.

This was the result in Zarya of the Dawn, a comic book whose artwork was generated by AI but selected and arranged by the author. The text was the author’s own.

The Copyright Office agreed to register the arrangement and the text but refused registration of the AI-generated artwork.

Will AI take my job?

As AI exists today, it won’t take jobs. It will be a great tool for many people to improve their jobs, but it won’t replace them. This is particularly true with ChatGPT, because it’s not always accurate and there are problems on the horizon.

ChatGPT hallucinates. An example is the recent case involving New York lawyers who used ChatGPT to write their brief.

They filed a brief that cited cases supporting their position, but the cases didn’t exist; the AI had made them up. The court fined the lawyers $5,000 for filing false case citations.

What intrigues me even more about this prophecy is the recent revelation that these machines can go MAD. In this case, MAD is an acronym for ‘Model Autophagy Disorder’, which occurs when a generative AI model consumes its own generated output, errors included, as part of its training data.

Pundits are theorising that within the next few years, almost 90% of the content on the internet will be written by generative AI models. These models will increasingly be consuming their own content, to the point of believing their own garbage and going MAD in the process. With that in mind, the more I learn, the less concerned I am.

The takeaway

ChatGPT and similar AI programs are fascinating tools. They’re going to continue to develop and be a part of our lives.

In developing policies, care must be taken to properly define what’s being regulated, to avoid unnecessary restrictions on the use of AI and on the protection of its results. Although AI won’t take jobs, it will be an advantage to those who learn to use it and recognise its limitations.

It is like the advent of the internet: in the beginning you could do without it, but today it is an essential business tool.

William (Bill) Honaker is a member of Dickinson Wright and writes the blog ‘The IP Guy’. He can be contacted at: whonaker@dickinsonwright.com


