25 March 2021 | Patents | Susan Keston

‘Guidance would be incredibly helpful’: a view on UK AI

A UK Intellectual Property Office (IPO) consultation on AI and IP has drawn consensus around some issues and divergence on others. The UK government published its response on 23 March, following a request for submissions on the vexing issue carried out between September and November 2020. HGF partner and AI expert Susan Keston offered her take on the summary of responses, and on what the UK may do next. You can read the consultation outcome here.

Comment: Susan Keston, partner at HGF

To my mind, the transformative power that AI will give human beings is beyond question. Google’s DeepMind has already demonstrated that AI can solve fiendishly complex protein-folding problems, and we are just beginning to see how AI can be employed to accelerate drug discovery and automate medical image analysis.

If AI is competing with human creators to invent, then society still benefits from the increased pace of innovation. The ease with which AI may solve technical problems worthy of patent protection does, however, raise the question of whether the legal test for inventive step is too easy a hurdle for a trained AI system.

Even so, in such cases it is possible to credit inventorship to the person who trained the AI system that generated the invention. Many AI-created inventions will be physical products whose designs have been created by a trained AI model based on certain constraints.

For example, the bristle design of a cross-action toothbrush was AI-generated, as have been some unusually shaped antenna designs, but these could be claimed in a patent as physical products without reference to the AI system that created them.

Challenges for AI implementers

A subset of open-source software proponents has long argued that software patents can be a barrier to innovation by SMEs, but the reality is that open-source software and patent protection have worked very successfully in parallel in protecting and encouraging innovation for profit.

My view is that patents are very much available for AI innovation, and should continue to be available for AI innovation that goes above and beyond the foreseeable application of AI in a particular technical field. Implementers of AI face the challenge of how best to protect their investment in AI research and development, but the answer lies in using contract law, trade secrets and patents in an intelligent combination.

At present, contract law is the best way to protect training data, which can be highly valuable in itself. The requirements for disclosure, in terms of revealing specifics of training data or trained AI system algorithm parameters, will become clearer in the next few years as the case law emerges.

Patent attorneys are accustomed to making judgement calls on including enough information in the patent to allow a skilled person to work the invention and yet being wary of disclosing commercially sensitive details. Filing large amounts of information in repositories in support of an application seems unnecessary and undesirable to me.

Alignment with EPO

Since many patents are drafted for international prosecution, international harmonisation of patentability in relation to AI software is indeed desirable. This includes closely aligning the IPO patent exclusion practice in relation to AI software with the European Patent Office (EPO).

Historically, the IPO has been perceived as more strict than the EPO in terms of implementing the exclusion to patenting computer programs as such, which meant that the EPO was a better prospect for borderline cases.

What would be incredibly helpful to patent applicants is explicit guidance from the EPO in terms of specific examples of AI innovations that are in principle patentable and unpatentable. The EPO supposedly has internal examples for examiners to use as guidance and the hope is that these will be published in future EPO guidelines. It would be good if the IPO did similarly.

Clarification by the IPO, through publication of enhanced guidelines, of how it applies patent exclusions to AI innovations will be very welcome. This is more crucial than ever given the diversity of technical sectors outside electronics to which AI is being applied commercially in an innovative way (eg, drug discovery, medical diagnosis and predictive maintenance of mechanical machinery).

Consequently, many in-house patent counsel with expertise outside electronics are facing the unwelcome challenge of having to navigate the complex landscape of software patenting when reviewing invention disclosures. This landscape presents considerable challenges to attorneys like myself who have been successfully protecting software with UK and European patents for decades.

“Coming of age” of AI

The fact that the UK government is even contemplating legislative change to broaden the inventorship criteria for AI-generated inventions is the first real hint we have had of a recognition that the law and practice relating to computer-implemented inventions in general is not entirely fit for purpose, and might need to evolve to adapt to the “coming of age” of AI.

Although proving infringement of software patents raises many considerations similar to those involved in proving infringement by a trained AI system, including the potential to implement the service remotely in the cloud via a remote interface, there are clear additional challenges. A trained AI system evolves over time: by its very nature it reasons and learns autonomously, with minimal human intervention. It could therefore be difficult to justify attributing an infringing act to the human being who owns or operates the trained AI system.

If an output of a trained AI system, such as the food container or flashlight designs created by DABUS, were to infringe a patent claim, there would likely have been no way for a human being to predict what the output of the trained AI system would be. In such circumstances, it would seem unfair to make the human being liable for infringement. The logic of software corresponding to an AI model is much more intractable than other types of software code, and its output is likely to depend strongly on the set of training data used. This raises the question of whether the human being who trained the AI system has any infringement liability for the output. The courts are likely to face a real challenge in disentangling the potential liabilities for infringement allegedly committed by a “black box” AI system.

Susan Keston is a partner at HGF. She can be contacted at: skeston@hgf.com
