7 August 2019 | Patents | Sarah Morgan

AI and ethics: Controlling the AI beast

Here at WIPR, we’ve delved into the various IP aspects of artificial intelligence (AI), from robots owning copyright to patenting in the field, but until now, we’ve avoided one big question that is woven into every facet of the new technology: ethics.

Below, we investigate some of the ethics puzzles guiding and, in some cases, overshadowing AI’s development, including discriminatory data, accountability, and whether we need ethical guidelines at all.

A threat to democracy?

“We believe that Google should not be in the business of war,” read a letter to CEO Sundar Pichai signed by thousands of Google employees in April last year, demanding that Google end its work with the US Department of Defense on Project Maven, an initiative to use AI to improve the targeting and surveillance capabilities of drones on the battlefield.

“We’re concerned that AI could be the most unprecedentedly powerful tool for undemocratic nations/states to craft a surveillance society.” - Alice Lee, University of Hong Kong

Two months later, Google confirmed it wouldn’t renew its work on the contentious project.

The technology company is not alone. A multitude of players, including companies, governments and institutions, are grappling with how to develop and embrace AI in an ethical manner.

“We’re concerned that AI could be the most unprecedentedly powerful tool for undemocratic nations/states to craft a surveillance society, putting every move and possible move of their citizens under the omnipotent control of the governments,” warns Alice Lee, associate dean and associate professor of law at the University of Hong Kong.

Not even George Orwell’s classic dystopian novel “Nineteen Eighty-Four” fully imagined this nightmarish future, claims Lawrence Lau, adjunct associate professor of law, also at the University of Hong Kong.

Lau adds: “We are pessimistic at this point of time because the control of AI technology transfer may help to slow down the tyrannies, but the internal development of AI technology for tyrannical benefits cannot simply be halted. Pandora’s Box is not just open—the content within is ever expanding.”

Guidelines: what’s the point?

Do we need ethical guidelines or should creators be free to develop the technology without rules that smother innovation? If we do need guidelines, who should write them, and who should enforce them?

The AI space is already littered with policies—Rob McCargow, director of AI at PwC, says there were more than 70 rulebooks at the last count.

This is perhaps reflective of the rapid pace at which the technology is developing. Since AI’s emergence in the 1950s, inventors have filed applications for nearly 340,000 AI-related inventions, and researchers have produced more than 1.6 million scientific publications. More than half of the identified inventions have been published since 2013.

In 2019 alone, China, the Organisation for Economic Co-operation and Development, and the European Commission have each released their own guiding principles. That’s without mentioning the multitude of technology players that have developed their own versions.

Microsoft is one such company—its ethics board AETHER (AI, Ethics and Effects in Engineering and Research) is tasked with “establishing principles, deliberating and advising”.

Jon Machtynger, a Microsoft Cloud solution architect, says: “I don’t see the guidelines as a set of rules forcing us to behave in certain ways. Rather, they’re a set of considerations that help us focus on whether we’re doing, and are seen to be doing, the right thing in a transparent manner.”

But for Gwilym Roberts, chairman of Kilburn & Strode in London, it seems impossible to codify ethical guidelines for AI. “We have not been able to do it in other areas, so why should it be possible now?” he asks.

Even if guidelines can be codified, Christopher Markou, lecturer at the University of Cambridge, is wary that the number of frameworks will keep multiplying without getting any closer to agreement.

“There are a lot of companies, interest groups and other entities designing various ethical frameworks and codes. While these efforts usually come from the right place, it’s become hard to distinguish between them in terms of content, and impossible to ignore that commercial interests are dictating that ‘ethics’ be defined in ways that don’t harm their bottom line,” he says.

"I don’t see the guidelines as a set of rules forcing us to behave in certain ways." - Jon Machtynger, Microsoft

Machtynger is more optimistic, believing that while the different guidelines reflect the groups who wrote them, they tend to include many similar things.

Microsoft’s guidelines include the principles of fairness, reliability, privacy, inclusiveness, transparency, and accountability.

“You could argue that these are defined from a Western frame of reference. On the other hand, the ‘Beijing AI Principles’ (China’s set of principles) speak about doing good for humanity, being responsible, ethical, and controlling for risks. They describe harmony and cooperation, adaption and moderation.

“There are similarities, but there does seem to be a subtle Eastern frame of reference,” he adds.

The University of Hong Kong professors suggest that an ethical code should cover fairness, transparency, data security, privacy protection, and openness to public scrutiny. “Doesn’t the thriving AI strength exhibited by the state-owned enterprises of China alarm the Chinese people and the rest of the world?” they ask.

Governments are now having to grapple with when to set ethical policy, says Taylor Reynolds, technology policy director of the Massachusetts Institute of Technology’s (MIT) Internet Policy Research Initiative.

“They don’t want to stifle innovation by setting rules and regulations too early, but they also don’t want people to be harmed by setting them too late,” he says.

Innovation in the AI field is booming, with the US and China leading the pack. Combined with Japan, these three countries account for 78% of total patent filings in the AI space.

Demonstrating China’s dominance, around 100 of the top 500 patent owners ranked by the World Intellectual Property Organization (WIPO) are Chinese institutions, and 17 of the top 20 academic players are in China. The US and the Republic of Korea each have around 20 academic players in the top 500, while Japan and Europe have four each.

Organisations in China (341,833 scientific publications) and the US (327,880) have each published more than 300,000 papers, over three times as many as the third-ranked UK (96,359).

It’s important to remember that ethics is not the same as the law, says Markou, who believes that the tenor of the ethics debate in recent years has “started to elide the distinction between law and ethics and has allowed ‘ethics’ to be weaponised as a proxy for what laws should be”.

Roberts agrees, cautioning that lawmakers should be extremely careful before trespassing into the realm of ethics.

Markou adds: “Without a way to enforce compliance or effect behaviour change it’s all a bit ornamental and distracts from the more substantive and technical discussions that underpin effective law-making.”

Making a monster?

Humans exhibit a range of prejudices, and AI systems appear to absorb those biases from the people and data that shape them. How do we ensure that AI decision-making evolves to be non-discriminatory?

McCargow says: “The issue around bias is something that’s coming to public prominence on a regular basis. As humans are subject to unconscious bias, almost any dataset that is used to train AI systems would have bias residing in it.”

Evidence of bias is rampant: take the use of facial recognition technology and the disparities in correct identification. A 2018 study from MIT and University of Toronto researchers found that Amazon’s facial analysis technology tends to mistake women, especially those with dark skin, for men.
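The mechanics behind such disparities are straightforward to demonstrate. Below is a minimal, hypothetical sketch in Python (using synthetic data and scikit-learn, not any real facial recognition system): a classifier trained on data dominated by one group scores well on that group and poorly on the under-represented one.

```python
# Hypothetical illustration: training data skewed towards one group
# produces a model whose errors fall disproportionately on the other.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-feature samples; `shift` moves the group's
    distribution and its true class boundary."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group separately.
for name, shift in [("group A (majority)", 0.0), ("group B (minority)", 1.5)]:
    Xt, yt = make_group(2000, shift)
    print(f"{name}: accuracy {model.score(Xt, yt):.2f}")
```

On a typical run, the model scores well above 90% on the majority group and close to coin-flip accuracy on the minority group, even though it is a single model applied uniformly.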

In the life sciences field, bias crops up in mammogram data for breast cancer. “There have been models that were trained on a majority of white women, and these models performed poorly when applied to women of different skin tones and ethnic backgrounds,” says Reynolds.

Those in the AI field are well aware of the potentially life-threatening impact of these biases: black women have been shown to be 42% more likely than their white peers to die from breast cancer. Such findings led MIT researchers to devise new models that address these concerns and are equally accurate for white and black women.

Despite these ongoing efforts, eliminating bias is not an easy problem to solve and, according to Roberts, AI doesn’t just entrench bias, it can amplify it.

“The problems are going to arise when AI has to deal with human elements such as witnesses, evidence, and social opinion. For now, it does not seem that the system is ready in areas where human whim or weakness prevail,” he says.

Markou believes that we can’t expect AI to have some magically transformative effect on society.

“Pretending that you’re somehow going to subvert the problem of discrimination by using a sophisticated and ‘un-biased’ algorithm only swaps one problem for a whole host of others,” he warns.

Markou cites another case of bias: the recidivism algorithm Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). Used across a number of US states, the software predicts a defendant’s risk of committing another crime.

In 2016, non-profit ProPublica analysed COMPAS assessments for more than 7,000 arrestees in Broward County, Florida. The investigation claimed that the algorithm was biased against African Americans.
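ProPublica’s central finding concerned error rates rather than overall accuracy, and the distinction is worth making concrete. The sketch below uses hypothetical confusion-matrix counts (not ProPublica’s actual figures) to show how two groups can see identical overall accuracy while one suffers twice the false positive rate, that is, twice the chance of being wrongly flagged as high risk.

```python
# Hypothetical counts, not ProPublica's data: equal accuracy can
# coexist with sharply unequal error rates between groups.

def rates(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)   # non-reoffenders wrongly flagged high risk
    fnr = fn / (fn + tp)   # reoffenders wrongly rated low risk
    return accuracy, fpr, fnr

groups = {
    "group A": (300, 200, 300, 200),   # (tp, fp, tn, fn)
    "group B": (200, 100, 400, 300),
}
for name, counts in groups.items():
    acc, fpr, fnr = rates(*counts)
    print(f"{name}: accuracy {acc:.0%}, "
          f"false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

Both groups see 60% accuracy, but group A’s false positive rate is 40% against group B’s 20%, while the false negative rates point the other way. A vendor reporting only aggregate accuracy would never surface the gap.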

“The risk-averse mentality that has dominated criminal justice means that quasi-functional systems like COMPAS are elevated to prominence—and granted generous trade secret protections inoculating them from scrutiny—because states have invested (often, heavily) in the belief that they will reduce crime, and help judges identify high risk offenders,” adds Markou.

Bots hold court

Going further, are we ready for robot judges? Estonia certainly thinks so—in March, it was revealed that the Estonian Ministry of Justice had asked its chief data officer to design a “robot judge” that could adjudicate small claims disputes of less than €7,000 (about $8,000).

This may free up judges to tackle the more difficult problems, but it raises even more questions, particularly on transparency. One of the greatest challenges in issuing AI judgments will not be expressing the verdict, but explaining the reasoning for the decision, warns Roberts.

Markou says: “Maybe there’s a case for them with ombudsman decisions, or no-fault divorces, but it becomes much murkier when you open up the possibility of computational systems passing sentences in criminal, family, or employment courts.”

Including as many representative samples in the training data as possible can help combat bias, explains Machtynger.
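One simple version of that idea is to re-balance the training set so under-represented groups carry equal weight. The sketch below (hypothetical, in Python) oversamples minority groups with replacement; re-weighting samples or collecting more data are common alternatives.

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample_minorities(X, y, group):
    """Duplicate rows from under-represented groups (sampling with
    replacement) until every group matches the largest one."""
    idx = []
    labels, counts = np.unique(group, return_counts=True)
    target = counts.max()
    for g, n in zip(labels, counts):
        members = np.flatnonzero(group == g)
        idx.extend(members)
        if n < target:
            idx.extend(rng.choice(members, size=target - n, replace=True))
    idx = np.array(idx)
    return X[idx], y[idx], group[idx]
```

Rebalancing treats a symptom rather than the cause: if the minority-group samples are themselves mislabelled or unrepresentative, duplicating them duplicates the problem, which is why Machtynger pairs it with transparency and accountability.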

“These questions existed long before AI systems were widely available, but if used incorrectly, AI can magnify any problems that exist in human-based systems. Increasing the transparency of the underlying mechanics of AI helps address this, but so does increasing the accountability of organisations when decisions made by an AI system result in consequential decisions about people,” he adds.

How organisations should be held accountable is a question for another day, but for now, PwC’s McCargow sums up the technology’s uneasy relationship with ethics: “Even if AI continues to develop at this rate, unless you take the legislative, regulatory and level of public trust developments on the same speed of journey, you’re never going to unleash the maximum power of this technology.”

Accountability is key

In the IP sector, accountability is fundamental. Anthony Brennand, director of innovation at IP technology provider CPA Global, explains that in the IP industry, everything you do must be fully auditable.

“You can use AI technology for something such as auto docketing to create a deadline or submit a payment for a trademark renewal. If you miss those deadlines, you are required to have a full audit,” he says.
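A minimal sketch of that requirement, with hypothetical names throughout (this is not a description of CPA Global’s actual implementation), is an append-only audit log in which every automated action records what the system did, what data it relied on, and whether a human has signed off:

```python
# Hypothetical audit trail for automated docketing actions.
import json, hashlib, datetime

AUDIT_LOG = "docket_audit.jsonl"   # append-only log file (illustrative)

def record_action(case_id, action, model_version, inputs, outcome):
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "case_id": case_id,
        "action": action,              # e.g. "set_renewal_deadline"
        "model_version": model_version,
        "inputs": inputs,              # the data the system relied on
        "outcome": outcome,
        "reviewed_by_human": False,    # flipped once a person signs off
    }
    # Hash the entry so after-the-fact tampering is detectable.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_action(
    case_id="TM-12345",
    action="set_renewal_deadline",
    model_version="docket-ai-0.3",
    inputs={"registration_date": "2015-08-07", "jurisdiction": "EUIPO"},
    outcome={"deadline": "2025-08-07"},
)
```

If a deadline is missed, the log can be replayed to show exactly which automated decision went wrong and whether a person had reviewed it.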

To meet this need, CPA Global has built transparency into its system. While the service provider originally envisioned using AI to create a completely automated system, CPA Global realised people need to review the data, partly because of the importance of transparency and partly because the market demands it.

Brennand adds: “Software providers will be under more pressure to take on additional accountability as more of the process becomes automated. There will be a demand from the market that unless you can provide some level of accountability, they won’t take the product.”

For Jayne Durden, senior vice president of strategy–law firms at CPA Global, it’s important for law firms to realise that AI offers a transformational change and not to overlook this potential by being overly cautious.

“Because it’s a new technology, lawyers are stuck in a loop that they have a duty to supervise the tools. They are taking time to understand the technology and questioning how much oversight they need over AI tools. This is not sustainable,” she warns.


