Urgent Need to Address Ethics of Artificial Intelligence
University World News
3 July 2018, Issue No: 513

Students and researchers must be better prepared to deal with ethical issues in the use of big data, robotics, artificial intelligence and other technologies so that such research and applications are not used for nefarious purposes, including weapons of war, a conference of Pacific Rim university presidents heard last week in Taipei, Taiwan.

“We have to be very careful that artificial intelligence isn’t stained by a very unfortunate use of the technology to decide to kill people,” said Toby Walsh, professor of artificial intelligence (AI) at the University of New South Wales, Australia, speaking at the conference of the Association of Pacific Rim Universities (APRU).

The conference, entitled ‘Our digital future in a divided world’ and held on 24-26 June in Taipei, was attended by some 100 delegates from Asia and the Pacific, Australasia, and North and South America, 30 of them university presidents, rectors or vice-chancellors.

“One of the strong arguments around why we need to worry today about the potential use of AI in warfare is because we don’t know how to build machines that can deal with the ethical responsibility of who lives and who dies,” Walsh said.

“Although there are well-defined ethical principles for fighting war – international humanitarian law – we do not know how to write [software for] machines today to do that.”

“A high level of ethical principles cannot be integrated into algorithms,” said Pascale Fung, director of the Center for Artificial Intelligence Research at Hong Kong University of Science and Technology.

Responsibility over human lives is also a key issue in the development of self-driving cars. Understanding and managing the consequences of such technologies while they are still at the research stage is important, delegates said.

Tei-Wei Kuo, acting president of National Taiwan University, pointed to well-known accidents involving autonomous vehicles, which have sparked debate about who is responsible. “That kind of issue will keep coming out,” he told the conference. “There are a lot of issues to be resolved before we get AI on the road.”

But AI ethics involves people in many disciplines. “We have to work on our entire [research] ecosystem and see where to position our research,” he said.

Pressures on ethical use are building

Ethical research is not just an issue for professionals who wish to see their research put to good use. Powerful pressures are building from outside, including from students and industry.

The risks to universities were highlighted by recent calls to boycott Korea Advanced Institute of Science and Technology (KAIST), based in Daejeon, South Korea, over a perceived collaboration in research on autonomous weapons.

A petition organised by Walsh in April and signed by more than 50 AI researchers in more than 30 countries declared that the signatories would not collaborate with the university or host its professors because of its partnership with defence manufacturer Hanwha Systems to set up a research centre on national defence and artificial intelligence at the university.

The call to boycott had swift, worldwide repercussions and became “one of the centre points of the discussion all the following week at the United Nations on what we need to do around the governance of autonomous weapons”, Walsh noted.

KAIST President Sung-Chul Shin reiterated at the conference that his institution “has no intention to develop autonomous weapons; the centre focuses on using AI for smart aircraft training and tracking”.

“We should implement the highest of standards in education as well as research in universities for making a bright future, otherwise we will confront the dark side of a dystopian society caused by the unintended consequences of this kind of technology intruding in every dimension – one guy with a bad mind can destroy the whole world in the coming future,” Shin said.

KAIST’s Shin noted that a wider effort was needed by universities to maintain public trust. “What we at KAIST have learned from this is that we have to pay much more attention to ethics in AI research,” he told the conference, noting that his university had quickly put out a statement that AI should not harm humans.

“We want to lead on AI ethics in the future so we would like to become a collaborator with other leading institutions around the world,” Shin said. “Because it’s not just the effort of one university.”

KAIST already has a mandatory ethics course for all students; “now we will include the ethics of AI in our ethics course”, he said.

Pressure from students, civil society

Walsh warned that although KAIST had responded “quickly and appropriately” at the time, “that sort of conversation has not finished”, adding that universities “are not off the hook by a long way”.

At the time Walsh’s petition became public in April, Shin was quick to insist that KAIST research was not to be used for weapons development, prompting Walsh to cancel the boycott call.

However, the debacle highlighted that if universities are not well prepared to confront ethical issues pertaining to new technologies, the lines will be drawn by others in the international community, potentially forcing embarrassing reversals, wasted research resources and reputational damage to an institution.

“Engaging our students and early-career researchers in these discussions is really important,” said Kathy Belov, pro-vice chancellor for global engagement at the University of Sydney, Australia. Belov, a professor of comparative genomics, noted that in the fields of biomedicine and genetics, ethical principles are ingrained in research methods, and this needs to become the case for technologies such as artificial intelligence.

“Students have really strong views around all these ethical issues and we’ve noticed a lot of protests on campuses about collaboration with companies involved in development of weapons,” Belov said during a panel discussion. Those voices are getting louder on campus and “it is being driven more by our students than by our academics”, she said.

Civil society voices are likely to get louder as well. Last year the Ethics and Governance of Artificial Intelligence Fund, which launched in January 2017 with a US$27 million injection of funds from philanthropic funders, including eBay founder Pierre Omidyar, announced US$7.6 million in funding to bolster the voice of nine civil society organisations in the shaping of AI in the public interest.

Industry funding

Industry collaborations that involve funding university research in AI could also be affected by a growing backlash against military use.

Walsh pointed to a recent case at Google that he said should be a wake-up call for universities. More than 3,000 Google employees signed an open letter protesting against the company’s participation in the United States Pentagon-funded ‘Project Maven’, which interprets video imagery with the aim of improving the targeting of drone strikes, and demanded that the project be cancelled.

In the letter, addressed to Google CEO Sundar Pichai, they demanded that Google formulate “a clear policy” stating that neither the company nor its contractors will ever develop technology for warfare. “Google should not be in the business of war,” the letter said. A dozen Google employees resigned in protest before Google announced on 1 June that it would not renew its Project Maven contract when it expires next year.

Google also announced it is drafting its own military projects policy, which, according to reports, will include a ban on projects related to autonomous weaponry.

By taking a stand on a point of principle, Google has “raised the bar”, Walsh told the Taipei conference. Now “many universities and increasingly people are going to be asking the same sorts of questions about what your [university] researchers are doing and about how we are contributing to making society a better place”.

“Increasingly, we will be questioned and we will be asked to call to account, just as Google was here,” Walsh said. “This is an example of the sort of conversations we are going to see increasingly and the kind of conversations that you are going to have in your universities.”

“We have a huge responsibility to ensure that the technologies that we work on – many of us are funded from the public purse – are for the public good,” Walsh said. That includes ensuring technologies “are used to improve the quality of life for everyone and to make the planet a better place”.

Hot-button issue in Japan

Military use of AI is also a hot-button issue in Japan. Jiro Kokuryo, professor of policy management at Keio University, said individual universities in Japan have made statements on whether or not they will apply for research funds from the Japanese defence ministry.

“There has been increasing awareness that the social and ethical aspects of artificial intelligence have to be addressed,” said Kokuryo, who also heads the Human-Information Technology Ecosystem project under the Japan Science and Technology Agency. The project will look at the issue over the longer term and ensure that researchers and engineers in both universities and industry “are more competent to take control of the technology”, he said.

He has been involved in drawing up guidelines for engineers and researchers, mainly in the academic sphere, but these need to be extended to industry researchers, he told University World News.

Japan is investing heavily in automated cars, for example, where criminal responsibility for accidents is a major issue.