Regulating AI
Background

Artificial intelligence (AI) has reached a stage of maturity and extensive application across supply chains and manufacturing, automation, public governance, media and entertainment. While industries and societies are quick to take up AI to harness its benefits and opportunities, many governments are still catching up in developing responsible and appropriate regulatory frameworks to prevent the immense harm that mismanaged AI could cause. Amid this active shaping process, the European Commission unveiled its draft AI Act (AIA) in April 2021, and the discussion and law-making process that seeks to establish the key agenda and practices of AI regulation continues in the European Parliament in 2022. Over the past two to three years, a number of Asian countries have also been rolling out policy papers, laws and guidelines concerning AI regulation, embracing different emphases and approaches.

The Hong Kong office of the Heinrich Böll Stiftung (hbs) and the Association of Pacific Rim Universities (APRU) invite experts and interested audiences from the Asia-Pacific region and Europe to a series of three webinars to discuss current ideas and approaches to the regulation of AI, including: What kind of regulatory regime can put effective checks on misuse or socially dangerous developments without harming technological progress in the field? How can accountability of AI-supported decision-making be secured if the details of the process cannot be fully and transparently explained? How is it possible, in an environment of large-scale data usage, to safeguard privacy and data protection? The series seeks to share best practices, developments and governance frameworks, and to deepen insights into how to address AI-related governance and policy challenges globally.

Activities

Together we held three joint online expert forums for Asia-Europe dialogue on AI regulation and governance, addressing three critical themes that stand at the frontier of current attempts to develop AI regulatory policy and are likely to shape how AI will be implemented in global industries and societies. Participants included governmental and non-governmental actors and experts from Asia and Europe involved in the wider process of tech regulation. Deliverables included three webinars, video recordings and web articles, followed by a policy insight brief developed from the proceedings.
The Heinrich Böll Stiftung (hbs), headquartered in Germany with a global network of more than 30 offices, is involved in the discussion of regulatory and governance issues of digitalization, especially through its Brussels, Washington and Hong Kong offices and its head office in Berlin. hbs is networked with relevant actors especially in Europe, including civil society, members of parliament, policy-makers and other experts involved in the EU's AI law initiative. Visit their website here.
Regulating AI: Protection of Data Rights for Citizens and Users
Event Time: 09:30-10:30 CEST (GMT+2) / 13:00-14:00 India (GMT+5.5) / 16:30-17:30 South Korea (GMT+9)
June 15, 2022
Regulating AI: Explainable AI
Event Time: 09:30-10:30 CEST (GMT+2) / 15:30-16:30 Singapore, Manila & Hong Kong (GMT+8)
May 25, 2022
Regulating AI: Risk-based Approach of AI Regulation
Event Time: 08:30-09:30 CEST (GMT+2) / 14:30-15:30 Hong Kong (GMT+8)
May 5, 2022
APRU and Government Partners Organize Workshop to Strengthen AI Policy in the Asia-Pacific Region
On 31 August 2022, the Office of National Higher Education, Science, Research and Innovation Policy Council (NXPO) of Thailand, in close collaboration with the National Electronics and Computer Technology Center (NECTEC), the National Science and Technology Development Agency, and the Institute of Field Robotics (FIBO) under King Mongkut's University of Technology Thonburi, co-hosted a workshop to review research proposals driving the "AI for Social Good: Strengthening Capabilities and Government Frameworks in Asia and the Pacific" project. Co-hosts of the event included the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP), the Association of Pacific Rim Universities (APRU), Google.org, the Australian National University (ANU), and leading universities and research institutes in Thailand and abroad.

In the workshop, four AI policy research proposals were presented and reviewed by the experts: 1) AI in Pregnancy Monitoring: Technical Challenges for Bangladesh, 2) Mobilizing AI for Maternal Health in Bangladesh, 3) Responsible Data Sharing, AI Innovation and Sandbox Development: Recommendations for Digital Health Governance in Thailand, and 4) Raising Awareness of the Importance of Data Sharing and Exchange to Advance Poverty Alleviation in Thailand.

NXPO Policy Specialist Dr. Soontharee Namliwal presented the background and importance of the project in Thailand and introduced the project members from Thailand, namely NXPO, NECTEC and FIBO under King Mongkut's University of Technology Thonburi. Dr. Kommate Jitvanichphaibool, NXPO Senior Division Director, and Dr. Suttipong Thajchayapong, Leader of the NECTEC Strategic Analytics Networks with Machine Learning and AI Research Team, provided additional information on the research and application of AI in Thailand, namely 1) the poverty alleviation policy, 2) the healthcare system and guidelines for data collection, and 3) the Personal Data Protection Act B.E. 2562 and the policy and guidelines for personal data protection. The experts also offered suggestions on the two projects submitted by Thailand to improve their coverage and maximize the benefits to countries in the Asia-Pacific region.

Initiated in 2021, AI for Social Good: Strengthening Capabilities and Government Frameworks in Asia and the Pacific is a collaboration between UNESCAP, APRU and partners. Under the project, UNESCAP and APRU, with funding from Google.org, established a multi-stakeholder network to support the development of country-specific AI governance frameworks and national capabilities. For more information on this project, please visit here. View the article in a Thai version here.
September 6, 2022
No Easy Answers on Protection of AI Data Rights, Webinar by HBS and APRU Shows
On June 15, a webinar held jointly by the Hong Kong office of the Heinrich Böll Stiftung (hbs) and the Association of Pacific Rim Universities (APRU), a consortium of leading research universities in 19 economies of the Pacific Rim, highlighted the complexity of data rights for citizens and users, with risks deriving from both under-regulation and over-regulation of AI applications. The webinar, held under the theme Protection of Data Rights for Citizens and Users, completed a joint hbs-APRU series of three webinars on regulating AI.

The series came against the backdrop of ever more AI-based systems leaving the laboratory stage and entering our everyday lives. While AI enables private sector enterprises and governments to collect, store, access and analyse data that influence crucial aspects of life, the challenge for regulators is to strike a balance between the data rights of users and the rights of enterprises and governments to make use of AI to improve their services.

The webinar's three speakers, representing an NGO network, academia and the private sector, explained that fair use of personal data should be protected while abusive manipulation and surveillance should be limited. At the same time, regulators should leave reasonable room for robust innovation and effective business strategies and facilitate the effective operation of government bureaus in delivering public services.

"We not only talk about the use of personal data but also a broader range of fundamental rights, such as rights to social protection, non-discrimination and freedom of expression," said Sarah Chander, Senior Policy Adviser at European Digital Rights (EDRi), a Brussels-based advocacy group leading the work on AI policy and specifically the EU AI Act. "Besides these rights in an individual sense, we have also been looking into AI systems' impact on our society, impact on broader forms of marginalization, potential invasiveness, as well as economic and social justice, and the starting point of our talks with the different stakeholders is the question of how we can empower the people in this context," she added.

M. Jae Moon, Underwood Distinguished Professor and Director of the Institute for Future Government at Yonsei University, whose research focuses on digital government, explained that governments are increasingly driven to implement AI systems by their desire to improve evidence-based policy decision-making. "The availability of personal data is very important to make good decisions for the public interest, and, of course, privacy protection and data security should always be ensured," Moon said. "The citizens, for their part, are increasingly demanding customized and targeted public services, and the balancing of these two sides' demands requires good social consensus," he added. Moon went on to emphasize that citizens, after consenting to the use of their personal data by the government, should be able to track how the data is used and to withdraw their consent.

Sankha Som, Chief Innovation Evangelist of Tata Consultancy Services, explained that the terms Big Data and AI are often intertwined despite describing very different things. According to Som, Big Data is about managing the input side of AI and drawing insights from the data, whereas AI is about predictions and decision-making.
"If you look at how AI systems are built today, there are several different Big Data approaches used on the input side, but there are also processing steps such as data labelling which are AI-specific, and many issues related to AI actually come from these processing steps," Som said. "Biases can, intentionally or unintentionally, cause long-term harm to individuals and groups, and they can creep into these processes, so it will take regulation not only on the use of input data but also on end use, while at the same time complying with enterprise-specific policies," he added.

The webinar was moderated by Dr. Axel Harneit-Sievers, Director of the Heinrich Böll Stiftung Hong Kong Office. The series' previous two webinars were held in May under the themes Risk-based Approach of AI Regulation and Explainable AI.

More information
Listen to the recording here.
Find out more about the webinar series here.

Contact Us
Lucia Siu
Programme Manager, Heinrich Böll Stiftung, Hong Kong, Asia | Global Dialogue
Email: Lucia.Siu [at] hk.boell.org
Christina Schönleber
Senior Director, Policy and Research Programs, APRU
Email: policyprograms [at] apru.org
June 27, 2022
Webinar by Heinrich Böll Stiftung and APRU Takes Deep Dive into Explainable AI
On May 25, a webinar held jointly by the Hong Kong office of the Heinrich Böll Stiftung (hbs) and the Association of Pacific Rim Universities (APRU) highlighted that many of the algorithms that run artificial intelligence (AI) are shrouded in opacity, with expert speakers identifying approaches to making AI much more explainable than it is today. The webinar, held under the theme Explainable AI, was the second in a joint hbs-APRU series of three webinars on regulating AI. The series comes against the backdrop of ever more AI-based systems leaving the laboratory stage and entering our everyday lives.

While AI algorithmic designs can enhance the robustness and predictive accuracy of applications, they may involve assumptions, priorities and principles that have not been openly explained to users and operations managers. The proposals of "explainable AI" and "trustworthy AI" are initiatives that seek to foster public trust, informed consent and fair use of AI applications. They also seek to counter algorithmic bias that may work against the interests of underprivileged social groups.

"There are many AI success stories, but algorithms are trained on datasets and proxies, and developers too often and unintentionally use datasets with poor representation of the relevant population," said Liz Sonenberg, Professor of Information Systems at the University of Melbourne, who featured as one of the webinar's three speakers. "Explainable AI enables humans to understand why a system decides in a certain way, which is the first step to questioning its fairness," she added. Sonenberg explained that the use of AI to advise a judicial decision-maker of a criminal defendant's risk of recidivism, for instance, is a development that should be subject to careful scrutiny. Studies of one such existing AI system suggest that it offers racially biased advice, and while this proposition is contested by others, the concerns raise the important issue of how to ensure fairness.

Matthias C. Kettemann, Head of the Department for Theory and Future of Law at the University of Innsbruck, pointed out that decisions on AI systems' explanations should not be left solely to lawyers, technicians or program designers. Rather, he said, the explanations should be made with a holistic approach that investigates what sorts of information people really need. "The people do not need to know all the parameters that shape an AI system's decision, but they need to know what aspects of the available data influenced those decisions and what can be done about it," Kettemann said. "We all have the right of justification if a state or machine influences the way rights and goods are distributed between individuals and societies, and in the next few years, it will be one of the key challenges to nurture Explainable AI to make people not feel powerless against AI-based decisions," he added.

Brian Lim, Assistant Professor in the Department of Computer Science at the National University of Singapore (NUS), researches how to improve the usability of explainable AI by modeling human factors and applying AI to improve decision-making and user engagement towards healthier and safer lifestyles. Speaking at the webinar, Lim explained that one of the earliest uses of explainable AI is to identify problems in the available data. Then, he said, the user can investigate whether the AI reasons in a way that follows the standards and conventions of the domain concerned.
"Decisions in the medical domain, for instance, are important because they are a matter of life and death, and the AI should be like the doctors who understand the underlying biological processes and causal mechanisms," Lim said. "Explainable AI can help people to interpret their data and situation to find reasonable, justifiable and defensible answers," he added.

The final webinar will be held on June 15 under the theme Protection of Data Rights for Citizens and Users. The event will address the challenges for regulators in striking a balance between the data rights of citizens and the rights of enterprises and states to make use of data in AI.

More information
Listen to the recording here.
Find out more about the webinar series here.
Register for the June 15th session here.

Contact Us
Lucia Siu
Programme Manager, Heinrich Böll Stiftung, Hong Kong, Asia | Global Dialogue
Email: Lucia.Siu [at] hk.boell.org
Christina Schönleber
Senior Director, Policy and Research Programs, APRU
Email: policyprograms [at] apru.org
June 1, 2022
Heinrich Böll Stiftung and APRU Discuss Risk-based Governance of AI in First Joint Webinar
The Hong Kong office of the Germany-based Heinrich Böll Stiftung (hbs) and APRU successfully concluded the first in a series of three webinars on regulating artificial intelligence (AI). Held on May 5 under the theme Risk-based Approach of AI Regulation, the event constituted a valuable Asia-Europe platform for exchanging insights on the risks associated with AI and the appropriate regulatory responses.

The webinar series comes against the backdrop of AI reaching a stage of maturity and extensive application across supply chains, public governance, media and entertainment. While industries and societies are quick to take up AI, governments struggle to develop appropriate regulatory frameworks to prevent the immense harm that mismanaged AI could cause. APRU has been pursuing debates in the field of AI policy and ethics since 2016, and in collaboration with UN ESCAP and Google it has set up the AI for Social Good network.

"This joint webinar series comes at the perfect time to bring together experts from Europe and leading thinkers from the highly diverse Asia-Pacific region. We are looking to apply what we have learned to actively support the development and implementation of regulatory frameworks and policies that ensure that AI technology is used for the good of society," said APRU Secretary General Chris Tremewan, emphasizing the importance of collaboration across regional boundaries.

The webinar was moderated by Zora Siebert, Head of Programme, EU Democracy and Digital Policy, Heinrich Böll Stiftung European Union. Siebert pointed out that the European Commission unveiled its draft AI Act (AIA) in April 2021, accelerating an active shaping process in the European Parliament. She noted that policymakers in the U.S. and the EU have been keen to align on AI policy, with both sides wishing to enhance international cooperation.

Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, explained that AI can hardly be regulated in a generic way and will require novel regulatory approaches instead. "Since AI is a platform, it is going to be much like electricity that is in all our devices, and there is no generic way to regulate electricity," Walsh said. "The EU AI Act will set an important precedent, but it will depend on how it is going to be implemented and on the sorts of expertise the EU is going to have, because the people who are going to be regulated have vast resources," he added.

Alexandra Geese, Member of the European Parliament for the Greens/EFA and coordinator for the Greens/EFA in the Special Committee on Artificial Intelligence in a Digital Age (AIDA), picked up on Walsh's electricity metaphor, stressing that "we want to be the ones who switch the lights on and off, as opposed to leaving the decisions to the machines."

Jiro Kokuryo, Professor at the Faculty of Policy Management at Keio University in Japan, provided an alternative perspective from East Asia, explaining that society and technology should be allowed to co-evolve rather than be forced into a static process. "Nevertheless, Japan aligns completely with the EU in terms of human rights protection, and the EU's risk-based approach is also agreeable," Kokuryo said.

The second webinar will be held on May 25 on the topic Explainable AI. The proposals of "explainable AI" and "trustworthy AI" are initiatives to create AI applications that are transparent, interpretable and explainable to users and operations managers.
The final webinar will be held on June 15 on the topic Protection of Data Rights for Citizens and Users. The webinar will address the challenges for regulators in striking a balance between the data rights of citizens and the rights of enterprises and states to make use of data in AI.

More information
Listen to the recording here.
Find out more about the webinar series here.
Register for the May 25th session here.

Contact Us
Lucia Siu
Programme Manager, Heinrich Böll Stiftung, Hong Kong, Asia | Global Dialogue
Email: Lucia.Siu [at] hk.boell.org
Christina Schönleber
Senior Director, Policy and Research Programs, APRU
Email: policyprograms [at] apru.org
May 12, 2022