Transforming Higher Education: APRU Publishes Generative AI Whitepaper and Project Report
In response to the growing interest in generative AI and its transformative impact on higher education, the Association of Pacific Rim Universities (APRU) recently published a comprehensive whitepaper along with a project report detailing the outcomes of “Generative AI in Higher Education,” an 18-month project supported by Microsoft.
Whitepaper Framework
The whitepaper, titled “Generative AI in Higher Education: Current Practices and Ways Forward,” was released in January 2025. It was authored by Professor Danny Liu from The University of Sydney and Professor Simon Bates, Vice-Provost and Associate Vice-President, Teaching and Learning, The University of British Columbia, who also served as the academic lead for the project. The work aims to serve as a roadmap for institutions to develop comprehensive AI strategies that align with their core educational values.
The authors developed the ‘CRAFT’ framework, outlining five interdependent elements essential for successful AI integration:
Culture represents both the greatest challenge and opportunity, requiring institutions to rethink their role in an AI-enabled world.
Rules must evolve from restrictive policies to enabling frameworks that encourage innovation while ensuring ethical governance.
Access remains a critical equity issue, as AI risks exacerbating existing digital divides without deliberate interventions to ensure equitable access to tools, infrastructure, and support.
Familiarity emphasizes the need for systematic development of AI literacy among all stakeholders.
Trust is identified as the foundation for progress, requiring transparency, collaboration, and demonstrated value across all levels of engagement.
The whitepaper calls for immediate, sector-wide action, proposing two key priorities:
Formation of collaborative clusters to foster cooperation among universities in areas such as AI application development, assessment redesign, and faculty training.
Elevation of students as partners through peer networks, ambassador programs, and co-design initiatives.
The authors stress that success will require fundamental transformation rather than incremental adaptation.
APRU Generative AI Project
The whitepaper is a main output of the Generative AI in Higher Education project launched in 2023 under APRU’s Future of University Working Group. The project aimed to explore the ways that generative AI can shape the future of higher education. The final report of the project, titled “Future Universities in a Generative AI World: Navigating Disruption to Direction,” was published in February 2025, providing a narrative of the project’s activities and workshop methodologies.
Over 18 months until December 2024, the project brought together over 70 participants, including academic experts, educators, students, and industry representatives, who contributed case studies, attended workshops, and provided strategic advice.
It was delivered in two phases, with Phase I (September-December 2023) collecting 33 case studies from APRU member universities and partner institutions to map the current use of generative AI in education and institutional operations.
Phase II was implemented throughout 2024 and included three workshops:
Sensemaking Workshop (virtual) in March,
Foresight Workshop in June hosted at The Hong Kong University of Science and Technology (HKUST), and
Creative Sandbox session (virtual) in August.
These workshops provided a platform for participants to share practices, envision long-term futures, and identify critical considerations for institutions preparing for an AI-enabled world. The whitepaper and final project report compile the outcomes of these workshops and summarize the case studies.
“We trust the whitepaper will influence policies and support decision-making, thereby promoting a broader reimagination of universities as we enter the second quarter of the 21st century,” APRU Chief Executive Thomas Schneider said.
White Paper Offers HE a Balanced Plan for AI Engagement
Amid explosive interest in generative AI and rising concern about its impact on higher education, the Association of Pacific Rim Universities (APRU) this week published a white paper on the future of generative AI in higher education as part of APRU’s “University of the Future” initiative.
The 18-month project, backed by Microsoft, set up a network within APRU’s 60 member universities in Asia, the Pacific, North and South America, to gain a deeper understanding of the opportunities and challenges generative AI (genAI) poses for higher education and identify ways to address knowledge gaps.
GenAI tools, such as ChatGPT and DALL·E 2, swiftly produce content, including text and images, that can be difficult to distinguish from human-produced content.
“Universities are currently grappling with these implications, and medium- and longer-term strategies will require a better shared understanding of these tools: how they work and how to balance risks and benefits,” said APRU Chief Executive Thomas Schneider.
This article is part of a series on Pacific Rim higher education and research issues published by University World News and supported by the Association of Pacific Rim Universities. University World News is solely responsible for the editorial content.
“Higher education is now at a stage where it needs to transition to a holistic, supported, and scaffolded approach to generative AI adoption,” the white paper notes, pointing to a “cautious and somewhat piecemeal approach to generative AI” so far.
The project’s academic lead, Simon Bates, vice-provost and associate vice-president, teaching and learning at the University of British Columbia (UBC), Canada, told University World News the aim was to “take a pulse check at a point in time” of how APRU universities are coming to terms with the implications of these tools and their impact on higher education, and to get universities thinking about the future.
Going beyond statements of principle
Initially, when OpenAI’s ChatGPT and other generative AI (genAI) tools were released in late 2022, many universities made statements on genAI use, but few went further than that.
“There’s a big gap between those principles and practical actions, whether in teaching and learning or university business processes or research,” Bates said. “This white paper aims to support this next stage” – with a balanced plan of action for institutions.
“At the same time as embracing the tools, we have to be deliberate about protecting elements of the teaching and learning experience, the same for graduate students with the research experience, that should not be short-circuited,” said Bates. “Getting that balance right will be the big challenge for universities in the next few years.”
Larry Nelson, Microsoft’s Asia regional business leader for education, said that because “AI has been around and integrated into a lot of things that we already do, combined with the introduction of ChatGPT, the importance of generative AI and AI in education is uniquely profound”.
He pointed to a need to work with universities. “We often overestimate the short-term impact of some of these innovations and changes and underestimate the long-term impact,” he told University World News. “So much of the innovation has taken place in universities; it makes a lot of sense to get involved, get engaged, and partner [with universities] around that,” he added.
CRAFT Framework
Danny Liu, professor of educational technologies, University of Sydney, Australia, authored the white paper, titled “Generative AI in Higher Education: Current Practices and Ways Forward”.
“Universities were stuck in inaction. They didn’t quite know where to start,” Liu told University World News.
The white paper devised the CRAFT framework based on literature reviews and feedback from several APRU-organised workshops. It covers five key elements (culture, rules, access, familiarity, and trust) to help universities assess their current state and identify next steps in each of these areas.
For responsible integration of genAI into education, research, and operations, universities need a balance of rules, access, and familiarity (with genAI tools). A lack of one or more of these may lead to ethical, privacy, security, or other challenges, according to the white paper.
These are underpinned by ensuring trust between students, educators, leadership, and partners such as industry, government, the community, and AI itself. All these aspects must be part of the local and regional culture of the institutions.
“We think all five elements (of CRAFT) are essential,” said Liu. “There are universities that are progressing better in terms of the rules, or access, or familiarity. But no university is up there for all five,” he noted.
Trust can be built via rules on responsible use of AI. But, for example, rules barring staff from using AI to mark student work would not work, Liu explained, “because it would break the trust between the faculty and students”.
Students should be central to any discussions around rules, according to the white paper. “They are engaged, eager for guidance, and fully aware of how important proficiency with these tools is going to be as they move through and beyond their time at university.”
A culture of genAI acceptance and use
Developing trust, as opposed to eroding it with AI use, “helps, over time, to build culture and change culture” around technology acceptance and use, Liu pointed out.
Or as the white paper puts it: “Do we have a culture that looks far enough into the future so that we are preparing ourselves and our students for a radically transformed environment?”
Liu noted: “The future is generally positive, as long as we can shift the culture of higher education. It depends on whether we can help people to change their mindsets from locking down, restricting, banning, and being scared of genAI, to thinking: it’s here, our students are using it, it’s available for us to use.”
The report suggests that those from emerging economies may have a stronger cultural acceptance of technology as it may be perceived as a route towards economic progress and advancement.
Experimentation in a safe environment
Michelle Banawan, professor at the Asian Institute of Management in the Philippines, attended the workshops that fed into the white paper, which she also peer reviewed. The paper “establishes a framework based on varying perspectives of developed countries and developing countries, so it’s really inclusive”, she told University World News.
It is not just a guide but includes benchmarks for policymaking, formulating pedagogies in higher education, or doing research, she added.
“It encourages us to explore, to encourage co-creation and experimentation, even when we do not have established use cases yet,” she said. “But the approach by which the university is trying to learn and to experiment is a ‘best practice’ in itself.”
UBC adopted a “creative sandbox” approach that encourages faculty and others to experiment with genAI, a practice the white paper says other universities should consider adopting.
Bates explained: “At UBC, we recognised that faculty need time and space to be able to experiment with these tools, to understand in a safe environment where they don’t need to worry about data security or intellectual property [IP], to see how, where, and if they [the tools] would fit within their courses and curriculum.”
It allowed “access to multiple LLMs [large language models] in a secure way, so the IP that faculty might put in their lecture notes or readings doesn’t go back into the model. We also diverted innovation funding to support experimentation in generative AI projects”, he said.
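For illustration, a sandbox gateway of this general shape might sit between campus users and several model providers, enforcing a no-retention policy before any prompt leaves the institution. The sketch below is minimal and hypothetical (the provider call is stubbed and the model names are invented); it is not a description of UBC’s actual system:

```python
from dataclasses import dataclass

@dataclass
class SandboxRequest:
    user_id: str   # campus identity; stays inside the gateway
    model: str     # one of the licensed LLMs
    prompt: str    # lecture notes, readings, draft questions, etc.

# Hypothetical registry of licensed models and their contract terms.
PROVIDERS = {
    "model-a": {"retain_data": False},
    "model-b": {"retain_data": False},
}

def send_to_provider(model: str, prompt: str, retain_data: bool) -> str:
    # Stub standing in for a real API call made under a contract that
    # keeps prompts out of model training (the "IP doesn't go back into
    # the model" guarantee described above).
    assert retain_data is False, "sandbox policy: retention must be off"
    return f"[{model}] response to {len(prompt)} chars of input"

def sandbox_call(req: SandboxRequest) -> str:
    if req.model not in PROVIDERS:
        raise ValueError(f"model {req.model!r} is not licensed for the sandbox")
    # Only the prompt is forwarded; the user's identity never leaves campus.
    return send_to_provider(req.model, req.prompt, **PROVIDERS[req.model])

print(sandbox_call(SandboxRequest("prof-42", "model-a", "Draft a quiz on kinematics")))
```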
Need for balance
But the workshops also found that for all the good genAI tools can bring, “there are also things we might inadvertently lose in universities by going too far and too fast down this road”, Bates pointed out, noting unintended consequences were also observed with other technology-driven shifts such as the spread of social media or the shift online during the COVID-19 pandemic.
Universities need to recognise “the capabilities of these tools for supporting and personalising learning at scale that even the best teachers cannot do, but equally have to balance that with the very strong desire for human interaction and connection to support learning. Defining that balance is the task for universities”.
Nelson pointed to “driving a personalised learning experience in a way that doesn’t eliminate the teacher, the faculty, but helps streamline and create more time and space for them to add value where they add it most, which is working with students and delivering content and resources. A curriculum that extends scholarship is something that an AI platform has some potential to deliver”.
Nelson added: “These are tools that companies are looking for their employees to know how to use. So, being thoughtful about figuring out the right ways in which they can be integrated into our curriculum and integrated into the way students learn and teach around that, and build critical thinking skills in terms of how we use them, is important.”
Cognitive offloading
GenAI can make learning seem frictionless, but real understanding and mastery of any discipline require effort and practice, something universities of the future must address.
In September 2024 OpenAI introduced its latest genAI model, o1, code-named Strawberry. “It’s a cognition model that corrects itself and thinks about its thought processes,” said Banawan. “Universities are now trying to address ‘cognitive offloading’, where students put all the thinking into this technology. We wanted students to be aware of how they think.
“Students need to develop a nuanced view of not only how to use these tools to support learning but when not to rely on them: using them productively whilst avoiding unhelpful cognitive offloading and potential over-reliance,” the white paper states.
An area to explore further with universities, according to Nelson, was “the science of learning itself, and how AI can inform, improve, or advance that, addressing some of the concerns around ‘cognitive offloading’ that may come through the overuse of AI in some cases”.
Importance of collaboration
The white paper notes that “collaboration within and between institutions will be a key to future success for the sector. This could be regional in scope or focused on particular issues of generative AI adoption and application”.
Liu noted that collaboration with AI-aligned industry and those at the forefront, like Microsoft, was important. “They have the connectivity, the clouds, and the foresight to see where technology is headed, and they can only succeed if they work with us, and we can only succeed if we work with them,” he said.
Christina Schönleber, APRU’s chief strategy officer, said: “Amid the transformative impact of generative AI on higher education, fostering multi-stakeholder collaboration in this safe space is more crucial than ever. By continuing to engage university leaders, educators, and students in the region with technology providers and industry partners, we can develop equitable AI solutions that cater to diverse institutional needs.
“These collaborative efforts not only support effective AI adoption but also aim to ensure that our educational systems remain resilient, innovative, and beneficial for the entire academic community.”
January 20, 2025
data.org Launches Asia Pacific Data Capacity Accelerator
FOR IMMEDIATE RELEASE
November 22, 2024
CONTACT
Emma Donelan
[email protected]
Today, with the generous support of the Mastercard Center for Inclusive Growth, data.org launched the Asia Pacific (APAC) Data Capacity Accelerator, the fifth in a growing network of global partners that are building a workforce of purpose-driven data practitioners.
The APAC Data Capacity Accelerator will catalyze the application of data to address systemic financial inclusion challenges – including the critical need to build the data for social impact workforce. In partnership with the Asian Institute of Digital Finance (AIDF) – a university-level institute at the National University of Singapore (NUS) – and the Association of Pacific Rim Universities, this accelerator will produce a cohort of data practitioners and a training model to scale across the region.
“Digital transformation, AI and data all have a role to play in shaping society and driving economies towards financial health and resilience,” said Shamina Singh, founder and president, Mastercard Center for Inclusive Growth. “At Mastercard, we are committed to driving financial inclusion for small businesses, workers, and communities all around the world. We are proud to work with partners such as data.org, the Asian Institute of Digital Finance at the National University of Singapore, and the Association of Pacific Rim Universities to reach the next generation of data practitioners, so they can harness the power of data and AI to support inclusive economic growth in the APAC region.”
The latest Capacity Accelerator Network (CAN) launch announcement came at an event held at NUS. Domain leaders across academia, industry, government, and NGOs came together to discuss shared goals and coordination around developing and upskilling purpose-driven data capacity for inclusive growth.
“data.org works at the intersection of what is possible and what is practical, as increasingly illustrated by the impact of our CAN network partners,” said Danil Mikhailov, executive director of data.org. “We will only reach our goal of training one million purpose-driven data practitioners by 2032 through interdisciplinary, locally-led programs. Our growing and diverse network of partners—including now five Capacity Accelerator Network hubs worldwide—is making connections across sectors and across borders, inspiring a new generation of problem solvers.”
The APAC Data Capacity Accelerator builds on the work being done at hubs in Africa, India, Latin America, and the United States. To date, data.org programs have engaged more than 20 academic partners around the world, applying the power of research and academic expertise to enable social impact organizations to unlock the power of data to meet their missions.
For the APAC Accelerator, AIDF and the Association of Pacific Rim Universities are the primary higher education partners.
“AIDF is proud to host today’s event together with data.org and the Association of Pacific Rim Universities. It’s very exciting to be a part of a movement to empower young people and underprivileged communities, such as small business owners, around the world with the skills they need to be competitive in an increasingly tech-driven workforce,” said Professor Huang Ke-Wei, Executive Director of AIDF. “Our students, regardless of their disciplines, can benefit from exposure to and understanding of data and AI. We hope to create more opportunities for them to apply such critical skills in ways that would be beneficial to the community, society, and the world.”
“This partnership is about tapping into the power of higher education to ensure that our workforce and our communities are not left behind,” said Thomas Schneider, chief executive of the Association of Pacific Rim Universities. “Data science for social impact has the potential of significant societal benefits in areas such as economic mobility, gender equity, and even public health and climate, so we are eager to see how the data practitioners and social impact organizations involved will address this challenge in a way that serves the public good in the Asia Pacific and beyond.”
Today’s event included keynotes on topics such as data and AI driving inclusive growth, the power of collaboration among government and social impact leaders, and the unique challenges and opportunities of AI in social impact. Subject matter experts shared their perspectives through panel discussions on bridging the data talent demand-supply gap, data-driven decision-making in multistakeholder partnerships, and scaling innovation and resources.
-30-
About data.org
data.org is accelerating the power of data and AI to solve some of the world’s biggest problems. By hosting innovation challenges to surface and scale groundbreaking ideas, and elevating use cases of the most effective tools and strategies, we are building the field of data for social impact. By 2032, we will train one million purpose-driven data practitioners, ensuring there is capacity to drive meaningful, equitable impact.
About the Mastercard Center for Inclusive Growth
The Mastercard Center for Inclusive Growth advances equitable and sustainable economic growth and financial inclusion around the world. The Center leverages the company’s core assets and competencies, including data insights, expertise, and technology, while administering the philanthropic Mastercard Impact Fund, to produce independent research, scale global programs, and empower a community of thinkers, leaders, and doers on the front lines of inclusive growth. For more information and to receive its latest insights, follow the Center on LinkedIn, Instagram and subscribe to its newsletter.
About the Asian Institute of Digital Finance
The Asian Institute of Digital Finance (AIDF) is a university-level institute at the National University of Singapore (NUS), jointly founded by the Monetary Authority of Singapore (MAS), the National Research Foundation (NRF), and NUS. AIDF aims to be a thought leader, a FinTech knowledge hub, and an experimental site for developing digital financial technologies, as well as for nurturing current and future FinTech researchers and practitioners in Asia. For more information, please visit: https://www.aidf.nus.edu.sg/.
About the Association of Pacific Rim Universities (APRU)
As a network of leading universities linking the Americas, Asia and Australasia, the Association of Pacific Rim Universities (APRU) is the Voice of Knowledge and Innovation for the Asia-Pacific region. APRU brings together thought leaders, researchers, and policy-makers to exchange ideas and collaborate on practical solutions to the challenges of the 21st century. For more information, please visit: https://www.apru.org/.
November 22, 2024
APRU’s Generative AI in Education Serves as Important Case Study for the 2024 APEC TVET Workshop
The 2024 APEC Industry-Academia Collaboration Workshop, “Best Practices for Inclusive Innovation, Digital Sustainability and Cross-Regional Talent Development,” held in late August by Chinese Taipei’s Working Group to Internationalize Technological/Vocational Education, served as an ideal platform for APRU to present case studies from the APRU-Microsoft collaborative project Generative AI in Higher Education.
The 2024 APEC TVET Workshop gathered 112 education policymakers, industry representatives, and academics from 13 APEC member economies for training aligned with newly emerging industries. The workshop showcased best practices for industry-academia cooperative models that expand the scope of inclusive and innovative talent training, with topics that also addressed ongoing economic challenges.
In his opening remarks via video, Deputy Minister of Education, Chinese Taipei, Ping-Cheng Yeh emphasized that in the face of the complex challenges of the digital age, it is crucial for APEC member economies to collaborate, promote cross-regional talent development, and jointly create a more inclusive, sustainable, and innovative future.
Dr. Sean McMinn, Director of the Center for Education Innovation at The Hong Kong University of Science and Technology, explored the current challenge of cultivating digital transformation and innovation talent in a presentation that provided first-hand insights into the project’s findings and contributed directly to the workshop’s goal of showcasing best practices for industry-academia cooperation.
“The APRU-Microsoft project aims to map a baseline snapshot of the adoption of Generative AI tools across APRU member institutions, identifying specific needs and knowledge gaps that can be addressed in future phases of the work,” said Dr. McMinn.
“It is a particularly urgent endeavor for Hong Kong-based APRU, given that Hong Kong leads globally in predictions by Goldman Sachs and McKinsey for exposure of full-time jobs to automation by AI.”
Dr. McMinn presented two case studies, both touching on AI readiness (literacy) of students and teachers who utilize generative AI tools to solve problems.
Case 1 explored the use of Generative Artificial Intelligence (GenAI), specifically ChatGPT, as a ‘design assistant’ in educational course design. A step-by-step approach was adopted to explore how GenAI can be used to complete tasks like defining and mapping course intended learning outcomes (ILOs) across course activities and assessments.
Case 2 involved an executive undergraduate course where students are encouraged to use GenAI tools to complete assessed tasks and reflect on the experience.
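As a rough illustration of the ‘design assistant’ pattern in Case 1, the sketch below assembles a step-by-step prompt asking a model to map ILOs to activities and assessments. The helper names, prompt wording, and sample ILOs are invented for the example, and the model call is stubbed; they are not the case study’s actual materials:

```python
ILOS = [
    "Explain core concepts of supervised learning",
    "Critically evaluate AI outputs for bias",
]
ACTIVITIES = ["Weekly quiz", "Group project", "Reflective essay"]

def build_mapping_prompt(ilos, activities):
    # One design task per request, mirroring the step-by-step approach
    # the case study describes.
    lines = [
        "You are a course design assistant.",
        "Map each intended learning outcome (ILO) to the activities and",
        "assessments that best evidence it, with a one-line justification.",
        "ILOs:",
    ]
    lines += [f"  {n}. {ilo}" for n, ilo in enumerate(ilos, start=1)]
    lines.append("Activities and assessments:")
    lines += [f"  - {a}" for a in activities]
    return "\n".join(lines)

def chat(prompt: str) -> str:
    # Stub for a chat-completion call to a genAI tool such as ChatGPT.
    return "(model output would appear here)"

print(chat(build_mapping_prompt(ILOS, ACTIVITIES)))
```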
“The case studies provided us with a deeper understanding of how human-in-the-loop is integral to the successful use of AI, while adding supporting evidence that students learning metacognitive skills is becoming increasingly important in the context of using AI,” Dr. McMinn explained.
This workshop was a key outcome of APEC’s HRDWG-EDNET Project. APRU is a guest member of the APEC Human Resources Development Working Group Education Network. The Human Resources Development Working Group (HRDWG), established in 1990, conducts work programs on developing human resources, focusing on issues ranging from education and capacity building to labor and social protection. The HRDWG also helps build cultural awareness, promotes gender equality, and is responsible for including disability issues in its workplan. Its mission is “Sharing knowledge, experience, and skills to strengthen human resource development and promote sustainable and inclusive economic growth.”
September 9, 2024
AI in maternal health – how research and policy intertwine
University-based academics from across Asia have been working with Bangladesh policymakers to identify gaps and bottlenecks in using artificial intelligence in maternal healthcare, and to prepare the ground for an upgrade in health services that would in future use advanced technologies such as AI.
The initiative is part of the Association of Pacific Rim Universities (APRU) AI for Social Good project, in collaboration with the United Nations Economic and Social Commission for Asia and the Pacific (ESCAP) in Bangkok.
It involved researchers from the National University of Singapore, the Korea Advanced Institute for Science and Technology, Australian National University and universities in Hawaii in providing research to support the Bangladesh government in developing AI policy and building AI capabilities in pregnancy monitoring.
They worked closely with the Bangladesh Aspire to Innovate (a2i) programme of the government’s ICT Division and Cabinet Division, as well as Bangladesh policymakers and health professionals during the two-year APRU project funded by Google, which released its final report last month. The report focuses specifically on Bangladesh and Thailand.
This article is part of a series on Pacific Rim higher education and research issues published by University World News and supported by the Association of Pacific Rim Universities. University World News is solely responsible for the editorial content.
In the case of Bangladesh, the report concluded that despite technical barriers, an AI system is viable with the development of IT infrastructure to integrate AI and localised AI models using available data from hospitals and clinics.
For example, AI could provide a better understanding of at-risk pregnancies by analysing data collected from different local sources, or through the development of an interactive ‘pregnancy assistant’ or monitoring system.
Arifur Rahman, assistant professor in the department of computer science and engineering at Hawai’i Pacific University, told University World News: “The project was about identifying how artificial intelligence can help in pregnancy monitoring in Bangladesh, then to improve the readiness of Bangladesh to use AI for monitoring.”
Rahman, who is also affiliated to the University of Hawaii, Manoa, said autonomous AI “will be able to predict something is going to happen, and once you can predict, you can take preventive measures to stop it. So, for pregnancy monitoring, there is a prospect in AI. But how much prospect, that still has to be seen”.
While many research groups around the world are working on AI in pregnancy monitoring, he noted that this was very much at the research stage. “It hasn’t been applied yet,” he said.
Rahman added that much ongoing research involves intermediate level AI, which involves input of medical data “and then the [AI] output helps doctors to make a decision – but this is not AI making an autonomous decision”.
Technological challenges to overcome
Rahman’s task within the project was to look at Bangladesh’s current technological challenges and what would have to be solved to make progress in using AI in pregnancy monitoring.
This included what computer hardware is present in hospitals, clinics, and villages, how robust the connections are, and the state of mobile phone penetration and prospects for digital networking.
He found that while mobile phone penetration in Bangladesh is, perhaps surprisingly, almost 100%, computer penetration is lacking, and there are other high barriers beyond digital access, such as literacy, social norms, and the inability to act on information, particularly among the most at-risk groups.
Rahman acknowledged that there was some way to go before autonomous AI systems can be incorporated into maternal care in the country. Nonetheless, there were ‘stepping stones’ that could be put in place.
“For predictive AI to be robust could take some time. But instead of just waiting, the country should make some improvements so that when these predictive AI models are ready, they can implement it right away,” he noted.
Bangladesh and SDGs
According to Dr Shabnam Mostari, public health specialist in digital health at a2i, maternal health in Bangladesh has improved greatly in the past two decades. “But the rate of improvement has stalled in recent years and further inputs and efforts are needed to support the health of expecting and new mothers.”
She told University World News that 90% of expectant mothers in the country receive antenatal care, but only 15% receive quality antenatal care and only 30% have access to quality postnatal care by qualified health professionals.
Often complications in maternal and child health arise from delays in getting to quality health care such as doctors or hospitals. This is exacerbated by low levels of doctors per capita – on average just one doctor for 10,000 people, she explained.
“The government’s objective is to reduce maternal mortality and to do that we need to look at a few indicators,” she said. Key among these is to identify pregnant women and reduce the burden on health service providers while also using the findings of the APRU-ESCAP collaboration for a new project for continuous pregnancy monitoring.
“From the beginning of pregnancy, to delivery, [the intention is] to capture the everyday patient data – movement, blood pressure, sleep patterns, temperature, to be stored in the cloud.” Then, at some point in future, “AI will assist the service provider to identify high-risk pregnant women to detect pregnancy complications early,” she said.
“It will help the service provider to focus on high-risk pregnant women.”
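To make the idea concrete, a first stepping stone toward such a system could be a simple rule-based screen over the everyday readings described above, flagging cases for a health worker to prioritise. The fields and thresholds below are illustrative placeholders only, not clinical guidance or the project’s actual design:

```python
from dataclasses import dataclass

@dataclass
class DailyReading:
    patient_id: str
    systolic_bp: int      # mmHg
    diastolic_bp: int     # mmHg
    temperature_c: float
    sleep_hours: float

def flag_high_risk(r: DailyReading) -> list:
    # Illustrative thresholds only; a deployed system would rely on
    # clinically validated criteria and, eventually, predictive models
    # trained on locally collected data.
    reasons = []
    if r.systolic_bp >= 140 or r.diastolic_bp >= 90:
        reasons.append("elevated blood pressure")
    if r.temperature_c >= 38.0:
        reasons.append("fever")
    if r.sleep_hours < 4:
        reasons.append("severe sleep disruption")
    return reasons

reading = DailyReading("BD-0001", systolic_bp=148, diastolic_bp=95,
                       temperature_c=37.2, sleep_hours=6.5)
reasons = flag_high_risk(reading)
if reasons:
    print(f"prioritise {reading.patient_id}: {', '.join(reasons)}")
```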
Getting ready for AI in healthcare
Continuous pregnancy monitoring is essential to achieving Sustainable Development Goal 3 of health at every stage of life, according to Professor Olivia Jensen, deputy director and lead scientist (environment and climate) at the National University of Singapore (NUS), who was the research lead of the NUS team for the project.
AI, she said, will transform treatment and diagnosis as well as the relationship between health professionals and patients.
Digitalising health systems enhances monitoring and improves data quality. Higher quality data is important for the eventual application of machine learning techniques for AI analysis.
But other foundations need to be laid in the absence of a universal electronic health record system in Bangladesh. The research team assessed options for developing localised AI models using available data from hospitals and clinics.
User-centric design is also important. “Systems and applications should be designed to meet the needs of healthcare providers and expecting mothers, based on a strong foundation of evidence,” she told University World News.
Jensen pointed to the importance of interdisciplinary collaboration, which “enabled us to incorporate perspectives from academic research on risk perception and communication, which were highly regarded by our government partners”.
The project report recommended digitalising standard antenatal care data entry in the public health system, as well as implementing a mobile phone-based system for appointment tracking with automated reminders for expecting mothers and community health workers. Jensen also pointed to the system’s potential use in route planning for health workers in areas with poor transport.
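A minimal sketch of such a reminder mechanism might look like the following, which computes which antenatal visits fall due soon so that reminders can be sent to mothers and community health workers. The visit schedule, lead time, and patient record are assumptions for illustration, not the report’s specification:

```python
from datetime import date, timedelta

# Illustrative antenatal schedule: a visit at each of these gestational weeks.
VISIT_WEEKS = [12, 20, 26, 30, 34, 36, 38, 40]

def due_reminders(lmp: date, today: date, lead_days: int = 3):
    """Yield (week, visit_date) for visits falling within lead_days of today.

    lmp: first day of the last menstrual period, used to date the pregnancy.
    """
    for week in VISIT_WEEKS:
        visit_date = lmp + timedelta(weeks=week)
        if 0 <= (visit_date - today).days <= lead_days:
            yield week, visit_date

# Hypothetical patient record and run date.
lmp, today = date(2023, 10, 13), date(2024, 3, 1)
for week, visit in due_reminders(lmp, today):
    print(f"SMS reminder: week-{week} antenatal visit on {visit}")
```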
“We gave a lot of emphasis in our work to the community health care workers, people who are not qualified health professionals, but nevertheless play a terribly important role in the frontline of the delivery of health care services, including maternal health care,” she explained.
According to Jensen, an issue that came up during research was the possible use of digital devices by pregnant women themselves. “This has been developed in other countries, and also in Bangladesh, so there are already some apps or online resources like chatbots that provide information to the woman or her family,” she said.
“But we decided that was not going to make a lot of difference to the women and their key outcomes that we were tracking,” she said, pointing to the most vulnerable and marginalised groups.
Anir Chowdhury, policy advisor of the Bangladesh government at a2i, and the government lead for the project, said: “Digital inequality matches exactly maternal health and equality. So, the same people who don’t have access to reliable digital services are the same women who are currently not getting the maternal care they need, and find it most difficult to get to a clinic to receive assistance in the case of a complicated delivery.”
Importance of academic research
Chowdhury explained that the goal for policy-makers was to “collect enough data so that we can identify the high-risk pregnancies and change the visit schedules of health workers to prioritise them. But the question is what is the right data to be collected?”
The research input was valuable in providing comprehensive understanding for policy-makers, and identifying the gaps to narrow, he said. “So hopefully this will lead to a solution that is affordable and scalable – if you collect that data for 100 women in a location, it can scale up to 100,000 or a million, that’s the challenge.”
The second aim, which he noted would take longer, was to train an AI system to help doctors and health workers in their decisions. But this requires a lot of data to be collected “and perhaps a lot of experimentation and permutations and combinations of what the human person does and what the AI does”, said Chowdhury, adding: “It’s a long process but we wanted to get started and some of that has been achieved, but it will take years to get to the point of scalability and accuracy.”
Working with academic researchers was important. “APRU was very careful and deliberate in involving the policy-makers throughout the whole process. In the beginning, the requirements and challenges from the policy-makers went into the design of the research.
“Throughout the whole research process, the guidance and debate between the two parties really helped. There was mutual respect and trust and proof,” Chowdhury said.
“Of course there was a bit of difficulty in the beginning, because the language, the perspectives are different,” he acknowledged. “But we figured out a common language of communication and common vision and this was a good model for us to follow.
“AI is not well understood by policy-makers, there is a great fear about AI, and whether AI is ‘taking over’. And that fear is actually growing. So this collaboration between the academicians who understood much, much better than policy-makers, and could talk about specific examples from other countries that have had successes and failures, really informed the policy-makers,” he said.
APRU’s Chief Strategy Officer Christina Schönleber explained that the research team was a large one with multiple universities involved, brought together by APRU from among more than 60 member universities and based on the expertise needed to frame the research questions.
“For APRU it was about giving governments information that they may already know they need, or they are not aware they need, about the implementation of AI for some of the services that they would deliver, and it’s about the impact,” Schönleber said. “We want to make sure they don’t just use AI for the sake of using AI and because everybody wants to use it.”
Chowdhury also underlined the importance of involving international universities, to learn about successes and failures from other countries “then juxtaposing it to our context”. He added: “That could not have been done by local researchers, they would not have had that exposure from other countries.”
He stressed: “This is not just a research project. It is leading to actionable projects, and interventions that we’ll actually try out.”
April 13, 2024
Researchers identify gaps in implementing AI in healthcare
By Yojana Sharma
As published on University World News
As artificial intelligence-assisted technologies are developing rapidly in areas such as the healthcare sector, university researchers are helping policy-makers to identify the gaps and barriers to rapid implementation.
As part of the Association of Pacific Rim Universities’ (APRU) AI for Social Good project, in collaboration with the United Nations Economic and Social Commission for Asia and the Pacific in Bangkok, university-based academics have been working with Thai policy-makers to assess gaps and bottlenecks in implementing AI in healthcare.
The academics then support the Thai government in developing policies to help build AI capabilities.
The two-year APRU project funded by Google, which has just ended, “aimed to work with government partners in Asia and the Pacific to grow sound and transparent AI ecosystems that support sustainable development goals”, explained APRU’s chief strategy officer, Christina Schönleber.
Research has already shown that AI can make healthcare more efficient, improve patient outcomes and support medical research. Newer AI such as voice-to-text and generative AI tools for summarising patient data have also proven useful for health workers in the field.
“For Thailand we were looking at barriers and enablers for data sharing for AI healthcare,” explained Jasper Tromp, assistant professor at the National University of Singapore and APRU’s research lead for the project.
“In addition to rigorous research, the Thai partners emphasised the need to be relevant to the Thai people, and they also saw the benefit of researchers coming from different regions, because they could bring knowledge from their own regions,” explained Toni Erskine, professor of international politics at the Australian National University (ANU) in Canberra, who was the research lead for the overall APRU AI for Social Good project.
For artificial intelligence to be useful in countries like Thailand, it is crucial that data can be shared. But many governments are unaware of the specific barriers or enablers for joined up data such as patient data or imaging data for healthcare, Tromp noted.
Limited data availability and varying data storage standards also pose significant challenges to AI development and deployment, the research found.
One of the aims of the APRU project, in collaboration with the Thai Office of National Higher Education Science Research and Innovation Policy Council, was “specifically to inform development of a guideline or protocol to enable data sharing between government institutions, but also between government institutions and private partners, such as companies or universities or external organisations that would use this type of data”, Tromp explained.
AI solutions for Thailand
Thailand is developing its AI capabilities to help bridge gaps in skills and healthcare coverage beyond major cities. But implementing AI-assisted healthcare still has significant hurdles to overcome, and many examples that resolve some of these have been developed in the United States or Europe.
“Many of these AI algorithms are trained in the US or Europe and most of the training data is derived from either white people or African American people and people that do not share the same ethnic background [as Thais], so they might not work as well in the Thai or Asian local context as they do in the context where they’re developed,” said Tromp.
“For both practical as well as economic reasons, Thailand is very eager to develop their own AI industry and apps that can be deployed locally,” he added. In part, this is because some of the AI-driven healthcare systems developed overseas are expensive to acquire and implement. Also, Thailand wants solutions geared to the local context.
Some research work on AI for medical applications has been ongoing within Thailand, with some companies expecting to release them on the market in the near future. “AI has shown a lot of promise in healthcare. It’s being used now in terms of chatbots, and it is being implemented for image recognition,” Tromp said.
What currently exists is fairly general. “But for health records for public health it has to be very high-level data.”
Hurdles identified by research
“The first task was to systematically map these barriers and enablers that have been published by others, for example, in academic literature outside of Thailand, that might influence data sharing, meaningful data collection and quality. And then we tested those barriers locally [in Thailand],” said Tromp.
He noted that, in common with many other countries in the region, in Thailand “people use different software to collect data”. Apart from that, “if you go to lower tiers in health care, such as primary care, or they use paper-based [patient] records, it means you’re only getting access to data from centres that have capabilities to collect it”.
Fragmented healthcare provision means differences in data architecture, standards, and collection, which hamper interoperability. In Singapore, TRUST, a data-sharing platform run by the Ministry of Health and aimed at improving health outcomes, brings all this data together in one place.
The platform includes research data ranging from genomics to socio-economic data and sourced from public health institutions, research institutions and public agencies that allow their anonymised data to be made accessible via TRUST for research purposes.
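A common building block for this kind of platform is pseudonymising records before they leave the source institution. The sketch below shows one generic approach (dropping direct identifiers and replacing the patient ID with a salted hash); it is an assumption for illustration, not a description of how TRUST actually works:

```python
import hashlib

SALT = b"institution-secret-salt"   # held by the data custodian, never shared
DIRECT_IDENTIFIERS = {"name", "phone", "address"}

def pseudonymise(record: dict) -> dict:
    """Drop direct identifiers and replace the patient ID with a salted hash."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256(SALT + record["patient_id"].encode()).hexdigest()[:16]
    out["patient_id"] = token
    return out

shared = pseudonymise({
    "patient_id": "SG-12345",
    "name": "Jane Doe",
    "phone": "+65 0000 0000",
    "diagnosis_code": "O24.4",
    "year_of_birth": 1990,
})
print(shared)   # safe-to-share view: coded ID plus research fields only
```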
Tromp acknowledged, however, that the Singapore example is an expensive one. Limited resources are a significant barrier, with uneven human, technical and financial resources across healthcare institutions. High costs of hardware and software acquisition, installation, and maintenance can hamper quality data collection and sharing, particularly for smaller clinics and hospitals, the research found.
APRU’s final report on ‘AI for Social Good’, which is about to be released, points to a lack of understanding of “the value of data and the importance of data security and privacy. Health literacy issues and confusion around data-sharing parameters also contribute to the challenges. Additionally, the absence of precise data-sharing regulations and guidelines at the political and policy levels creates uncertainty and hampers progress.”
Tromp also noted that there was reluctance to share data, within government but also outside government, such as in hospitals and others that hold healthcare data. In addition, for many people Thailand’s new Personal Data Protection Act, which began to be enforced in 2022, is unclear on how they are able to share data and in what formats. “It was one of our major findings. We are recommending they develop a protocol for this,” Tromp said.
The project also proposed a regulatory ‘sandbox’ to promote innovation within a protected experimental environment with fewer regulatory constraints, so that relevant government departments can figure out what future regulation is appropriate.
The project noted that “the rise of regulatory sandboxes in the health sector has ensued from the phenomenal increase in digital health adoption in many countries”. It was also a recommendation that was of interest to the Thai government, Tromp said.
Working with policy-makers
The research input was valuable, and important in the fast-moving AI environment, Tromp said. “AI has specific challenges for data sharing. Because of the granularity that you request from the data to develop AI, there are very few policy frameworks that address this directly, so it is difficult to copy [from others]. You need new knowledge to inform policy developments.”
International organisations such as the United Nations have good on-the-ground knowledge but rarely work in knowledge generation, Tromp pointed out. “Healthcare systems face a lot of challenges, such as manpower, that require innovations like AI to strengthen, so there is a niche for universities to add to knowledge generation.”
But working together with Thai officials from the outset was important. “With our Thai partners, we had a number of meetings before we even came up with the final research questions and we had a lot of people in those initial meetings,” ANU’s Erskine explained.
The project also had peer reviewers who commented on the drafts the researchers produced. These included Dr Greg Raymond, an assistant professor at ANU who has worked specifically on Thai politics and was able to speak to the Thai government departments and also provide input about the geopolitical and cultural contexts of Thailand that needed to be considered in the research.
“I think this project did a really good job in bridging the gap” between research and policy, said Tromp. “Working with government to inform research priorities is very replicable – it’s an unmet need in the region.”
This article is part of a series on Pacific Rim higher education and research issues published by University World News and supported by the Association of Pacific Rim Universities. University World News is solely responsible for the editorial content.
February 3, 2024
APRU Launches New Project to Explore Generative AI’s Impact on Higher Education
The Association of Pacific Rim Universities (APRU) is launching an initiative on generative AI to facilitate discussions on how to leverage the new technology to shape the future of higher education amid booming interest and rising concerns. The project is supported by Microsoft.
Themed “Generative AI in Education: Opportunities, Challenges and Future Directions in Asia and the Pacific”, the new multi-phase project will establish a network of APRU stakeholders to gain a deeper understanding of the opportunities and challenges that generative AI poses for higher education, and to identify solutions that address knowledge gaps, with a specific focus on equity and inclusion.
As the first phase of the project, APRU is gathering case studies from all 60 member universities illustrating the ways that generative AI tools support education delivery, the broader student experience, and other institutional functions. Subsequently, APRU will carry out thematic workshops and establish a platform for sharing insights and data with stakeholders.
The project’s academic lead is Professor Simon Bates, Vice Provost and Associate Vice President, Teaching and Learning at the University of British Columbia (UBC).
The launch is timely, given that generative AI tools, such as ChatGPT and DALL·E 2, are now widely available and capable of rapidly producing various types of new content, including text and images that can be difficult to distinguish from human-produced content. While these tools offer ample opportunities to enhance learning, they also raise concerns around academic integrity, privacy, bias, and ethics of use.
“Universities are currently grappling with these implications, and medium- and longer-term strategies will require a better shared understanding of these tools: how they work and how to balance risks and benefits,” said APRU Chief Executive Professor Thomas Schneider.
“The new APRU project will attempt to reimagine learning assisted by AI and to ideate potential solutions and tools that can support the APRU universities and their key stakeholders as they navigate the era of generative AI,” he added.
The project is the first of its kind under APRU’s “University of the Future” initiative, a key focus area under the Association’s new holistic approach to strategic planning.
“This project presents a unique opportunity to understand how generative AI is already being deployed in universities and to explore, early on, where challenges lie and ways that educational institutions can respond,” said Mike Yeh, Regional Vice President, Corporate External and Legal Affairs, Microsoft. “Universities are critically important institutions as our societies look to maximize the gains from AI and put guardrails in place to ensure the technology is used responsibly.”
APRU is calling for case studies from members until November 6, 2023. Please see the requirements and template below:
[Call for Contributions] Generative AI in Education: Opportunities, Challenges and Future Directions in Asia and the Pacific
[Template] Generative AI in Education Case Study
October 12, 2023
AI for Social Good Summit Gathered Academics and Gov’t Representatives to Showcase Joint Research Outcomes Enhancing Wellbeing in Southeast Asia
Photos by The Australian National University
The AI for Social Good Summit, convened at The Australian National University in Canberra from July 9 to 11 this year, provided experts from academia and public agencies the opportunity to discuss the results and next steps of four policy-oriented research papers on AI capabilities to address social issues, with a focus on Southeast Asia.
The summit was organized to exchange outcomes from four projects of the “AI for Social Good — Strengthening Capabilities and Governance Frameworks” (AI4SG) collaboration, which was jointly established by the Association of Pacific Rim Universities (APRU), the United Nations Economic and Social Commission for Asia and the Pacific (UN ESCAP) and Google.org in 2021.
Over the past two years, meetings and workshops have been held with government agencies and local experts in Thailand and Bangladesh.
In Thailand, two teams of academics from the National University of Singapore (NUS) and The Australian National University (ANU) worked with the Office of National Higher Education Science Research and Innovation Policy Council (NXPO) and the National Electronics and Computer Technology Center (NECTEC) to support the Thai Government in building AI capabilities in medicine and healthcare, and in poverty alleviation; the two agencies collaborated with one team each.
In collaboration with the Bangladesh Aspire to Innovate (a2i) Programme of the ICT Division and Cabinet Division of Bangladesh, two other teams from NUS, the Korea Advanced Institute of Science and Technology (KAIST) and the University of Hawai’i at Mānoa conducted research to support the Bangladeshi Government in developing AI policy frameworks and building AI capabilities in pregnancy monitoring.
Government representatives from Thailand and Bangladesh and academics discussed research findings at the AI for Social Good Summit 2023 at ANU, Canberra
Professor Toni Erskine, Director of the Coral Bell School of Asia Pacific Affairs at ANU, guided the conception of the research questions in collaboration with the government partners.
“The process of working closely with government agencies from the outset to discuss these problems and co-design research questions makes this project unique and genuinely collaborative,” Professor Erskine said.
Professor Toni Erskine, Director of the Coral Bell School of Asia Pacific Affairs at The Australian National University
During the summit in Canberra, the papers of the four teams were presented: 1) Responsible Data Sharing, AI Innovation and Sandbox Development: Recommendations for Digital Health Governance in Thailand, 2) Raising Awareness of the Importance of Data Sharing and Exchange to Advance Poverty Alleviation in Thailand, 3) Mobilizing AI for Maternal Health in Bangladesh, and 4) AI in Pregnancy Monitoring: Technical Challenges for Bangladesh.
With insights exchanged, the participants reached a consensus on furthering AI capabilities in governance: local context and needs are key drivers of high-impact research collaborations, while trust and common goals are the main success factors for multi-stakeholder partnerships.
Following the summit, an impact assessment of the four projects will be conducted, while a set of country case studies is under development.
Here is the AI for Social Good project webpage.
Here is the detailed AI for Social Good Summit 2023 Program.
August 14, 2023
Public Agencies from Thailand Participated in AI for Social Good Summit
This article is adapted from the original post on NXPO.
Photos by Australian National University
Dr. Soontharee Namliwal, a policy specialist from the International Policy Partnership Division of the Office of National Higher Education Science Research and Innovation Policy Council (NXPO), took part in the AI for Social Good Summit, themed “Strengthening Capabilities and Governance Frameworks in Asia and the Pacific”, held July 9-11, 2023, in Canberra, Australia.
The summit was co-organized by the Association of Pacific Rim Universities (APRU), the United Nations Economic and Social Commission for Asia and the Pacific (UN ESCAP), and the Australian National University (ANU).
Ms. Panchapawn Chatsuwan, a representative from Thailand’s National Electronics and Computer Technology Center (NECTEC) also attended the summit.
The summit served as a platform for participating organizations and policy researchers involved in APRU’s AI for Social Good project to share research findings and discuss issues emerging from the studies.
During the summit, the work of four projects was presented, with two each from Thailand and Bangladesh. The projects included: 1) Responsible Data Sharing, AI Innovation and Sandbox Development: Recommendations for Digital Health Governance in Thailand, 2) Raising Awareness of the Importance of Data Sharing and Exchange to Advance Poverty Alleviation in Thailand, 3) Mobilizing AI for Maternal Health in Bangladesh, and 4) AI in Pregnancy Monitoring: Technical Challenges for Bangladesh.
Joining the meeting virtually, the project leaders from Thailand – Dr. Kommate Jitvanichphaibool (NXPO Senior Division Director) and Dr. Suttipong Thajchayapong (NECTEC Senior Researcher) – delivered presentations on “Responsible Data Sharing, AI Innovation and Sandbox Development: Recommendations for Digital Health Governance in Thailand” and “Raising Awareness of the Importance of Data Sharing and Exchange to Advance Poverty Alleviation in Thailand”.
Apart from the research presentations, Dr. Soontharee shared information on Thailand’s Personal Data Protection Act (PDPA) and provided a review of procedures and best practices regarding data collection, storage, usage, and sharing in the public health domain in developed countries. Ms. Panchapawn added information on the implications of PDPA standards and practices for the data collection, storage, usage, and sharing of the Thai People Map and Analytics Platform (TPMAP), which aims to support evidence-based policy.
The meeting also discussed the application of advanced technologies in sustainable development. Dr. Soontharee presented her viewpoints on this issue: 1) the necessity of root cause analysis on public issues, including poverty and social inequality, 2) the potential of advanced technologies, especially AI and big data, in improving public services including public health services, and 3) the importance of good governance, observing ethics of STI while harnessing their potential.
The representatives from NXPO and NECTEC took this opportunity to meet with the executives of ANU and network with researchers working on Humanising Machine Intelligence (HMI) Research Project and the team of the Global Research Network.
AI for Social Good project was initiated in 2021 to study the impact of AI development and application in the region. The UNESCAP and APRU, with funding from Google.org, established a multi-stakeholder network to provide support in the development of country-specific AI governance frameworks and national capabilities.
July 28, 2023
Country Workshop Aims to Turn AI Research Results into Actionable Public Policy
Original post on NXPO.
NXPO, in partnership with the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP), the Association of Pacific Rim Universities (APRU), Google.org, Australian National University (ANU) and alliances in the Asia-Pacific region, organized a country workshop for the “AI for Social Good: Strengthening Capabilities and Governance Frameworks in Asia and the Pacific” project on 11 May 2023. The workshop brought together academics, researchers and policymakers to share information and opinions and to engage in discussion on turning research findings into policies.
NXPO Executive Strategist Dr. Kanchana Wanichkorn said in her opening statement that artificial intelligence (AI) has the potential to address multiple issues essential to national development. In the lead-up to Thailand’s 2023 general election, several political parties had declared policies promoting the use of advanced technologies such as AI to solve national problems, including social disparity and environmental degradation. In terms of institutional arrangements, the National AI Committee has already been established, and the Thailand National AI Strategy and Action Plan has been endorsed. The workshop was intended to explore policy development that maximizes AI benefits.
To design evidence-based policies that follow international practice, NXPO has collaborated with private entities and international partners. Two studies have been launched, one concerning AI in medicine and healthcare, and the other on AI application in poverty alleviation.
Results of the two projects, along with potential topics for further collaboration with state agencies, were presented by each research team. Following the presentations, breakout sessions were held with stakeholders and relevant agencies, namely NXPO, the National Science and Technology Development Agency, the Office of the National Economics and Social Development Council, the Department of Medical Services, and representatives from industry and alliances in the Asia-Pacific region, to provide suggestions for improvement to the research teams and to discuss how the findings can be turned into public policies.
The research teams will finalize the study reports with inputs from this country workshop. The final reports will be presented at a summit scheduled to take place on 9-11 July 2023 in Canberra, Australia.
May 16, 2023
New Joint Synthesis Report by APRU and hbs HK Shows Way Forward on Regulating AI
APRU is proud to announce the publication of the final synthesis report of the Regulating AI webinar series, organized jointly by the Hong Kong chapter of the Germany-based Heinrich-Böll-Stiftung (hbs HK) and APRU.
“Regulating AI: Debating Approaches and Perspectives from Asia and Europe” addresses key questions that surround the appropriate regulation of AI including: What constitutes an unacceptable risk? How does AI become explainable? How can data rights be protected without throttling AI’s potential?
The joint synthesis report comes at a critical time, as AI has been leaving the labs and is rapidly gaining footholds in our everyday lives. Millions of decisions – many of them invisible – are being driven by AI.
“The project facilitated a fruitful exchange of perspectives from Asia and Europe and allows us to better understand a wide range of emerging approaches to the regulation of AI in different parts of the world,” says Christina Schönleber, APRU’s Chief Strategy Officer and member of the Regulating AI webinar series working group.
Webinar 1 under the theme “Risk-based Approach of AI Regulation” was moderated by Zora Siebert (hbs Brussels) and featured Toby Walsh (University of New South Wales), Alexandra Geese (Member of European Parliament), and Jiro Kokuryo (Keio University) as speakers. The event highlighted that the EU’s proposed AI Act is taking a significant step in defining the types of AI with unacceptable risks, as well as how these can be clearly defined.
Webinar 2 under the theme “Explainable AI” was moderated by Kal Joffres (Tandemic) and brought in perspectives of Liz Sonenberg (University of Melbourne), Matthias Kettemann (Hans-Bredow-Institute / HIIG), and Brian Lim (National University of Singapore). Participants agreed that enabling humans to understand why a system makes a particular decision is key to fostering public trust.
Webinar 3 under the theme “Protection of Data Rights for Citizens and Users” was moderated by Axel Harneit-Sievers (hbs HK) with Sarah Chander (European Digital Rights), M Jae Moon (Yonsei University), and Sankha Som (Tata Consultancy Services) looking into various risks deriving from both under-regulation and over-regulation.
The synthesis report concludes that while governments are fully capable of banning or restricting entire categories of AI uses, the risks posed by AI are so context-sensitive that regulating them a priori and regardless of context is a blunt instrument.
The working group furthermore notes that policy discussions on AI have too often focused on individuals’ fundamental rights; they recommend that discussions should be rebalanced for greater consideration of the broader societal impacts of AI.
Finally, the synthesis report warns that policy discussions centred on the risks of AI can sometimes lose sight of the opportunities AI offers for creating a better future.
“AI has the potential to help address human biases in decision-making and deliver a level of explainability that many of today’s institutions cannot, from banks to government agencies,” the working group writes. “The opportunities of AI must be monitored and acted upon as rigorously as the risks.”
Find out more information about Regulating AI here.
Download the report here.
February 16, 2023
Workshop Reveals Impressive Progress on AI for Social Good Project
On 11 January 2023, NXPO in collaboration with the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP), the Association of Pacific Rim Universities (APRU), Australian National University (ANU) and partner entities organized the 2nd workshop of the “AI for Social Good: Strengthening Capabilities and Governance Frameworks in Asia and the Pacific” project, known in short as the AI for Social Good project. The virtual workshop aimed to review research progress and analyze in-depth information on the capabilities and governance frameworks that will effectively support the use of artificial intelligence (AI) for social good.
This activity is a follow-up to the 1st workshop held on 31 August 2022, in which the outlines of four research proposals were reviewed by experts and given feedback. The four proposals are 1) Responsible Data Sharing, AI Innovation and Sandbox Development: Recommendations for Digital Health Governance in Thailand, 2) Raising Awareness of the Importance of Data Sharing and Exchange to Advance Poverty Alleviation in Thailand, 3) AI in Pregnancy Monitoring: Technical Challenges for Bangladesh, and 4) Mobilizing AI for Maternal Health in Bangladesh. Since then, remarkable progress has been made on these four studies with the support of government agencies and the network of universities in the Asia-Pacific region.
NXPO Vice President Dr. Kanchana Wanichkorn, who leads the AI for Social Good project, expressed her appreciation to the four research teams and partner organizations for their commitment and dedication to the studies. She also underscored the importance of balancing academic excellence and real-world application in policy research.
The expert panel reviewing the progress and providing suggestions to the research projects consisted of Dr. Kommate Jitvanichphaibool, NXPO Senior Director of the Technology Foresight Division; Dr. Suttipong Thajchayapong, Leader of the Strategic Analytics Networks with Machine Learning and AI Research Team at the National Electronics and Computer Technology Center (NECTEC); and Dr. Warasinee Chaisangmongkon, faculty member of the Institute of Field Robotics, King Mongkut’s University of Technology Thonburi (FIBO-KMUTT).
For more information on this project, please visit here.
View the article in a Thai version here.
January 26, 2023
APRU on Bloomberg: The next stage: APRU-Google-UN ESCAP AI for Social Good Project now working directly with government agencies
Original post on Bloomberg.
The AI for Social Good Project – Strengthening AI Capabilities and Governing Frameworks in Asia and the Pacific has recently passed the milestone of onboarding two key government agencies.
The project is the latest collaboration between the Association of Pacific Rim Universities (APRU), UN ESCAP, and Google.org; it commenced in mid-2021 and will run until the end of 2023. Over the past year, meetings and workshops have been held with government agencies from Thailand and Bangladesh. The confirmed government partners joining the project are the Office of National Higher Education, Science, Research and Innovation Policy Council (NXPO) of Thailand, in close collaboration with the National Electronics and Computer Technology Center (NECTEC), the National Science and Technology Development Agency, and the Institute of Field Robotics (FIBO) under King Mongkut’s University of Technology Thonburi, and the Bangladesh Aspire to Innovate (a2i) Programme. NXPO and a2i are affiliated with Thailand’s Ministry of Higher Education, Science, Research and Innovation and with the ICT Division and Cabinet Division of Bangladesh, respectively.
The AI for Social Good multi-stakeholder network was initially set up in 2019, among the first milestones being the creation of a platform that convenes leading experts from the region to explore opportunities and challenges for maximizing AI benefits for society. After these activities engaged a wide range of policy experts and practitioners, the three project partners decided that it was the right time to move on to the next stage of working directly with government agencies to apply the insights generated through the collaborative project to date. The aim has been to work with government partners in Asia and the Pacific to grow sound and transparent AI ecosystems that support sustainable development goals.
“Recognizing that AI offers transformative solutions for achieving the SDGs, we are pleased to participate in the AI for Social Good Project to share experience and research insights to develop enabling AI policy frameworks,” said Dr. Kanchana Wanichkorn, NXPO’s Vice President.
NXPO identified ‘Poverty Alleviation’ and ‘Medicine and Healthcare’ as two areas of need that are now being tackled by two academic project teams. To alleviate poverty and inequality, the Thai government has developed data-driven decision-making systems to improve public access to state welfare programs. The project, under the academic leadership of the Australian National University (ANU) team, will focus on enhancing the human-centered design and public accessibility of these technologies to support successful implementation. In addition, research on AI for medical applications has increased exponentially in Thailand in the past few years. However, progress in developing and applying AI from research to market in these areas has been relatively slow. To support and accelerate the use of AI in medicine and healthcare, the expert team from the National University of Singapore (NUS) will focus their research and analysis on identifying crucial bottlenecks and gaps that impede the beneficial use of AI.
While the two Bangladesh projects both focus on the need for ‘Continuing and Personalized Pregnancy Monitoring’ (to improve health outcomes during and after birth), they explore different aspects of this key focus area for the government of Bangladesh. Under the leadership of the team from NUS & KAIST, the first project investigates challenges in perceptions and reception of incorporating AI into continuous pregnancy monitoring systems. Under the leadership of the University of Hawai‘i team, the second project focuses on technological issues in Bangladesh’s healthcare sector and their impacts on AI-based data analysis and decision-making processes.
The academic integrity of both sets of country projects is overseen by Toni Erskine, Professor of International Politics and Director of the Coral Bell School of Asia Pacific Affairs at ANU. Erskine guides both the conception of the research questions in collaboration with the government partners and the delivery of the project outputs by providing support for the four academic teams in developing their projects.
“It has been incredibly rewarding to lead a project that brings together such an impressive, multidisciplinary group of researchers with government agencies that are so passionate about finding solutions to crucial problems – ranging from poverty alleviation to maternal health care,” Erskine said. She added that “the process of working closely with government agencies from the outset to discuss these problems and co-design research questions makes this project unique and genuinely collaborative. I’m very proud to be part of it.”
The next steps for the ‘AI for Social Good Project: Strengthening AI Capabilities and Governing Frameworks in Asia and the Pacific’ will be to review and discuss the first complete drafts of the research papers by the four academic teams at a workshop in January. The partner government agencies from Bangladesh and Thailand will attend the workshop. Workshops with both government teams will also follow the presentation of the final papers in the second quarter of 2023. To mark the project’s conclusion, a summit with all project participants will be held in mid-2023 at the Australian National University.
More: APRU AI for Social Good
November 28, 2022
APRU and Government Partners Organize Workshop to Strengthen AI Policy in the Asia-Pacific Region
On 31 August 2022, the Office of National Higher Education, Science, Research and Innovation Policy Council (NXPO) of Thailand, in close collaboration with the National Electronics and Computer Technology Center (NECTEC), the National Science and Technology Development Agency, and the Institute of Field Robotics (FIBO) under King Mongkut’s University of Technology Thonburi, co-hosted a workshop to review research proposals driving the “AI for Social Good: Strengthening Capabilities and Governance Frameworks in Asia and the Pacific” project. Co-hosts of the event included the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP), the Association of Pacific Rim Universities (APRU), Google.org, Australian National University (ANU), and leading universities and research institutes in Thailand and abroad.
In this workshop, four AI policy research proposals were presented and reviewed by the experts. The four proposals are: 1) AI in Pregnancy Monitoring: Technical Challenges for Bangladesh, 2) Mobilizing AI for Maternal Health in Bangladesh, 3) Responsible Data Sharing, AI Innovation and Sandbox Development: Recommendations for Digital Health Governance in Thailand, and 4) Raising Awareness of the Importance of Data Sharing and Exchange to Advance Poverty Alleviation in Thailand.
NXPO Policy Specialist Dr. Soontharee Namliwal presented the background and importance of this project in Thailand. She then introduced the project members from Thailand, namely NXPO, NECTEC, and FIBO under King Mongkut’s University of Technology Thonburi.
Dr. Kommate Jitvanichphaibool, NXPO Senior Division Director, and Dr. Suttipong Thajchayapong, Leader of the NECTEC Strategic Analytics Networks with Machine Learning and AI Research Team, provided additional information on the research and application of AI in Thailand, namely 1) the poverty alleviation policy, 2) the healthcare system and guidelines for data collection, and 3) the Personal Data Protection Act B.E. 2562 and the policy and guidelines for personal data protection. The experts also offered useful suggestions to the two projects submitted by Thailand to improve their coverage and maximize the benefits to countries in the Asia-Pacific region.
Initiated in 2021, AI for Social Good: Strengthening Capabilities and Governance Frameworks in Asia and the Pacific is a collaboration between UNESCAP, APRU and partners. Under this project, UNESCAP and APRU, with funding from Google.org, established a multi-stakeholder network to support the development of country-specific AI governance frameworks and national capabilities. For more information on this project, please visit here.
View the article in a Thai version here.
September 6, 2022
No Easy Answers on Protection of AI Data Rights, Webinar by HBS and APRU Shows
On June 15, a webinar held jointly by the Hong Kong office of the Heinrich Böll Stiftung (HBS) and the Association of Pacific Rim Universities (APRU), a consortium of leading research universities in 19 economies of the Pacific Rim, highlighted the complexity of data rights for citizens and users, with risks deriving from both under-regulation and over-regulation of AI applications.
The webinar held under the theme Protection of Data Rights for Citizens and Users completed a joint hbs-APRU series consisting of three webinars on regulating AI. The series came against the backdrop of ever more AI-based systems leaving the laboratory stage and entering our everyday lives. While AI enables private sector enterprises and governments to collect, store, access, and analyse data that influence crucial aspects of life, the challenge for regulators is to strike a balance between data rights of users and the rights for enterprises and governments to make use of AI to improve their services.
The webinar’s three speakers representing an NGO network, academia and the private sector explained that the fair use of personal data should be protected while abusive manipulation and surveillance should be limited. Conversely, regulators should leave reasonable room for robust innovation and effective business strategies and facilitate effective operation of government bureaus to deliver public services.
“We not only talk about the use of personal data but also a broader range of fundamental rights, such as rights to social protection, non-discrimination and freedom of expression,” said Sarah Chander, Senior Policy Adviser at European Digital Rights (EDRi), a Brussels-based advocacy group leading the work on AI policy and specifically the EU AI Act.
“Besides these rights in an individual sense, we have also been looking into AI systems’ impact on our society, impact on broader forms of marginalization, potential invasiveness, as well as economic and social justice, and the starting point of our talks with the different stakeholders is the question of how we can empower the people in this context,” she added.
M. Jae Moon, Underwood Distinguished Professor and Director of the Institute for Future Government at Yonsei University, whose research focuses on digital government, explained that governments are increasingly driven to implement AI systems by their desire to improve evidence-based policy decision-making.
“The availability of personal data is very important to make good decisions for public interest, and, of course, privacy protection and data security should always be ensured,” Moon said.
“The citizens, for their part, are increasingly demanding customized and targeted public services, and the balancing of these two sides’ demands requires good social consensus,” he added.
Moon went on to emphasize that citizens after consenting to the use of their private data by the government should be able to track the data usage while also being able to withdraw their consent.
Sankha Som, Chief Innovation Evangelist of Tata Consultancy Services, explained that the terms Big Data and AI are often intertwined despite describing very different things. According to Som, Big Data is about managing the input side of AI and drawing insights from the data, whereas AI is about predictions and decision-making.
“If you look at how AI systems are built today, there are several different Big Data approaches used on the input side, but there are also processing steps such as data labelling which are AI-specific; and many issues related to AI actually come from these processing steps,” Som said.
“Biases can, intentionally or unintentionally, cause long-term harm to individuals and groups, and they can creep into these processes, so it will take regulation not only on the use of input data but also on end use, while at the same time complying with enterprise-specific policies,” he added.
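To illustrate the kind of labelling-stage bias Som describes, the following minimal sketch is purely hypothetical: the synthetic data, the sensitive attribute, and the biased labelling rule are all invented, but they show how a bias introduced when data is labelled survives into a trained model’s predictions.

```python
# Hypothetical sketch: bias introduced at the data-labelling step
# (one of the AI-specific processing steps Som mentions) propagates
# into a trained model. All data and the labelling rule are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)   # a sensitive attribute (0 or 1)
merit = rng.normal(size=n)           # the quantity labels should reflect

# Biased labelling: annotators systematically under-label group 1.
labels = ((merit - 0.8 * group) > 0).astype(int)

X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, labels)

# The model reproduces the labelling bias: identical merit, different scores.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
```

Regulating only the raw input data would miss this failure mode, which is Som’s point: the bias here enters at the labelling step, after the data is collected.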
The webinar was moderated by Dr. Axel Harneit-Sievers, Director, Heinrich Böll Stiftung Hong Kong Office. The series’ previous two webinars were held in May under the themes Risk-based Approach of AI Regulation and Explainable AI.
More information
Listen to the recording here.
Find out more about the webinar series here.
Contact Us
Lucia Siu
Programme Manager, Heinrich Böll Stiftung, Hong Kong, Asia | Global Dialogue
Email: Lucia.Siu [at] hk.boell.org
Christina Schönleber
Senior Director, Policy and Research Programs, APRU
Email: policyprograms [at] apru.org
June 27, 2022
Webinar by Heinrich Böll Stiftung and APRU takes deep dive into Explainable AI
On May 25, a webinar held jointly by the Hong Kong office of the Heinrich Böll Stiftung (hbs) and the Association of Pacific Rim Universities (APRU) highlighted that many of the algorithms that run artificial intelligence (AI) are shrouded by opaqueness, with expert speakers identifying approaches in making AI much more explainable than it is today.
The webinar held under the theme Explainable AI was the second in a joint hbs-APRU series of three webinars on regulating AI. The series comes against the backdrop of ever more AI-based systems leaving the laboratory stage and entering our everyday lives.
While AI algorithmic designs can enhance the robustness and predictive accuracy of applications, they may involve assumptions, priorities and principles that have not been openly explained to users and operations managers. The proposals of “explainable AI” and “trustworthy AI” are initiatives that seek to foster public trust, informed consent and fair use of AI applications. They also seek to counter algorithmic bias that may work against the interests of underprivileged social groups.
“There are many AI success stories, but algorithms are trained on datasets and proxies, and developers too often and unintentionally use datasets with poor representation of the relevant population,” said Liz Sonenberg, Professor of Information Systems at the University of Melbourne, who featured as one of the webinar’s three speakers.
“Explainable AI enables humans to understand why a system decides in certain way, which is the first step to question its fairness,” she added.
Sonenberg explained that the use of AI to advise a judicial decision maker of a criminal defendant’s risk of recidivism, for instance, is a development that should be subject to careful scrutiny. Studies of one existing such AI system suggest that it offers racially biased advice, and while this proposition is contested by others, these concerns raise the important issue of how to ensure fairness.
Matthias C. Kettemann, head of the Department for Theory and Future of Law at the University of Innsbruck, pointed out that decisions on AI systems’ explanations should not be left solely to lawyers, technicians or program designers. Rather, he said, the explanations should be made with a holistic approach that investigates what sorts of information people really need.
“The people do not need to know all the parameters that shape an AI system’s decision, but they need to know what aspects of the available data influenced those decisions and what can be done about it,” Kettemann said.
“We all have the right of justification if a state or machine influences the way rights and goods are distributed between individuals and societies, and in the next few years, one of the key challenges will be to nurture Explainable AI so that people do not feel powerless against AI-based decisions,” he added.
Brian Lim, Assistant Professor in the Department of Computer Science at the National University of Singapore (NUS), explores in his research how to improve the usability of explainable AI by modeling human factors and applying AI to improve decision-making and user engagement towards healthier and safer lifestyles.
Speaking at the webinar, Lim explained that one of the earliest uses of Explainable AI is to identify problems in the available data. Then, he said, the user can investigate whether the AI reasons in a way that follows the standards and conventions in the concerned domain.
“Decisions in the medical domain, for instance, are important because they are a matter of life and death, and the AI should be like the doctors who understand the underlying biological processes and causes of mechanisms,” Lim said.
“Explainable AI can help people to interpret their data and situation to find reasonable, justifiable and defensible answers,” he added.
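As a toy illustration of what such explanations can look like, the sketch below uses a linear model’s per-feature contributions to show what drove a single prediction; the feature names and data are invented, and real systems rely on far richer explanation methods than this.

```python
# Hypothetical sketch of the core idea the speakers describe: surface
# which inputs drove a prediction so a human can question it. Here a
# linear model's weighted contributions serve as the explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["age", "prior_visits", "postcode_risk_proxy"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 2.0 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

x = X[0]
contributions = model.coef_[0] * x  # per-feature contribution to the logit
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>22}: {c:+.2f}")
# A large contribution from a proxy feature (e.g. postcode) is exactly
# the kind of red flag that explanation methods let reviewers catch.
```

A reviewer seeing most of the weight land on a proxy feature can then ask the fairness question Sonenberg raises, which is the first step explainability is meant to enable.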
The final webinar will be held on June 15 under the theme Protection of Data Rights for Citizens and Users. The event will address the challenges for regulators in striking a balance between data rights of citizens, and the rights for enterprises and states to make use of data in AI.
More information
Listen to the recording here.
Find out more about the webinar series here.
Register for the June 15th session here.
Contact Us
Lucia Siu
Programme Manager, Heinrich Böll Stiftung, Hong Kong, Asia | Global Dialogue
Email: Lucia.Siu [at] hk.boell.org
Christina Schönleber
Senior Director, Policy and Research Programs, APRU
Email: policyprograms [at] apru.org
June 1, 2022
Heinrich Böll Stiftung and APRU Discuss Risk-based Governance of AI in First Joint Webinar
The Hong Kong office of the Germany-based Heinrich Böll Stiftung (hbs) and APRU successfully concluded the first in a series of three webinars on regulating artificial intelligence (AI).
Held on May 5 under the theme Risk-based Approach to AI Regulation, the event constituted a valuable Asia-Europe platform for the exchange of insights on the risks that are associated with AI and the appropriate regulatory responses.
The webinar series comes against the backdrop of AI reaching a stage of maturity and extensive application across supply chains, public governance, media and entertainment. While industries and societies are quick in the uptake of AI, governments struggle to develop appropriate regulatory frameworks to prevent immense possible harm resulting from mismanaged AI.
APRU has been pursuing debates in the field of AI policies and ethics since 2016, and APRU in collaboration with UN ESCAP and Google has set up the AI for Social Good network.
“This joint webinar series comes at the perfect time to bring together experts from Europe and leading thinkers from the highly diverse Asia Pacific region. We are looking to apply what we have learned to actively support the development and implementation of regulatory frameworks and policies that ensure that AI technology is used for the good of society,” said APRU Secretary General Chris Tremewan, emphasizing the importance of collaboration across regional boundaries.
The webinar was moderated by Zora Siebert, Head of Programme, EU Democracy and Digital Policy, Heinrich Böll Stiftung European Union. Siebert pointed out that the European Commission unveiled its draft AI Act (AIA) in April 2021, accelerating an active shaping process in the European Parliament. Siebert noted that policymakers in the U.S. and the EU have been keen to align on AI policy, with both sides wishing to enhance international cooperation.
Toby Walsh, Scientia Professor of Artificial Intelligence, University of New South Wales, explained that AI can hardly be regulated in a generic way but will require novel regulative approaches instead.
“Since AI is a platform, it is going to be much like electricity that is in all our devices, and there is no generic way to regulate electricity,” Walsh said.
“The EU AI Act will set an important precedent, but it will depend on how it is going to be implemented and on the sorts of expertise the EU is going to have, because the people who are going to be regulated have vast resources,” he added.
Alexandra Geese, Member of the European Parliament for the Greens EFA and coordinator for the Greens EFA in the AI in the Digital Age Special Committee (AIDA), picked up on Walsh’s electricity metaphor, stressing that “we want to be the ones who switch the lights on and off, as opposed to leaving the decisions to the machines.”
Jiro Kokuryo, Professor at the Faculty of Policy Management at Keio University in Japan, provided an alternative perspective from East Asia, explaining that society and technology should be allowed to co-evolve rather than be forced into a static process.
“Nevertheless, Japan aligns completely with the EU in terms of human rights protection, and the EU’s risk-based approach is also agreeable,” Kokuryo said.
The second webinar will be held on May 25 on the topic Explainable AI. The proposals of “explainable AI” and “trustworthy AI” are initiatives to create AI applications that are transparent, interpretable, and explainable to users and operations managers.
The final webinar will be held on June 15 on the topic Protection of Data Rights for Citizens and Users. The webinar will address the challenges for regulators in striking a balance between data rights of citizens, and the rights for enterprises and states to make use of data in AI.
More information
Listen to the recording here.
Find out more about the webinar series here.
Register for the May 25th session here.
Contact Us
Lucia Siu
Programme Manager, Heinrich Böll Stiftung, Hong Kong, Asia | Global Dialogue
Email: Lucia.Siu [at] hk.boell.org
Christina Schönleber
Senior Director, Policy and Research Programs, APRU
Email: policyprograms [at] apru.org
May 12, 2022
APRU on China Daily: Your seat at the table depends on how innovative you are
Original post in China Daily.
Innovate or perish is the new slogan. If you don’t innovate, you don’t invent and if you don’t invent you are out of the race. Gone are the days of captive consumption in an isolated world. Today, we are talking about global economies that transcend borders and if you have nothing new on the plate, you are doomed.
A few days back, there were reports that technological innovation is going to see renewed impetus in China. The State Council has said that the government will publish a list of core scientific projects and seek voluntary help from researchers on them. In addition, it will also look at developing policy tools to more efficiently select and allocate funding to potentially groundbreaking research projects.
In a nutshell, what this means is that the Chinese government is not only planning to seek the help of the private sector, but also allocating more resources to emerging new technologies to unlock new growth strategies, say experts.
Nidhi Gupta, a senior technology analyst at GlobalData, a UK-based data and analytics company, tells me that China’s technological advances in recent years can largely be attributed to the government’s proactive policies and strategies.
“China has been promoting the development and use of emerging technologies through a supportive policy framework, setting up large-scale funding of research, and attractive incentives for tech entrepreneurs. The country has also put multiyear strategies in place to upgrade its digital infrastructure and achieve technology independence. In addition, the government’s five-year plans for science and technology innovation and ‘Made in China 2025’ have been instrumental in driving its ascendancy on the innovation front,” says Nidhi.
Belunn Se, an industry observer based in Shenzhen, Guangdong province, tells me that technology innovation is necessary for China to vitalize its domestic economy and reinforce industry strength. It will also help the country as it moves up the value chain and bolsters its supply chains.
Several stakeholders need to be involved in a systematic manner for the success of tech innovation, he says. The primary role must be played by the government as an organizer of resources, guide and supervisor. Colleges and universities are also necessary for fundamental scientific research and development, and talent cultivation. Top academic research institutes can play a big role in China’s efforts to reduce its dependence on external sources for cutting-edge technologies like semiconductor production equipment, he says. Policies should also focus on improving funding avenues for tech firms and scaling up their commercialization through market mechanisms.
“It is important to ensure that elementary education and basic sciences play a crucial role in fostering innovation,” says Se.
Christopher Tremewan, secretary general of APRU, a consortium of 56 leading universities headquartered in Hong Kong, tells me that as countries commit more resources to technological innovation, it is important to ensure that new discoveries are directed at the common challenges.
“Techno-nationalism will fall short of solving global crises. It is the universities that do much of the fundamental research that lies behind solutions. Organizations like the APRU are the neutral platforms for cooperation among major research universities across international borders, basically, as a forum that builds trust and a renewed commitment to multilateralism.”
Tremewan says that universities in Hong Kong are already playing a pivotal role in using their research expertise to foster technological innovation. In the Asia-Pacific region, universities are vital in understanding and preparing for complex problems from extreme climate events to the COVID-19 pandemic. The key, though, is to leverage the best research and ensure that the increases in public funding have maximum impact for the common good, thereby building trust and cooperation internationally, he says.
China’s 14th Five-Year Plan (2021-25), which is due to be ratified by the National People’s Congress, is expected to give top priority to science, technology and innovation, and recognize them as critical to achieving technology self-reliance. The plan is based on dual circulation with the emphasis on internal circulation: domestic technology development, production, and consumption.
“With this new five-year plan, China is marking a strategic shift in priorities towards national and industrial security and is set to become increasingly self-sufficient technologically and less reliant on exporting to the West,” says Nidhi from GlobalData.
While the draft plan does not specify which technologies will gain focus over the next five years, it makes clear that investments in technology will continue to grow and will focus on frontier fields like artificial intelligence, integrated circuits, aerospace technology, quantum computing, and deep earth and sea exploration, adds Nidhi.
China has already done well in pioneering and upgrading innovation, like high-speed railways and some 5G-enabled technologies. But in the long term, fundamental breakthroughs are necessary, as only such moves can trigger profound effects on the economy and industry, much like how the invention of electricity and computers changed human life, says Se.
March 27, 2021
APRU on The Business Times: Safeguarding Our Future With AI Will Need More Regulations
Original post in The Business Times.
More has to be done to ensure that AI is used for social good.
A SILVER lining emerging from Covid-19’s social and economic fallout is the unprecedented application of artificial intelligence (AI) and Big Data technology to aid recovery and enable governments and companies to effectively operate. However, as AI and Big Data are rapidly adopted, their evolution is far outpacing regulatory processes for social equity, privacy, and political accountability, fuelling concern about their possible predatory use.
Whether contributing to essential R&D for coronavirus diagnostic tools or helping retailers and manufacturers transform their processes and the global supply chain, AI’s impressive achievements do not fully allay anxieties about its perceived dark side.
Public concern about the threats of AI and Big Data ranges from privacy breaches to dystopian takes on the future that anticipate a technological singularity. Meanwhile, there is fairly strong sentiment that tech giants like Facebook, Amazon and Apple have too much unaccountable power. Amid rising antitrust actions in the US and legislative pushback in Europe, other firms like Microsoft, Alibaba and Tencent also risk facing similar accusations.
Despite their advancements, breakthrough technologies always engender turbulence. The pervasiveness of AI across all aspects of life, and its control by elites, raises the question of how to ensure its use for social good.
For the ordinary citizen, justifiable suspicion of corporate motives can also render them prey to misinformation. Multilateral organisations have played critical roles in countering false claims and building public trust, but there is more to be done.
AI FOR SOCIAL GOOD
Against this backdrop, APRU (the Association of Pacific Rim Universities), the United Nations ESCAP and Google came together in 2018 to launch an AI for Social Good partnership to bridge the gap between the growing AI research ecosystem and the limited study into AI’s potential to positively transform economies and societies.
Led by Keio University in Japan, the project released its first flagship report in September 2020 with assessments of the current situation and the first-ever research-based policy recommendations on how governments, companies and universities can develop AI responsibly.
Together they concluded that countries effective in establishing enabling policy environments for AI that both protect against possible risks and leverage it for social and environmental good will be positioned to make considerable leaps towards the Sustainable Development Goals (SDGs). These include providing universal healthcare, ensuring a liveable planet, and decent work opportunities for all.
However, countries that do not create this enabling environment risk forgoing the potential upsides of AI and may also bear the brunt of its destructive and destabilising effects: from weaponised misinformation, to escalating inequalities arising from unequal opportunities, to the rapid displacement of entire industries and job classes.
WAY FORWARD
Understanding the long-term implications of fast-moving technologies and effectively calibrating risks are critical to advancing AI development. Preventing bias and unfair outcomes produced by AI systems is a top priority, while government and private sector stakeholders should address the balance between data privacy, open data and AI growth.
For governments, it will be tricky to navigate this mix. The risk is that sluggish policy responses will make it impossible to catch up with AI’s increasingly rapid development. We recommend governments establish a lead public agency to guard against policy blind spots. These lead agencies will encourage “data loops” that provide feedback to users on how their data are being used and thus facilitate agile regulation. This is necessary because of AI’s inherently fast-changing nature and the emergence of aspects that may not have been obvious even weeks or months earlier.
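To make the “data loop” idea concrete, here is a minimal, hypothetical sketch of a usage ledger that records each access to a person’s data and plays it back to them; all names, types, and values are invented for illustration rather than drawn from any actual government system.

```python
# Hypothetical sketch of a "data loop": every use of a person's data is
# logged and can be played back to that person, giving citizens and
# regulators the feedback channel the recommendation describes.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataUseEvent:
    purpose: str
    agency: str
    timestamp: str

@dataclass
class DataLoopLedger:
    events: dict[str, list[DataUseEvent]] = field(default_factory=dict)

    def record_use(self, citizen_id: str, purpose: str, agency: str) -> None:
        """Append an auditable record each time the data is accessed."""
        event = DataUseEvent(purpose, agency, datetime.now(timezone.utc).isoformat())
        self.events.setdefault(citizen_id, []).append(event)

    def report_for(self, citizen_id: str) -> list[DataUseEvent]:
        """The feedback half of the loop: show a citizen how their data was used."""
        return self.events.get(citizen_id, [])

ledger = DataLoopLedger()
ledger.record_use("TH-1234567", "welfare eligibility check", "pilot agency")
for e in ledger.report_for("TH-1234567"):
    print(e)
```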
Another important ability that governments must acquire is the ability to negotiate with interest groups and weigh ethical considerations. Otherwise, the progress of promising socially and environmentally beneficial AI applications, ranging from innovative medical procedures to new transportation options, can be blocked by vested interests or by a poor understanding of the trade-offs between privacy and social impact.
Governments should also strengthen their ability to build and retain local technical know-how. This is essential, given that AI superpower countries are built on a critical mass of technical talent that has been trained, attracted to the country, and retained.
DIASPORA OF TALENT
Fortunately, many countries in Asia have a diaspora of talent who have trained in AI at leading universities and worked with leading AI firms. China has shown how to target and attract these overseas Chinese to return home by showcasing economic opportunities and building confidence in the prospects of a successful career and livelihood.
Ultimately, for any emerging technology to be successful, gaining and maintaining public trust is crucial. Covid-19 contact tracing applications are a good case in point, as transparency is key to gaining and maintaining public trust in their deployment. With increased concerns about data privacy, governments can explain to the public the benefits and details of how the tracing application technology works, as well as the relevant privacy policy and law that protects data.
To deal with the use and misuse of advanced technologies such as AI, we need renewed commitment to multilateralism and neutral platforms on which to address critical challenges.
At the next level, the United Nations recently launched Verified, an initiative aimed at delivering trusted information, advice and stories focused on the best of humanity and opportunities to ‘build back better’, in line with the SDGs and the Paris agreement on climate change. It also invites the public to help counter the spread of Covid-19 misinformation by sharing factual advice with their communities.
The education sector is playing its part to facilitate exchange of ideas among thought leaders, researchers, and policymakers to contribute to the international public policy process. I am hopeful that universities will be able to partner with government, the private sector and the community at large in constructing a technological ecosystem serving the social good.
The writer is secretary general of APRU (the Association of Pacific Rim Universities)
March 18, 2021
APRU on South China Morning Post: Governments, business and academia must join hands to build trust in AI’s potential for good
By Christopher Tremewan
Original post in SCMP.
Concerns about the predatory use of technology, privacy intrusions and worsening social inequalities must be jointly addressed by all stakeholders in society – through sensible regulations, sound ethical norms and international collaboration.
In September, it was reported that Zhu Songchun, an expert in artificial intelligence at UCLA, had been recruited by Peking University. It was seen as part of the Chinese government’s strategy to become a global leader in AI, amid competition with the US for technological dominance.
In the West, a new US administration has been elected amid anxiety about cyber interference. Tech giants Apple, Facebook, Amazon and Google are facing antitrust accusations in the US, while the European Union has unveiled sweeping legislation to enable regulators to head off bad behaviour by big tech before it happens.
Meanwhile, Shoshana Zuboff’s bestselling book The Age of Surveillance Capitalism has alerted social media users to a new economic order that “claims human experience as free raw material for hidden commercial practices”.
In addition, the public is regularly bombarded with dystopian scenarios (like in Black Mirror) about intelligent machines taking control of society, often at the service of ruling elites or criminals.
The dual character of AI – its promise for social good and its threat to human society through absolute control – has been a familiar theme for some time. Also, AI systems are evolving rapidly, outpacing regulatory processes for social equity and privacy.
Especially during a pandemic, the urgent question facing governments, the private sector and universities is how to promote public trust in the beneficial side of AI technologies. One way to build public trust is to deliver for the global common good, beyond national or corporate self-interest.
With the world facing crises ranging from the current pandemic to worsening inequalities and the massive effects of climate change, it is obvious that no single country can solve any of them alone.
The technological advances of AI already hold out promise in everything from medical diagnosis and drug development to creating smart cities and transitioning to a renewable-energy economy. MIT has reportedly developed an app that can immediately diagnose 98.5 per cent of Covid-19 infections by people just coughing into their phones.
A recent report on “AI for Social Good”, co-authored by the UN, Google and the Association of Pacific Rim Universities, concluded that AI can help us “build back better” and improve the quality of life. But it also said “the realisation of social good by AI is effective only when the government adequately sets rules for appropriate use of data”.
With respect to limiting intrusions on individual rights, it said that “the challenge is how to balance the reduction of human rights abuses while not suffocating the beneficial uses”.
These observations go to the core of the problem. Are governments accountable in real ways to their citizens or are they more aligned with the interests of hi-tech monopolies? Who owns the new AI technologies? Are they used for concentrating power and wealth or do they benefit those most in need of them?
The report recommends that governments develop abilities for agile regulation; for negotiation with interest groups to establish ethical norms; for leveraging the private sector for social and environmental good; and to build and retain local know-how.
While these issues will be approached in different ways in each country, international collaboration will be essential. International organisations, globally connected social movements as well as enhanced political participation by informed citizens will be critical in shaping the environment for regulation in the public interest.
At the same time, geopolitical rivalry need not constrain our building of trust and cooperation for the common good.
The Covid-19 crisis has shown that it is possible for governments to move decisively towards the public interest and align new technologies to solutions that benefit everyone. We should not forget that, in January, a team of Chinese and Australian researchers published the first genome of the new virus and the genetic map was made accessible to researchers worldwide.
International organisations such as the World Health Organization and international collaborations by biomedical researchers also play critical roles in building public trust and countering false information.
Universities have played an important role in advancing research cooperation with the corporate sector and in bolstering public confidence that global access takes priority over the profit motive of Big Pharma.
For example, the vaccine developed by Oxford University and AstraZeneca will be made available at cost to developing countries and can be distributed without the need for special freezers.
Peking University and UCLA are cooperating with the National University of Singapore and the University of Sydney to exchange best practices on Covid-19 crisis management.
Competition for international dominance in AI applications also fades as we focus on applying its beneficial uses to common challenges. Global frameworks for cooperation such as the UN 2030 Agenda for Sustainable Development or the Paris Climate Agreement set out the tasks.
Google, for example, has established partnerships with universities and government labs for advanced weather and climate prediction, with one project focusing on communities in India and Bangladesh vulnerable to flooding.
To deal with the use and misuse of advanced technologies like AI, we need a renewed commitment to multilateralism and to neutral platforms on which to address critical challenges.
Universities that collectively exercise independent ethical leadership internationally can also, through external partnerships, help to shape national regulatory regimes for AI that are responsive to the public interest.
Find out more about the UN ESCAP-APRU-Google AI for Social Good project here.
December 31, 2020
APRU on Times Higher Education: ‘Oversight needed’ so AI can be used for good in Asia-Pacific
By Joyce Lau
Original post in THE.
Academics urge governments to set up frameworks for ethical use of technology and reaffirm the need for greater multidisciplinarity
Asia-Pacific universities could use artificial intelligence to harness their strengths in combating epidemics and other global problems, but only if there were regulatory frameworks to ensure ethical use, experts said.
Artificial Intelligence for Social Good, a nearly 300-page report by academics in Australia, Hong Kong, India, Singapore, South Korea and Thailand, was launched at an event held by the Association of Pacific Rim Universities (APRU), the United Nations’ Economic and Social Commission for Asia and the Pacific (ESCAP) and Google. The research, co-published by APRU and Keio University in Japan, laid out recommendations for using AI in the region to achieve the UN’s sustainable development goals (SDGs).
While the report outlined the great potential for AI in the region, it also said that risks must be managed, privacy concerns must be addressed and testing must be conducted before large-scale technology projects were implemented.
Christopher Tremewan, APRU’s secretary general and a former vice-president at the University of Auckland, said that Pacific Rim universities “have incredible research depth in the challenges facing this region, from extreme climate events and the global Covid-19 pandemic to complex cross-border problems. Their collective expertise and AI innovation makes a powerful contribution to our societies and our planet’s health.”
However, he also said there were potential problems with “rapid technological changes rolled out amid inequality and heightened international tensions”.
“As educators, we know that technology is not neutral and that public accountability at all levels is vital,” he said.
APRU, which includes 56 research universities in Asia, Australasia and on the west coast of the Americas, is based at the Hong Kong University of Science and Technology.
In answering questions, Dr Tremewan drew on his own observations in New Zealand and Hong Kong, two places where Covid responses have been lauded.
“The feeling in Hong Kong is that there is tremendous experience from Sars,” he said, referring to the 2003 epidemic. “The universities here have capability in medical research, particularly on the structure of this type of disease, and also in public health strategy.”
Meanwhile, in New Zealand, “confidence in science” and the prominence of researchers and experts speaking out aided the public response.
“Universities are playing key roles locally and internationally,” he said, adding that expertise was also needed in policy, communications and social behaviour. “The solutions are multidisciplinary, not only technological or medical.”
Soraj Hongladarom, director of the Center for Ethics of Science and Technology at Chulalongkorn University in Bangkok, and one of the authors of the report, said their work had “broken new ground” in Asia.
“We’re trying to focus on the cultural context of AI, which hasn’t been done very much in an academic context,” he said.
Professor Hongladarom, a philosopher, urged greater interdisciplinarity in tackling social problems.
“Engineers and computer scientists must work with social scientists, anthropologists and philosophers to look beyond the purely technical side of AI – but also at its social, cultural and political aspects,” he said.
He added that policy and regulation were vital in keeping control over technology: “Every government must take action – it’s particularly important in South-east Asia.”
Dr Tremewan said that, aside from crossing disciplinary boundaries, AI also had to cross national borders. “Universities have huge social power in their local contexts. So how do we bring that influence internationally?” he asked.
Find out more about the UN ESCAP-APRU-Google AI for Social Good project here.
November 12, 2020
APRU releases AI for Social Good report in partnership with UN ESCAP and Google: Report calls for AI innovation to aid post-COVID recovery
Hong Kong, November 10, 2020 – APRU partners with UN ESCAP and Google to launch the AI for Social Good report. This is the third project exploring AI’s impact on Asia-Pacific societies to offer research-based recommendations to policymakers, focusing on how AI can empower work towards the 2030 UN Sustainable Development Goals.
With COVID-19’s ongoing social and economic fallout, the role of AI is even more pronounced in aiding recovery. Researchers’ insights underpin the report’s recommendations for developing an environment and governance framework conducive to AI for Social Good – a term encompassing increasingly rapid technological changes occurring amidst inequality, the urgent transition to renewable energy and unexpected international tensions.
Chris Tremewan, Secretary General of APRU commented, “APRU members have incredible research depth in the challenges facing this region, from extreme climate events and the global COVID-19 pandemic to complex cross-border problems. Bringing their expertise and AI innovation together in a collective effort will make a powerful contribution to our societies and the health of the planet.”
Jonathan Wong, Chief of Technology and Innovation, United Nations ESCAP said, “We designed the 2030 UN Sustainable Development Goals with a strong commitment to harness AI in support of inclusive and sustainable development while mitigating its risks. Public policies play a critical role in promoting AI for social good while motivating governments to regulate AI development and applications so that they contribute to aspirations of a sustainable future.”
Dan Altman, AI Public Policy, Google shared, “Google and APRU share the belief that AI innovation can meaningfully improve people’s lives. Google introduced the AI for Social Good program to focus our AI expertise on solving humanitarian and environmental challenges. Google is excited to be working with experts across all sectors to create solutions that make the biggest impact.”
The report’s multidisciplinary studies provide the knowledge and perspectives of researchers from Singapore, Hong Kong, Korea, Thailand, India, and Australia. Combining local understanding with international outlook is essential for policymakers to respond with regulation that enables international tech firms to contribute to the common good.
Here are the key recommendations:
Multi-stakeholder governance must push innovation to realize AI’s full potential
In addition to overseeing major players controlling data, governance must take manageable risks and conduct controlled testing before large-scale tech implementation.
Establish standardized data formats and interoperability
Information asymmetries create inequities; therefore, standardized data formats and interoperability between systems are critical.
Address data privacy concerns and protect individual dignity
Data needs anonymization, encryption, and distributed approaches, as illustrated in the sketch after this list. Governments must enforce the protection of privacy and individual dignity. Incorporating the Asian values of altruism in data governance can also help encourage data sharing for the social good.
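As a small, hypothetical illustration of the anonymization point above, the sketch below replaces a direct identifier with a salted hash (pseudonymization) before a record is shared. The field names, values, and salt handling are invented for the example; a real deployment would also need key management, access controls, and a re-identification risk assessment.

```python
# Hypothetical sketch of one anonymization technique the recommendation
# points to: replacing direct identifiers with salted hashes before data
# is shared. All field names and values are invented for illustration.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # kept secret by the data custodian

def pseudonymize(identifier: str) -> str:
    """Return a stable, non-reversible token for a direct identifier."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

record = {"citizen_id": "TH-1234567", "province": "Chiang Mai", "income": 9200}
shared = {**record, "citizen_id": pseudonymize(record["citizen_id"])}
print(shared)
```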
November is “AI for Social Good Month” featuring investigative discussions, conversations, and policy briefings with leading AI thinkers and doers from Asia and beyond. Visit the Summit here.
View the original release here.
Media contact: [email protected] / [email protected]
November 10, 2020
AI for Social Good network releases new report
AI For Social Good, a partnership between APRU, UN ESCAP and Google, released a new report exploring the impact of AI on societies in the Asia-Pacific region and offering research-based recommendations to policymakers.
Providing the perspectives of multidisciplinary researchers from Singapore, Hong Kong, Korea, Thailand, India, and Australia, each chapter of the report presents a research-based policy paper offering key conclusions and policy suggestions to support and inform policymakers and policy influencers.
The report seeks to inform the development of governance frameworks that can help address the risks and challenges associated with AI while maximizing the technology’s potential to be developed and used for good. It also furthers understanding of the enabling environment in which policymakers can promote the growth of an AI for Social Good ecosystem in their respective countries, both in terms of AI inputs (e.g., data, computing power, and AI expertise) and in ensuring that the benefits of AI are shared widely across society.
The AI for Social Good network was launched in December 2018 under the academic lead of Keio University Vice-President, Jiro Kokuryo. It aims to provide a multi-year platform to enable scholars and experts to collaborate with policymakers to generate evidence and cross-border connections.
“We worked very hard to come up with a set of recommendations that will make AI truly contribute to the well-being of humankind. I hope this voice from Asia will be heard not only within the region, but by people around the world,” said Kokuryo.
“Governments are encouraged to invest in promoting AI solutions and skills that bring greater social good and help us ‘build back better’ as we recover from the impacts of the COVID-19 pandemic,” said Mia Mikic, Director of the United Nations ESCAP’s Trade, Investment and Innovation Division.
To share the report’s findings with policymakers, industry leaders, and academics from around the region, the Virtual AI for Social Good Summit will be held in November. The series will feature working and policy insight panels with details to be shared on apru.org soon.
Find the full report here.
See a press release from Keio University here.
September 9, 2020
DiDi and APRU strengthen partnership with MoU and new APEC project
APRU and Beijing-based mobile transportation platform Didi Chuxing (DiDi) have entered into a Memorandum of Understanding (MoU) to strengthen collaboration and the development of activities and projects.
Currently both organizations are contributing to the APEC Public-Private Dialogue on Sharing Economy and Digital Technology Connectivity for Inclusive Development, which aims to advance economic, financial, and social inclusion in the APEC member economies.
APRU participated in the APEC Public-Private Dialogue’s latest seminar with the theme, “Capitalize on Research and Development.” Held on February 12 in Putrajaya, Malaysia, the seminar brought together stakeholders in the science and technology innovation sector to strengthen the ecosystem in promoting R&D and enhancing connectivity within the innovation value chain.
“APRU’s large network of researchers, policy-makers and private sector representatives in the Asia-Pacific region makes it the ideal partner for us to jointly explore opportunities for collaborative research, joint projects, education and training, talent development, and academic exchanges, as well as technology transfer and innovation,” said Leju Ma, DiDi’s Senior Expert in International Industries.
“We are looking forward to the significant input that APRU will provide for our projects,” he added.
Future projects to be explored and developed include the joint organization of side events at relevant UN conferences and cooperation on developing future APEC workshops.
Other collaboration opportunities will be provided by the DiDi Engine Initiative, which includes international youth exchange and technology competitions, regional joint AI laboratories, women’s entrepreneurship and empowerment programs, as well as APRU’s support for DiDi’s engagements related to the APEC Business Advisory Council (ABAC).
In view of the COVID-19 outbreak, APRU and DiDi will work together to share best practices on non-pharmaceutical prevention and control measures with entities outside China, especially in the mobility sector.
See the report published by the APEC Policy Partnership on Science, Technology and Innovation.
About DiDi
https://www.didiglobal.com
DiDi offers on-demand taxi-hailing, private car-hailing, bike-sharing, automotive solutions, and smart transportation services to over 550 million users across China, Japan, Latin America, and Australia, delivering over 10 billion rides per year.
April 21, 2020
AI For Everyone: New Open Access Book
APRU is pleased to announce the release of the book “AI for Everyone: Benefitting from and Building Trust in the Technology.” Published on January 28, 2020, the book was written by Jiro Kokuryo, Catharina Maracke, and Toby Walsh; the project was led by co-chairs and AI experts Professors Kokuryo (Keio) and Walsh (UNSW). The open-access book presents APRU’s project and introduces its findings. The project is the result of a discussion series organized by APRU and Google.
“Experts from APRU universities contributed greatly to this foundational project, which we built upon for projects such as the Transformation of Work and AI for Social Good,” said Christina Schönleber, APRU Senior Director (Policy and Programs).
“It enabled us to actively pursue opportunities to interact with policymakers, businesses, and leaders in society to address major AI-related fears, such as ‘black box’ machines manipulating human society, unethical uses of AI, and AI widening the gap between the rich and the poor,” she added.
The project’s first meeting was held in late 2017, laying the groundwork for a series of working papers and their resulting policy recommendations. As many as twelve of these AI-related working papers were reviewed at the second meeting in September, reflecting eager participation by APRU members. An accompanying project workshop took on key questions, such as how to establish more trust in AI and how to amplify human intelligence through the use of AI toward beneficial ends. The project’s preliminary outcome was prominently featured in the Pacific Economic Cooperation Council’s State of the Region Report 2018-2019, which fed into the 30th APEC Ministerial Meeting held the following month in Port Moresby, Papua New Guinea.
“The title of our book reflects the belief that access to the benefits of AI should be transparent, open, and understood by and accessible to all people regardless of their geographic, generational, economic, cultural and other social background,” said Kokuryo.
“We wrote it to strengthen awareness about the nature of the technology, governance of the technology, and its development process, with a focus on responsible development,” he added.
The book is available as a paperback edition at cost price.
Please see the project overview and policy statement here.
Keio and UNSW are the APRU member university leads of this project.
Other involved APRU member institutions include: The Australian National University (Australia), Far Eastern Federal University (Russia), Peking University (China), The Chinese University of Hong Kong, National University of Singapore, Tecnológico de Monterrey (Mexico), Fudan University (China), University of California, Irvine (USA), Universidad de Chile (Chile), UNSW Sydney (Australia), and The Hong Kong University of Science and Technology.
February 1, 2020
Accelerating Indonesia’s Human Capital Transformation for Future of Work
The final in a series of dissemination events presenting the policy recommendations and research from The Transformation of Work project took place on Tuesday, December 3, 2019 in Jakarta, Indonesia.
Christina Schönleber, APRU Director (Policy and Programs), opened the Forum by presenting APRU’s research on the impact of automation on the future of work and on societies and economies across the Asia-Pacific region. The research is available in the APRU-published book “Transformation of Work in Asia-Pacific in the 21st Century.”
Digitalization and automation are transforming the world at unprecedented scale and speed, and the impacts are felt at all levels of society. Recent technological advances such as AI-driven innovation and machine learning also require a new set of skills for the future workforce, which will see jobs transformed as technological change creates surpluses of workers and skills in some occupations while creating demand for new skills and jobs in others. The Centre for Strategic and International Studies (CSIS), supported by Google, has conducted a series of discussions and policy recommendations under Forum Kebijakan Ketenagakerjaan (FKK, the Employment Policy Forum), a multi-stakeholder platform for discussing and disseminating labor issues.
Yose Rizal Damuri (CSIS) opened the public seminar and introduced FKK, which aims to stimulate discussion, accommodate multi-stakeholder perspectives, and formulate policy recommendations from evidence-based research. The forum has produced fruitful debates among researchers, policymakers, the private sector, and labor unions. Christina Schönleber (APRU) then explained that APRU had conducted research on the impact of automation on the future of work and on societies and economies across the Asia-Pacific, holding discussions among academia, governments, and industry. The project’s objectives are to understand the challenges and benefits of digital technology and automation in relation to the future of work; to inform the discussion among researchers, policymakers, and civil society on possible directions and solutions; and to publish and widely disseminate a data-driven study with a key focus on the APAC region.
Faizal Yahya (National University of Singapore) explained that Singapore has a small workforce and an aging demographic. A growing fear of job losses, combined with an influx of foreign laborers, has created the negative impression that jobs are being taken away by foreigners; it is also necessary to create new jobs for older workers or to reskill them. To prepare for these changes, the government has undertaken several initiatives. First, it launched SkillsFuture in 2015 under the Ministry of Education to train graduates and provide reskilling courses, especially for mature workers. From the demand side, the Committee on the Future Economy (CFE) created Industry Transformation Maps (ITMs) and assigned different agencies to support different industry sectors, since Singapore has more SMEs than large companies. Third, to support the manufacturing sector, the government established the Smart Industry Readiness Index (SIRI), which helps companies architect their Industry 4.0 roadmaps through its Prioritization Matrix. Lastly, the government has addressed mature workers’ needs through the Workforce Singapore (WSG) Adapt and Grow initiative.
The Forum hosted speakers from Asian Institute of Management, National University of Singapore (NUS), The Centre for Strategic and International Studies (CSIS) and Association of Pacific Rim Universities (APRU), in addition to a number of Indonesian educational organizations.
The speakers addressed important topics related to the impact of recent technological advances (i.e., AI-driven innovation, machine learning, etc.) on the workforce. They highlighted the main challenges faced by the workforce, including obsolete educational material, skills becoming outdated in the light of rapid technological change, and high rates of youth unemployment in Indonesia. “Education will have to be reimagined,” said Jikeyong Kang from the Asian Institute of Management.
The interactive talk-show panel turned participants’ attention to developing solutions to these challenges. Panelists suggested expanding the Indonesian Government’s efforts to keep the education system updated and relevant to industry demands; meanwhile, continuous training of the existing workforce is necessary to keep up with technological trends and address the lack of talent in certain fields.
January 13, 2020
AI Policy for the Future: Can we trust AI?
Date & Time: August 23 from 9 am to 5 pm
Venue: Korea Press Center, 20th floor, International Conference Hall
Seoul National University Initiative will host a one-day conference focusing on AI trust for the future.
The conference will bring together AI experts and scholars from academia, industry, and government to address current concerns about accountability and to enhance socially beneficial outcomes related to AI governance through technology, policy, and law. Critical issues such as fairness and equity will be analyzed at both macro and micro levels to develop key recommendations on the responsible use of AI.
Find the program here.
August 16, 2019
Automation and the Transformation of Work in Asia-Pacific in the 21st Century
The APRU report, Transformation of Work in Asia-Pacific in the 21st Century, was launched in Singapore on July 11, 2019. Dr Faizal bin Yahya, Senior Research Fellow at the Institute of Policy Studies (IPS), contributed Singapore’s case studies to the sixth chapter of the report, which highlights the advances Singapore has made in digitalizing its economy and offers suggestions on related future initiatives. The report also emphasizes the need for greater synergy between academia and industry to help workers remain employable in a fast-evolving business environment and digital economy.
The event started with an overview of APRU and project developments introduced by the APRU International Secretariat. The project’s major findings and policy recommendations were then presented by Prof Tam Kar-Yan, project lead of this collaborative work and Dean of the School of Business and Management, The Hong Kong University of Science and Technology.
Dr Faizal then moderated a panel with an in-depth discussion of Singapore’s cases. The panelists included Mr Patrick Tay, Director of Legal and Strategy and Assistant Secretary-General of the National Trades Union Congress; Dr Jaclyn Lee, Chief Human Resources Officer and Senior Fellow at the SUTD Academy, Singapore University of Technology and Design; and Mr Abhijit Chavan, Director of Intelligent Automation, PwC South East Asia Consulting.
The key themes of the ensuing discussions revolved around mindset shifts and industry transformation. Most participants agreed that it was important to have leaders with a long-term vision within organizations to promote digital transformation in the workplace in an organic and non-hostile way. It was noted that many people, especially workers, were unaware that changes were occurring; as such, most were unprepared when disruptions impacted their work or displaced them.
Adapting to a transforming work environment is also important. Technological advancement has accelerated to the point that pessimism tends to dominate workers’ minds, with many fearing that their jobs will be automated away. It is therefore necessary to equip workers with the relevant new skills needed for the digital economy.
The training of workers necessitates the transformation of educational institutions. Graduates now need to be equipped with broader skillsets to promote flexibility and agility in a transforming landscape. Computational skills are also necessary in many fields, such as human resources, though they should be introduced in a way that wins buy-in from the workers themselves.
See the photos here.
July 31, 2019
APRU Partners to Close the Digital Skills Gap at APEC
APRU members participated in the APEC Closing the Digital Skills Gap Forum, held in Singapore in mid-July.
The forum gathered representatives from 16 APEC economies to explore policy options that can strengthen digital skills and the digital economy, with Project DARE taking center stage.
APRU members participating in the forum were Bernard Tan, Senior Vice Provost of the National University of Singapore; Fidel Nemenzo, Vice Chancellor for Research and Development of the University of the Philippines (UP); Eugene Rex Jalao, Associate Professor at the University of the Philippines; and Kar Yan Tam, Dean of the School of Business and Management of The Hong Kong University of Science and Technology (HKUST).
“With the imminent need to facilitate the transition of the workforce in the age of disruption, Project DARE provides a tripartite platform for governments, academia, and business across the APEC economies to discuss human capital development in data science and analytics,” said Kar Yan Tam. “This platform connects all of us closely together to manage the transformation wisely,” he added.
Project DARE (Data Analytics Raising Employment) is an APEC initiative seeking to facilitate the development of a data science and analytics (DSA)-enabled workforce across the APEC region to address the skills shortage in DSA. The Closing the Digital Skills Gap survey, launched at the forum and prepared by Wiley, an education and professional training solutions provider, showed that 75 per cent of respondents – employers, government officials, and academics – perceive a significant skills mismatch.
At the forum, participants finalized a roadmap to support and scale up skills development and reskilling programs carried out by employers, governments, and educational institutions across APEC. Tam explained how HKUST has leveraged the Recommended APEC Data Science & Analytics Competencies to inform its curriculum in data science and technology, including a full undergraduate degree track.
Fellow APRU member Jalao highlighted Philippine high-impact investments in digital upskilling and reskilling, including an ambitious pilot model, led by the Analytics Association of the Philippines (AAP), to train 30,000 workers over three years. The pilot has been one of the first models to implement the Recommended APEC Data Science & Analytics Competencies.
The Project DARE timeline for 2018 entailed more than 60 participants sharing models for bridging the digital skills gap, as well as the development of case studies on the Recommended APEC Data Science & Analytics (DSA) Competencies. The 2019 timeline covers the presentation, finalization, and initial implementation of a collective vision and roadmap in APEC to support efforts to upskill and reskill at scale. Implementation of the roadmap is envisioned for the 2020-2025 period.
July 20, 2019
Kick-off for AI for Social Good―A United Nations ESCAP-APRU-Google Collaborative Network and Project
On Wednesday, June 5, a kick-off meeting for the “AI for Social Good ― a United Nations ESCAP-APRU-Google Collaborative Network and Project” was held at Keio University’s Mita Campus. The project brought together eight scholars from across the Asia-Pacific under the academic lead of Keio Vice-President Professor Jiro Kokuryo, with the support of UN ESCAP and Google, and was organized by APRU, the Association of Pacific Rim Universities.
The scholars, whose academic backgrounds range from technical aspects of AI such as computer science to ethical perspectives including philosophy, had lively discussions on their research plans and provided mutual feedback, alongside representatives from the project organizations ― UN ESCAP, Google, and APRU.
Their work at meetings set to take place over the coming year will be published as a policy recommendation paper for government policymakers and other stakeholders including those in industry, NGOs, and academic institutions.
Originally published by Keio University
Vice-President Professor Jiro Kokuryo Chairs Meeting of AI for Social Good ― A United Nations ESCAP-APRU-Google Collaborative Network and Project
June 15, 2019
APEC Project DARE (Data Analytics Raising Employment)
With youth unemployment rising in the Asia-Pacific in 2017, policymakers have to bridge the gap between a critically low supply of highly skilled professionals and the urgent demand among employers for a skilled workforce. By 2020, the global shortage of highly skilled workers is expected to reach 38-40 million.
Current advances in the digital age require the collection and interpretation of big data. Employees who can gather, analyze, and draw practical conclusions from big data, as well as communicate these findings to others, are forecast to be among the most in demand. Labor markets are in dire need of professionals trained in data science and analytics, and shortages are severe enough to constrain economic growth.
In response to APEC’s policy goals on human capital development, Project DARE – Data Analytics Raising Employment – was created to address the current shortage of employees skilled in data science and analytics, which has resulted in billions of dollars in lost revenue annually. The project brought together business, government, and academic leaders to develop a set of ten Recommended APEC Data Science and Analytics Competencies, a resource to help academic institutions and training providers across APEC economies align curricula, courses, and programs to close the gap between skills and employer demand.
APRU experts joined the Project Advisory Group Meeting in Singapore, actively supporting the development of the APEC Data Science and Analytics Competencies.
At the inaugural APEC University Leaders’ Forum in Da Nang, Vietnam, Dr. Christopher Tremewan, APRU Secretary General, and Mr. Clay Stobaugh, Vice President of The Wiley Network and Co-Chair of APEC Project DARE, announced a new partnership committed to bridging the projected skills gap in the Asia-Pacific.
See more details about the recommended APEC DSA Competencies here.
Find out more about the project here.
Download attachments:
APEC_Project_DARE_2018_Workshop_Agenda_2_October
November 27, 2018