APRU on The Business Times: Safeguarding Our Future With AI Will Need More Regulations
March 18, 2021

Original post in The Business Times.

More has to be done to ensure that AI is used for social good.

A SILVER lining emerging from Covid-19’s social and economic fallout is the unprecedented application of artificial intelligence (AI) and Big Data technology to aid recovery and enable governments and companies to operate effectively. However, as AI and Big Data are rapidly adopted, their evolution is far outpacing regulatory processes for social equity, privacy, and political accountability, fuelling concern about their possible predatory use.

Whether contributing to essential R&D for coronavirus diagnostic tools or helping retailers and manufacturers transform their processes and the global supply chain, AI’s impressive achievements do not fully allay anxieties about its perceived dark side.

Public concern about the threats of AI and Big Data ranges from privacy breaches to dystopian visions of the future that anticipate a technological singularity. Meanwhile, there is fairly strong sentiment that tech giants like Facebook, Amazon and Apple hold too much unaccountable power. Amid rising antitrust actions in the US and legislative pushback in Europe, other firms like Microsoft, Alibaba and Tencent also risk facing similar accusations.

For all their advances, breakthrough technologies always engender turbulence. The pervasiveness of AI across all aspects of life, and its control by elites, raises the question of how to ensure its use for social good.

For ordinary citizens, justifiable suspicion of corporate motives can also leave them prey to misinformation. Multilateral organisations have played critical roles in countering false claims and building public trust, but there is more to be done.

AI FOR SOCIAL GOOD

Against this backdrop, APRU (the Association of Pacific Rim Universities), the United Nations ESCAP and Google came together in 2018 to launch an AI for Social Good partnership to bridge the gap between the growing AI research ecosystem and the limited study into AI’s potential to positively transform economies and societies.

Led by Keio University in Japan, the project released its first flagship report in September 2020 with assessments of the current situation and the first-ever research-based policy recommendations on how governments, companies and universities can develop AI responsibly.

Together they concluded that countries effective in establishing enabling policy environments for AI, ones that both protect against possible risks and leverage it for social and environmental good, will be positioned to make considerable leaps towards the Sustainable Development Goals (SDGs). These include providing universal healthcare, ensuring a liveable planet, and creating decent work opportunities for all.

However, countries that do not create this enabling environment risk forgoing the potential upsides of AI and may also bear the brunt of its destructive and destabilising effects: from weaponised misinformation, to escalating inequalities arising from unequal opportunities, to the rapid displacement of entire industries and job classes.

WAY FORWARD

Understanding the long-term implications of fast-moving technologies and effectively calibrating their risks are critical to advancing AI development. Preventing bias and unfair outcomes produced by AI systems is a top priority, while government and private sector stakeholders should address the balance between data privacy, open data and AI growth.

For governments, it will be tricky to navigate this mix. The risk is that sluggish policy responses will make it impossible to catch up with AI’s increasingly rapid development. We recommend that governments establish a lead public agency to guard against policy blind spots. These lead agencies will encourage “data loops” that provide feedback to users on how their data are being used and thus facilitate agile regulation. This is necessary because of AI’s inherently fast-changing nature and the emergence of aspects that may not have been obvious even weeks or months earlier.

Governments must also acquire the ability to negotiate with interest groups and weigh ethical considerations. Otherwise, progress on promising, socially and environmentally beneficial AI applications, ranging from innovative medical procedures to new transportation options, can be blocked by vested interests or by a poor understanding of the trade-offs between privacy and social impact.

Governments should also strengthen their ability to build and retain local technical know-how. This is essential, given that AI superpower countries are built on a critical mass of technical talent that has been trained, attracted to the country, and retained.

DIASPORA OF TALENT

Fortunately, many countries in Asia have a diaspora of talent who have trained in AI at leading universities and worked with leading AI firms. China has shown how to target and attract these overseas Chinese to return home by showcasing economic opportunities and building confidence in the prospects of a successful career and livelihood.

Ultimately, for any emerging technology to succeed, gaining and maintaining public trust is crucial. Covid-19 contact tracing applications are a good case in point, as transparency is key to public trust in their deployment. With increased concerns about data privacy, governments can explain to the public the benefits and details of how the tracing technology works, as well as the relevant privacy policies and laws that protect their data.

To deal with the use and misuse of advanced technologies such as AI, we need renewed commitment to multilateralism and neutral platforms on which to address critical challenges.

At the next level, the United Nations recently launched Verified, an initiative aimed at delivering trusted information, advice and stories focused on the best of humanity and opportunities to ‘build back better’, in line with the SDGs and the Paris agreement on climate change. It also invites the public to help counter the spread of Covid-19 misinformation by sharing factual advice with their communities.

The education sector is playing its part to facilitate exchange of ideas among thought leaders, researchers, and policymakers to contribute to the international public policy process. I am hopeful that universities will be able to partner with government, the private sector and the community at large in constructing a technological ecosystem serving the social good.

  • The writer is secretary general of APRU (the Association of Pacific Rim Universities)