
An introduction to the Global Partnership on AI’s work on Responsible AI

The Global Partnership on AI (GPAI) has a mission to “support the development and use of AI based on human rights, inclusion, diversity, innovation, and economic growth, while seeking to address the United Nations Sustainable Development Goals”. Launched in June 2020, it is a voluntary, multi-stakeholder initiative with a permanent focus on AI, whose founding members represent 2.5 billion of the world’s population. It has ambitions to scale, particularly to include low- and middle-income countries, support the UN Sustainable Development Goals and help fully realise the OECD AI Recommendation.

GPAI will bring together experts from industry, government, civil society and academia to advance cutting-edge research and pilot projects on AI priorities. It is supported by four Working Groups looking at Data Governance, Responsible AI (including a subgroup on Pandemic Response), the Future of Work, and Commercialisation and Innovation. As set out in Audrey Plonk’s blog post on the AI Wonk, the OECD is a principal strategic partner of GPAI, hosting GPAI’s Secretariat and working closely with GPAI’s two Centres of Expertise in Paris and Montreal.

This blog post is the second in a series from the Working Group Co-Chairs, following the initial post by Dr. Jeni Tennison, co-chair of the Data Governance Working Group. Dr. Yoshua Bengio is the founder and Scientific Director of Mila, Professor of Computer Science at the University of Montreal and co-recipient of the 2018 ACM A.M. Turing Award. Dr. Raja Chatila is Director of the SMART Laboratory of Excellence on Human-Machine Interaction, and Professor and Director of the Institute of Intelligent Systems and Robotics (ISIR).

Yoshua Bengio, Co-Chair, Working Group on Responsible AI, The Global Partnership on AI (GPAI)
Raja Chatila, Co-Chair, Working Group on Responsible AI, The Global Partnership on AI (GPAI)

Introducing the Responsible AI Working Group 

Developing artificial intelligence responsibly is key to ensuring that society benefits from the full potential of AI systems. So when Canada and France invited us to co-chair the Responsible AI Working Group, we accepted without hesitation. 

Set to push the envelope on what responsible AI concretely looks like, the Working Group was formed this summer in collaboration with Jacques Rajotte, Interim Executive Director, and his team at the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence (ICEMAI). In this post, we want to take the opportunity to introduce the group members, outline our current efforts and, most importantly, share how the broader AI community can get involved.

Our experts

The Working Group consists of 32 experts from over 15 countries, including India, Slovenia and Mexico. Its members bring a wide range of expertise, including philosophy, computer science, policy and ethics, resulting in varied viewpoints and robust discussions. As the group refined its mission and deliverables over the last month, we witnessed impressive cross-disciplinary collaboration among these experts. The Working Group’s experts are listed at the end of this post.

Our mission and objectives

Our mission is simple: we strive to “foster and contribute to the responsible development, use and governance of human-centred AI systems, in congruence with the UN Sustainable Development Goals”. It is worth noting that GPAI’s Working Groups do not operate in silos, so we may collaborate with other groups from time to time. For instance, we may interface with the Data Governance Working Group when our respective projects share common dimensions. In light of the COVID-19 pandemic, we have also formed an ad hoc subgroup to support the responsible development, use and governance of AI in this specific area. The subgroup co-chairs will soon publish a post in this series, inviting further contributions.


Our first project

In support of the Working Group’s mandate, we are launching a project that will lay the groundwork for GPAI’s future ambitions on Responsible AI. The results from this first project will be delivered at GPAI’s first Plenary, to be held in December 2020.

This is a first step towards developing or integrating coordination mechanisms for the international community, to facilitate cross-sectoral and international collaboration on AI for social good, in particular to contribute to achieving the UN Sustainable Development Goals. To that end, the initial project will:

  • Catalogue existing key initiatives undertaken by various stakeholders to promote the responsible research and development of beneficial AI systems and applications, including: 
    • projects and frameworks to operationalize AI ethical principles and the application of AI for social good;
    • mechanisms and processes to identify and mitigate bias, discrimination, and inequities in AI systems;
    • tools, certifications, assessments, and audit mechanisms to evaluate AI systems for responsibility and trustworthiness, based on metrics such as safety, robustness, accountability, transparency, fairness, respect for human rights, and the promotion of equity (a minimal illustration of one such metric appears just after this list).
  • Analyse promising initiatives that could benefit from international and cross-sectoral collaboration and have great potential to contribute to the development and use of beneficial AI systems and applications.
  • Recommend new initiatives and how they could, in practice, be implemented and contribute to promoting the responsible development, use and governance of human-centred AI systems.
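
To make “audit mechanisms” and “metrics” slightly more concrete, here is a minimal, hypothetical Python sketch of one widely used fairness measure: the demographic parity difference between two groups of people affected by a binary classifier. The function and toy data below are our own illustrative assumptions, not an output of the Working Group or an item in its catalogue.

    from typing import Sequence

    def demographic_parity_difference(predictions: Sequence[int],
                                      groups: Sequence[str]) -> float:
        # Absolute gap in positive-prediction rates between exactly two groups.
        # 0.0 means both groups receive positive outcomes at the same rate;
        # larger values indicate greater disparity.
        labels = sorted(set(groups))
        if len(labels) != 2:
            raise ValueError("this sketch compares exactly two groups")
        rates = []
        for label in labels:
            outcomes = [p for p, g in zip(predictions, groups) if g == label]
            rates.append(sum(outcomes) / len(outcomes))
        return abs(rates[0] - rates[1])

    # Toy data: group "a" is approved 75% of the time, group "b" only 25%.
    preds = [1, 1, 1, 0, 1, 0, 0, 0]
    grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(demographic_parity_difference(preds, grps))  # prints 0.5

Real audit toolkits compute many such measures and, crucially, pair them with context: no single statistic can establish fairness on its own, which is why the Working Group is cataloguing tools, certifications and processes rather than endorsing any one metric.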

The Working Group will present a report to the Multistakeholder Experts Group Plenary in December 2020. We are looking for a partner to assist the Working Group with this deliverable and are launching a competitive tender to identify that partner. If you have experience and expertise in this field, please read the Terms of Reference and consider submitting a proposal by the deadline of midnight Anywhere on Earth (AoE) on September 20, 2020.

Medium-Term deliverable

By 2022-2023, the Working Group will strive to foster the development or integration of coordination mechanisms for the international community in the area of AI for social good, facilitating multistakeholder and international collaboration. Such coordination mechanisms could include public consultation, when appropriate. The overall objective will be to bring together needs, expertise and funding sources. This deliverable may be associated with other potential future projects of the Working Group, for example on the following topics:

  • Using AI systems to advance the UN Sustainable Development Goals, build public trust, increase citizen engagement, improve government service delivery, contribute to the promotion of human rights, and strengthen democratic processes, institutions, and outcomes;
  • Assessing and developing practical multistakeholder frameworks for specific applications for Responsible AI;
  • Developing tools, certifications, assessments, and audit mechanisms that could be used to evaluate AI systems for responsibility and trustworthiness based on metrics such as accountability, transparency, safety, robustness, fairness, respect for human rights, and the promotion of equity.

So please keep an eye on this blog series as we move forward. You can also find updates on our Twitter accounts (@MilaMontreal and @raja_chatila). Should you have questions, comments, ideas or requests about the work of GPAI’s Responsible AI Working Group, please get in touch via jacques.rajotte[AT]gmail[DOT]com.

Membership of GPAI’s Responsible AI Working Group

Working Group members

Yoshua Bengio (Co-Chair) – Mila – Quebec Artificial Intelligence Institute

Raja Chatila (Co-Chair) – Sorbonne University

Carolina Aguerre – Center for Technology and Society (CETyS)

Genevieve Bell – Australian National University

Ivan Bratko – University of Ljubljana

Joanna Bryson – Hertie School

Partha Pratim Chakrabarti – Indian Institute of Technology Kharagpur

Jack Clark – OpenAI

Virginia Dignum – Umeå University

Dyan Gibbens – Trumbull Unmanned

Kate Hannah – Te Pūnaha Matatini, University of Auckland

Alistair Knott – University of Otago

Pushmeet Kohli – DeepMind

Marta Kwiatkowska – Oxford University

Toshiya Jitsuzumi – Chuo University

Christian Lemaître Léon – Metropolitan Autonomous University

Vincent Müller – Eindhoven University of Technology

Wanda Muñoz – Inclusion, Victim Assistance, and Humanitarian Disarmament Consultant

Alice Hae Yun Oh – Korea Advanced Institute of Science and Technology School of Computing

Luka Omladič – University of Ljubljana

Julie Owono – Internet Sans Frontières

Dino Pedreschi – University of Pisa

V K Rajah – Advisory Council on the Ethical Use of Artificial Intelligence and Data (Singapore)

Catherine Régis – University of Montréal

Francesca Rossi – IBM Research

David Sadek – Thales Group

Rajeev Sangal – International Institute of Information Technology Hyderabad

Matthias Spielkamp – Algorithm Watch

Osamu Sudo – Chuo University

Roger Taylor – Centre for Data Ethics and Innovation

Observers

Marc-Antoine Dilhac – ALGORA Lab

Karine Perset – OECD

Stuart Russell – University of California, Berkeley 



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.