A new expert group at the OECD for policy synergies in AI, data, and privacy

A fast-evolving AI and privacy policy landscape calls for international cooperation

There is a growing consensus that AI’s rapid evolution presents significant opportunities and challenges that call for greater international, multistakeholder cooperation on data governance and privacy. As a general-purpose technology, AI is wide-reaching, entering diverse products, sectors, and business models across the globe. Recent progress in generative AI was driven in part by vast training data stored on servers and networks worldwide. Like the data, the actors involved in the AI lifecycle are spread across jurisdictions, underscoring the need for synchronisation, clear guidance and cooperation.

In this complex and fast-paced environment, where countries and organisations compete to reap the full benefits of AI, legitimate efforts to protect people’s rights, including privacy and intellectual property, can be perceived as limiting the quality and availability of data for training AI models, and thus as frustrating innovation.

But AI development and the protection of rights are not a zero-sum game. On the contrary, data protection laws and privacy considerations have a role to play in building trustworthy AI. This is the core premise of the upcoming work of the OECD.AI Expert Group on AI, Data and Privacy.

Building bridges between the AI and privacy communities

To achieve AI/privacy compatibility, the data governance and AI communities must work together. A structural challenge is to break down the silos in which AI and privacy conversations typically take place: despite the two fields’ interconnection, the diverse makeup of each community keeps their discussions apart. Moreover, AI and privacy regulatory initiatives often proceed in parallel, developing and operating within different frames of reference. Principles and concepts in data privacy will also need to be mapped to AI principles to ensure alignment and to develop a common language between the communities.

Recently, a notable increase in stringent privacy enforcement actions around AI, and generative AI in particular, has reinforced the perception among some stakeholders that privacy regulations could hinder AI and data-driven innovation. Enforcement is an unavoidable component of privacy regulation, but it is not its only dimension. In practice, the privacy community, including regulators and privacy professionals in organisations, is actively working to deploy solutions that implement privacy-friendly AI and even use AI in the service of privacy. Regulators also have a role in clarifying how personal data can be used responsibly and in accordance with personal data protection laws, enabling AI innovation rather than hindering it.

We need synergies between the AI and privacy communities

A growing number of privacy regulators, commonly referred to as DPAs (Data Protection Authorities) or PEAs (Privacy Enforcement Authorities), support AI’s development. For example, in January 2023, the French DPA (CNIL) announced the creation of an AI department to strengthen its expertise in these systems, improve its understanding of privacy risks and anticipate the implementation of the EU AI Act. A few months later, it published an action plan for deploying AI systems that respect individuals’ privacy. The UK ICO has produced comprehensive guidance on AI and data protection, updated on 15 March 2023, and recently launched a consultation series on generative AI and data protection. Singapore is also launching a set of guidelines on how personal data can be used to train and develop AI recommendation and decision-making systems.

Technological and organisational initiatives are increasingly emerging to ensure data availability while protecting privacy. These include a growing number of regulatory sandboxes applied to AI and operated by or involving DPAs (in Singapore with IMDA, in France, and in Norway as early as 2020), as well as data intermediaries that foster responsible data sharing within the AI economy. In 2022, the European Commission’s Joint Research Centre explored the opportunity that AI-generated synthetic data presents for privacy-safe data use.
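
To make the synthetic-data idea concrete, here is a minimal sketch, in Python and on entirely hypothetical numeric records, of how even a very simple generative model (a multivariate Gaussian fitted to the source data) can produce synthetic records that mimic the statistical structure of the original without reproducing any individual row. Production pipelines use far richer generative models and add formal privacy guarantees; this only illustrates the concept.

```python
# Minimal sketch: "synthetic data" from a simple generative model.
# All records below are hypothetical; real pipelines use far richer
# models (GANs, copulas, diffusion models) plus formal privacy guarantees.
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical "real" dataset: age, income, weekly hours of app usage.
real = np.column_stack([
    rng.normal(40, 12, 500),           # age
    rng.normal(55_000, 15_000, 500),   # income
    rng.normal(10, 4, 500),            # usage hours
])

# Fit a simple generative model: the empirical mean and covariance.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample synthetic records that mimic the statistical structure of the
# source data without reproducing any individual row.
synthetic = rng.multivariate_normal(mean, cov, size=500)

print("real means:     ", np.round(mean, 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
```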

AI also brings exciting applications that help preserve, enhance, and protect privacy. For example, a team of researchers from École Polytechnique Fédérale de Lausanne (EPFL), the University of Wisconsin, and the University of Michigan developed a deep learning-based program capable of automatically reading and interpreting the privacy policies of online services. The tool uses simple graphs and colour codes to illustrate how a service uses personal data.
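
The research tool itself is a sophisticated deep-learning system trained on annotated policy corpora. Purely to illustrate its core idea, classifying privacy-policy segments into data-practice categories, here is a toy sketch using TF-IDF features and logistic regression on a handful of hypothetical labelled sentences; real systems train deep networks on thousands of annotated segments.

```python
# Minimal sketch of the idea behind automated privacy-policy readers:
# classify each policy segment into a data-practice category. This toy
# TF-IDF + logistic-regression model on hypothetical examples only
# illustrates the concept; the actual tool uses deep neural networks.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled policy segments.
segments = [
    "We share your email address with advertising partners.",
    "Your data may be disclosed to third-party marketers.",
    "You can delete your account and associated data at any time.",
    "Users may request erasure of their personal information.",
    "We retain server logs for up to 12 months.",
    "Log data is stored for a limited period before deletion.",
]
labels = ["third-party-sharing", "third-party-sharing",
          "user-control", "user-control",
          "data-retention", "data-retention"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(segments, labels)

# Classify a new policy sentence into a practice category.
print(model.predict(["We may pass your contact details to our ad partners."]))
```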

The OECD as a forum for international cooperation on AI and privacy

As AI captivates the attention of policymakers and regulators, initiatives to foster policy discussions at the intersection of AI and privacy policy are multiplying. At the end of 2023, the OECD decided to play its part, leveraging its unique cooperation infrastructure and convening power to support and foster a positive and proactive message about AI and privacy.

The OECD’s commitment to active cooperation across borders, sectors, and areas of expertise is evident. The OECD is an observer and long-standing partner of the Global Privacy Assembly (GPA), the premier global network of Privacy Enforcement Authorities (PEAs), and of the Council of Europe (CoE). Reciprocally, the OECD Secretariat participates as an observer in the G7 Data Protection Authorities Roundtable. These bodies’ work on AI and privacy has grown in recent years.

On the AI side, the OECD AI Principles, to which 46 countries have adhered, emphasise international collaboration, including on privacy and accountability. The OECD established the OECD.AI Network of Experts, a multistakeholder group of hundreds of AI experts worldwide, which informs policy responses to emerging topics across different policy communities. This unique working method has proven effective and can be applied to the AI and the data protection and privacy policy communities.

In early 2024, the OECD formally launched the OECD.AI Expert Group on AI, Data, and Privacy, bringing together leading AI and privacy experts from around the world. The group comprises diverse actors from data protection authorities, policymakers, industry, civil society, and academia. Its experts will bring insights from multiple economies, share multistakeholder perspectives on how various jurisdictions tackle AI’s privacy challenges, and explore opportunities to ensure the technology is privacy-respecting and privacy-enhancing.

The expert group is a forum for the privacy community to understand how AI technology operates and how AI policy is evolving, while promoting privacy-enhancing tools and principles within the AI policy community. The AI and privacy communities can also work together on Privacy Enhancing Technologies (PETs).
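
As one concrete example of what such joint work touches, the sketch below illustrates a widely discussed PET, differential privacy, via the classic Laplace mechanism: a data holder answers a count query with calibrated noise so that the presence or absence of any single individual has little effect on the answer. The data and parameters are hypothetical, and real deployments rely on vetted libraries and careful privacy budgeting.

```python
# Minimal sketch of one widely discussed PET: differential privacy via
# the Laplace mechanism. Illustrative only; production systems use vetted
# libraries and manage a privacy budget across many queries.
import numpy as np

rng = np.random.default_rng()

def dp_count(values, predicate, epsilon=1.0):
    """Return a differentially private count of records matching predicate.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for epsilon-differential privacy.
    """
    true_count = sum(predicate(v) for v in values)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical records: ages held by a data custodian.
ages = [23, 37, 45, 52, 29, 61, 34, 48]
print("noisy count of people over 40:",
      round(dp_count(ages, lambda a: a > 40)))
```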

To bridge gaps, facilitate interoperability, and further coordination, it will be crucial for all parties to understand how the AI and privacy communities use terminology differently. For example, while algorithmic fairness in the AI community is primarily concerned with the mathematical distribution of resources or outcomes, fairness in the privacy domain also considers the context in which a system is deployed, including the human decision-making that sets the system’s parameters, and other qualitative aspects, such as power imbalances between individuals and those who process their data. Experts will explore where these two perspectives intersect and discuss how they can complement each other to contribute to AI systems that process personal data fairly and lead to fair outcomes. Other concepts used in the two communities, such as transparency, explainability, accountability, and even privacy, data protection, and risk, would undoubtedly benefit from joint definitional work.
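
As a minimal illustration of the mathematical-distribution-of-outcomes view of fairness mentioned above, the hypothetical sketch below computes demographic parity, the gap in favourable-outcome rates between two groups. It deliberately captures none of the contextual or power-imbalance considerations the privacy community adds, which is exactly the kind of qualitative dimension a single metric cannot express.

```python
# Minimal sketch of fairness as a "mathematical distribution of outcomes":
# demographic parity compares a model's favourable-outcome rate across
# groups. Data and group labels below are hypothetical.
from collections import defaultdict

decisions = [  # (group, model_decision) pairs: 1 = favourable outcome
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
print("selection rates per group:", rates)
print("demographic parity gap:", abs(rates["A"] - rates["B"]))
```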

Leveraging the OECD Privacy Guidelines and OECD AI Principles

Beyond this, the Expert Group will explore how existing OECD legal instruments can help align AI and privacy regulatory efforts. For example, the 1980 OECD Privacy Guidelines form the bedrock of data protection laws globally. They are often complemented at the national level with provisions that can apply to AI, e.g., on automated decision-making, and are referenced in later OECD work. The 2019 OECD Principles for Trustworthy AI are the first intergovernmental standard on AI and have become an authoritative reference globally, including for the definition of an AI system. According to Principle 1.2 on human-centred values and fairness, AI actors should respect the rule of law, human rights and democratic values, including privacy and data protection. The Principles serve as the foundation for the G20 AI Principles and inform AI regulatory efforts such as the EU AI Act, which uses the OECD’s definition of an AI system.

The OECD Privacy Guidelines and the AI Principles are essential guidance for their respective domains. To operationalise both effectively, one must understand how they can best support each other. AI actors are keen to work with high-quality datasets to optimise the performance of their AI systems. How can we ensure they obtain and use such data in a way that respects privacy? Identifying privacy principles and practices that foster trust in AI systems is crucial, as is developing an awareness of potential privacy challenges in AI business models.

Trustworthy AI requires trust in all aspects of personal data collection, management and use: acquiring reliable data, using it responsibly, keeping it secure, and maintaining transparency about its use. Efforts to find coherence and cooperation between the Privacy Guidelines, the AI Principles and their respective communities will benefit the development and implementation of AI and fortify a trust-based data ecosystem for years to come.

Governments worldwide can benefit from the AI and privacy communities working together towards common goals: fostering innovation, safeguarding human rights and promoting the common good. We look forward to members of these communities coming together to exchange knowledge and experiences, ensuring compatibility and synergy between AI and privacy frameworks that underpin a trustworthy AI ecosystem.

The authors would like to acknowledge the contribution of Sergi Gálvez Duran, Celine Caira and Sarah Berube, members of the OECD Secretariat, in the writing of this blog post.

Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.