
‘Policy experiments are important to provide controlled and transparent environments for AI systems’

Article
05 June 2021
Karine Perset - The World & AI

Artificial Intelligence (AI) transcends national borders and is developing at lightning speed. How can we steer the development of AI in the right direction? In this episode of the blog series #TheWorld&AI, we talk to Karine Perset, head of the OECD AI Policy Observatory. ‘It is clear that ensuring human-centric AI requires that all of our strengths and mandates are aligned and that we make a complementary effort towards the same end goals. That is why we encourage collaboration with United Nations bodies.’

In short:

  • In the coming months, the Rathenau Instituut will be addressing the global importance of responsible AI.
  • Karine Perset, administrator of OECD.AI, emphasises the importance of policy experimentation for testing AI systems in a transparent way.
  • According to Perset, there needs to be an adaptable governance hierarchy whose stringency depends on the level of risk involved in developing and deploying AI systems.

UNESCO is currently working on an international recommendation on the ethics of AI. Twenty-four experts from around the world are writing global guidelines that will be presented to 195 member states in November 2021. The Rathenau Instituut has been appointed as a national observer in the development of the international recommendation. This role allows the Rathenau Instituut to follow the proceedings and to provide substantive comments.

For this blog series, we ask inspiring thinkers for their ideas. Which aspects do they consider important for this international discussion? Each blog post covers a different theme, such as the responsibility of companies in deploying AI, the role of governments and policymakers, promoting technological citizenship, and the impact AI has on work and education.

About Karine Perset

Karine Perset

Karine Perset is the administrator of the OECD’s AI Policy Observatory, part of the OECD Division for Digital Economy Policy in Paris. She focuses on trends in the development and diffusion of AI and on the opportunities and challenges that AI raises for public policy. She manages the OECD Network of Experts on AI (ONE AI) and the OECD’s AI Policy Observatory (OECD.AI). She was previously Advisor to ICANN’s Governmental Advisory Committee (GAC) and, before that, Counsellor of the OECD’s Directorate for Science, Technology and Innovation (STI).

About the OECD AI Principles and OECD.AI work

The OECD AI Principles, adopted in May 2019, are the first intergovernmental standard on AI and represent a political commitment. The principles aim to help establish a broad and shared framework for public policy and international cooperation that underpins responsible stewardship of trustworthy AI.

In February 2020, the OECD launched the AI Policy Observatory to help implement the principles.

The OECD’s AI Principles are presented as a framework designed to give policymakers clarity. How should countries put the principles into practice?
‘This is just the beginning. The work ahead is to move from principles to action and concrete partnerships. We will do this through practical implementation guidance and by developing our AI Policy Observatory. With the AI Policy Observatory we hope to create a platform for long-term international and multi-stakeholder collaboration, knowledge sharing and dialogue. We believe such an environment is needed to ensure we can discuss AI policy issues and solutions together and measure our progress. It is our vision that other countries, in addition to the original 40, will adhere to the principles. We notably hope that they will be useful for the G20 and G7 high-level political processes.’

How do the OECD principles relate to the UNESCO Recommendation on the Ethics of AI?
‘Well, it is clear that ensuring human-centric AI requires that all of our strengths and mandates are aligned towards the same end goals in a complementary manner. After all, different organisations have different mandates and different strengths.

That is why we are encouraging cooperation. Many of us share similar values and end goals, chief among which are achieving the Sustainable Development Goals and upholding human rights. That is why, for example, the European Commission, the Council of Europe, UNESCO, and the Institute of Electrical and Electronics Engineers (IEEE) are involved in our work, and why, in turn, we are involved in their work. We all want to reduce the risks and reap the benefits of AI. Our common goal is that economies and societies harness the benefits of trustworthy AI, so that no one is left behind.’

The OECD promotes policies for governments, the business sector, academics and NGOs to improve the economic and social well-being of people. In ethical and policy discussions about AI, however, businesses, especially tech companies, play a big part. What should we expect from multinationals when it comes to the responsible development and deployment of AI?
‘Our goal is to identify practical guidance and shared procedural approaches to help actors in the field of AI, including of course multinationals, and decision-makers to implement trustworthy AI. To that end, we have developed three types of tools: process-oriented tools, technical tools, and education and awareness-raising tools. The OECD has been working actively on tools for different stakeholders to implement trustworthy AI.’

Do you expect technology companies to make an extra effort?
‘What we are seeing is the development of sector-specific codes of conduct, for example for the finance sector. In addition, tech firms are setting up internal teams that focus on the ethics of AI. Standards bodies and technology companies are developing many tools to target specific AI-related concerns, including bias detection, explainable AI, and tools that improve the robustness of AI systems and secure them against adversarial attacks. Those tools are based on technical standards and technical research. Moreover, they are often open source.’
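To make ‘bias detection’ concrete, here is a minimal sketch of one common check, the disparate impact ratio, which compares positive-outcome rates across groups. It assumes binary predictions and a single binary protected attribute; the function, data and threshold are illustrative and not drawn from any specific OECD or vendor tool.

```python
# Minimal sketch of a bias-detection check: the disparate impact ratio.
# Assumptions (illustrative only): binary predictions and a single
# binary protected attribute.

def disparate_impact(predictions, protected):
    """Ratio of positive-outcome rates between the protected group and
    the rest. Values far below 1.0 suggest the model favours the other
    group; 0.8 is the conventional 'four-fifths' warning line."""
    pos_protected = [p for p, g in zip(predictions, protected) if g]
    pos_other = [p for p, g in zip(predictions, protected) if not g]
    rate_protected = sum(pos_protected) / len(pos_protected)
    rate_other = sum(pos_other) / len(pos_other)
    return rate_protected / rate_other

# Example: a hypothetical hiring model's decisions (1 = hired).
preds = [1, 0, 0, 1, 1, 1, 1, 1]
group = [True, True, True, True, False, False, False, False]
print(f"Disparate impact ratio: {disparate_impact(preds, group):.2f}")
# Prints 0.50: the protected group is hired at half the rate of the rest.
```

Open-source fairness toolkits implement far richer variants of such checks, but the underlying idea is this simple comparison of rates.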

What are the next steps for the OECD to help shape policies for responsible AI?
‘We will further develop the AI Policy Observatory as an inclusive hub for public policy on AI, in a multi-stakeholder manner. At the OECD AI Policy Observatory we combine resources from across the organisation with those of partners from all stakeholder groups to provide multidisciplinary, evidence-based policy analysis on AI. Furthermore, we aim to facilitate multi-stakeholder dialogue. The Observatory includes a live database of AI policies and initiatives that can be shared, updated, and compared interactively. As of today, it hosts the largest collection of national AI policies, with over 600 policies from 60 countries and the EU.’

Can we get a preview of this hub?
‘In response to the fast-paced AI policy environment, we have already launched a blog, the AI Wonk. This is an online space where the OECD Network of Experts on AI (ONE AI), GPAI members and guest contributors share their experiences and research. It is an ongoing conversation about the OECD AI Principles. It addresses how best to share and shape trustworthy AI policies that benefit individuals, communities and economies.

Furthermore, we are working on a user-friendly framework to classify AI systems. An overview of this ongoing work is available on the AI Wonk blog as well. This framework could help policymakers across the globe navigate the complex ethical, policy and regulatory considerations associated with AI systems.’

What are the biggest dilemmas you face when it comes to drafting regulations for AI?
‘Regulations need to be carefully considered, as they can constrain an emerging technology too early and too vigorously. Given the fast pace of AI developments, regulations should create a policy environment that is flexible enough to keep up with those developments. Moreover, they should promote innovation, yet bolster safety. Furthermore, they should provide legal certainty, which is a significant challenge. Our experience is that we should move slowly when it comes to regulating an emerging technology like AI. Regulations that are too strict can hinder innovations that can contribute to improving the safety of a technology and making it more trustworthy (for instance, bias detection).’

Can you tell me a bit more about the way the OECD envisions this?
‘The work of the OECD stresses the role of experimentation to provide controlled and transparent environments in which AI systems can be tested, and in which AI-based business models that could promote solutions to global challenges can flourish. Policy experiments can operate in ‘start-up mode’: experiments are first deployed, evaluated and modified, and can later be scaled up or down, or even abandoned, depending on the test outcomes.’

There are many different types of AI systems that raise very different policy and regulatory considerations and different opportunities and challenges, right?
‘Sure. Think about the differences between a water treatment plant control room, where AI controls the chemical treatment of the water, and the development of a self-driving vehicle. Or the different concerns when talking about predictive maintenance of machines in manufacturing and a video recommendation engine for children. All these examples raise very different policy considerations. Therefore, there is a need for a shared understanding of the levels of risk involved.’

Can you explain this?
‘Recently, Lord Tim Clement-Jones (member of the UK’s House of Lords and ONE AI member) wrote a blog post about the complexity of assessing the nature of AI applications and their contexts, and of translating the resulting risks into models of governance and regulation. To quote him: “If we aspire to a risk-based regulatory and governance approach we need to be able to calibrate the risk. This will in turn determine the necessary level of control.”
From this kind of calibration, a hierarchy of governance needs to follow, depending on the level of risk involved. Where the risk is lower, actors can adopt a flexible approach, such as a voluntary ethical code without a hard compliance mechanism. Where the risk is higher, governments will need to institute enhanced corporate governance, using business guidelines and standards with clear disclosure and compliance mechanisms.’
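As a purely hypothetical illustration of the risk-calibrated hierarchy Perset describes, the mapping from calibrated risk to governance mechanism might be sketched as follows; the thresholds, labels and example scores are invented for the sketch and come from no OECD or UK framework.

```python
# Purely illustrative sketch of a risk-calibrated governance hierarchy:
# a calibrated risk score determines the stringency of the mechanism.
# Thresholds, labels and example scores are invented for this sketch.

def governance_tier(risk_score: float) -> str:
    """Map a calibrated risk score in [0, 1] to a governance mechanism."""
    if risk_score < 0.3:
        return "voluntary ethical code, no hard compliance mechanism"
    if risk_score < 0.7:
        return "guidelines and standards with disclosure requirements"
    return "enhanced corporate governance with compliance mechanisms"

# Example systems from the interview, with made-up risk scores.
for system, risk in [("predictive maintenance in manufacturing", 0.2),
                     ("video recommendation engine for children", 0.8)]:
    print(f"{system}: {governance_tier(risk)}")
```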

To conclude, could you name some best policy practices?
‘Canada, for example, developed an Algorithmic Impact Assessment tool to assess the potential impact of algorithms on citizens. And Japan’s AI Utilization Guidelines provide methods to enhance the explainability of the outcomes of AI systems. These different approaches are useful as they provide policymakers with ideas, but we do not think there is one preferable way forward at this stage, because we find ourselves in the early stages of this technology’s development. Consequently, we cannot yet speak of ‘good’ or ‘best’ policy practice. Nonetheless, what makes me optimistic is that experts, policymakers and regulators all recognise that there are varying degrees of risk in AI systems. Moreover, there is a willingness to share experiences and learn from each other. That is something to cherish, and it tells me that we are on the right track.’