OECD CALLS FOR PUBLIC CONSULTATION ON AI RISK THRESHOLDS

Key Highlights:

  1. Public Consultation on AI Risk Thresholds: The OECD is inviting public input on how to establish risk thresholds for advanced AI systems, focusing on approaches, opportunities, and limitations to ensure responsible AI development and deployment.
  2. Debate on Compute as a Risk Measure: While compute is currently used as a proxy to assess AI risks, there is ongoing debate about its adequacy. The consultation seeks views on alternative risk thresholds, such as those based on AI capabilities, societal impact, or ethical considerations.
  3. Strategic Implementation of Risk Thresholds: The OECD is exploring strategies for setting and enforcing AI risk thresholds. Stakeholders are encouraged to share their thoughts on effective approaches for measuring real-world AI systems and imposing requirements on systems that exceed established thresholds.

On 26 July 2024, the Organisation for Economic Co-operation and Development (OECD) took a significant step in addressing the potential risks associated with advanced AI systems: in collaboration with a wide range of stakeholders, it launched a public consultation exploring approaches, opportunities, and limitations for establishing risk thresholds for these technologies. This initiative reflects a growing global concern about how to effectively govern AI systems that are becoming increasingly integral to our lives, yet pose complex challenges due to their rapid evolution and potential for significant impact.

The Need for AI Risk Thresholds

As AI systems continue to advance, there is a pressing need to ensure that these technologies are developed and deployed responsibly. Risk thresholds are a key tool in this effort. According to the OECD, “risk thresholds refer to the values establishing concrete decision points and operational limits that trigger a response, action, or escalation.” These thresholds can be based on various factors, including technical aspects like error rates or computational power, as well as human values such as social or legal norms. The ultimate goal is to identify when AI systems present unacceptable risks or require enhanced scrutiny and mitigation measures.
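The OECD definition above can be made concrete with a minimal sketch: a risk threshold maps a measured value to a specific response or escalation. The metric, the threshold values, and the actions below are illustrative assumptions only, not proposals from the consultation.

```python
# Illustrative sketch of a risk threshold as a "decision point that
# triggers a response, action, or escalation" (OECD wording). The risk
# score, tier boundaries, and actions are hypothetical examples.

THRESHOLDS = [
    # (lower bound of the measured risk score, triggered response)
    (0.9, "halt deployment and escalate to regulator"),
    (0.7, "require enhanced scrutiny and mitigation"),
    (0.0, "routine monitoring"),
]

def triggered_response(risk_score: float) -> str:
    """Return the action for the highest tier the score reaches."""
    for bound, action in THRESHOLDS:  # tiers ordered highest first
        if risk_score >= bound:
            return action
    return "routine monitoring"  # fallback for scores below all tiers

print(triggered_response(0.95))  # -> halt deployment and escalate to regulator
```

The point of the sketch is that a threshold is operational, not descriptive: crossing it commits the operator to a predefined action rather than a case-by-case judgment.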

In recent years, the concept of setting risk thresholds has gained traction within both policy and technical communities. For instance, the May 2024 AI Seoul Summit saw 27 countries and 16 AI companies commit to setting risk thresholds, evaluation criteria, and mitigation approaches. Additionally, voluntary commitments by leading AI companies in the United States have emphasized public reporting on system capabilities and discussions of societal risks—steps that could be instrumental in establishing effective risk thresholds.

The Debate Over Compute Power as a Measure of Risk

One of the central questions in this public consultation is whether AI risk thresholds based on compute are sufficient to mitigate risks from advanced AI systems. Training compute, typically measured in FLOP (the total number of floating-point operations used to train a model), has been used as a proxy for AI system capabilities. For example, the European Union AI Act and the US Executive Order on AI introduced reporting and oversight requirements for models trained above certain compute thresholds (10^25 and 10^26 FLOP, respectively).
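To show how a compute-based proxy might be applied in practice, here is a minimal sketch using the widely cited approximation that training compute is roughly 6 × parameters × training tokens. The threshold value and model figures are illustrative assumptions, not a statement of any regulation's actual scope.

```python
# Hedged sketch: estimating training compute with the common 6*N*D
# heuristic and checking it against an illustrative threshold. All
# numbers here are examples for exposition only.

def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOP via the 6*N*D rule."""
    return 6.0 * n_params * n_tokens

def exceeds_threshold(n_params: float, n_tokens: float,
                      threshold_flop: float = 1e25) -> bool:
    """True if estimated training compute meets or crosses the threshold."""
    return estimate_training_flop(n_params, n_tokens) >= threshold_flop

# Example: a hypothetical 70-billion-parameter model trained on
# 15 trillion tokens lands at about 6.3e24 FLOP.
flop = estimate_training_flop(70e9, 15e12)
print(f"{flop:.2e}", exceeds_threshold(70e9, 15e12))
```

The simplicity of this check is precisely what makes compute attractive as a trigger for oversight, and precisely what critics question: the estimate says nothing directly about what the resulting system can do.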

However, there is ongoing debate about the adequacy of compute as a sole measure of risk. Some experts argue that such thresholds are arbitrary cut-offs and may lead to unintended consequences. As the OECD notes, “the text of some policy documents suggests that such thresholds may serve as temporary proxy measures for AI system capabilities until more specific capability-oriented thresholds can be identified and measured.”

Exploring Alternative Risk Thresholds

Beyond compute, the OECD is also interested in exploring other types of AI risk thresholds that could be valuable in managing advanced AI systems. These might include thresholds based on specific AI capabilities, societal impact, or ethical considerations. The consultation seeks input on what these alternative thresholds could be and how they might be effectively implemented.

One of the key challenges in establishing these thresholds is the dynamic nature of AI technology. As AI systems evolve, so too must the criteria used to assess their risks. This raises important questions about how governments and companies can stay ahead of these developments and ensure that their risk management strategies remain relevant.

Strategies for Setting and Implementing AI Risk Thresholds

Identifying and setting appropriate AI risk thresholds is only the first step. Measuring real-world systems against these thresholds and determining the appropriate response when thresholds are exceeded is equally crucial. The OECD consultation invites suggestions on the strategies and approaches that could be used to achieve this.

For systems that exceed established thresholds, there may be a need for specific requirements, such as increased oversight, transparency, or even limitations on deployment. The consultation is an opportunity for stakeholders to share their views on what these requirements should be and how they can be enforced.

The Path Forward: Considerations for the OECD and Collaborating Organizations

As the OECD and its partners work towards designing and implementing AI risk thresholds, they must consider a wide range of factors. This includes not only the technical and ethical aspects of AI risk management but also the broader societal implications. The consultation emphasizes that while voluntary commitments by companies are a positive step, they may not be sufficient on their own to prevent potential negative impacts.

To ensure a comprehensive approach, the OECD is seeking input from a diverse array of stakeholders. The results of this consultation will inform further research and analysis, helping to shape future policies and strategies for managing the risks associated with advanced AI systems.

Conclusion

The OECD’s public consultation on AI risk thresholds represents a critical opportunity for all interested parties to contribute to the discussion on how to best manage the risks posed by advanced AI systems. Whether you are an AI expert, a policymaker, or simply someone with a keen interest in the future of AI, your input can help shape the policies that will govern these powerful technologies. The deadline to participate is 10 September 2024, so don’t miss the chance to have your voice heard in this important conversation.

References:

  1. https://oecd.ai/en/site/ai-futures/discussions/risk-thresholds-consultation
  2. https://oecd.ai/en/wonk/seeking-your-views-public-consultation-on-risk-thresholds-for-advanced-ai-systems-deadline-10-september
  3. https://www.linkedin.com/posts/oecd-ai_seeking-your-views-public-consultation-on-activity-7227214009853186048-_w5V/