Introduction
In January 2023, the OECD established the “OECD.AI Expert Group on AI Incidents” to advance the development of a common AI incident reporting framework and an AI Incidents Monitor (AIM) to track the adverse impacts of AI systems. On May 6, 2024, the OECD released a paper defining AI incidents and related terms; it provides preliminary definitions and terminology to support the development and advancement of both initiatives. The report further aims to establish a clear and common understanding of AI incidents and related terms at the global level in order to manage and prevent risks associated with AI systems.
Understanding Key Terminologies
An event where the development or use of an AI system results in actual harm is termed an AI incident, while an event where the development or use of an AI system is potentially harmful is termed an AI hazard.
The report defines an AI incident as follows:
“An AI incident is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms:
(a) Injury or harm to the health of a person or groups of people;
(b) Disruption of the management and operation of critical infrastructure;
(c) Violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights;
(d) Harm to property, communities or the environment.”
The report defines an AI hazard as “an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to an AI incident”, i.e., to any of the harms listed under the definition of an AI incident above.
The report also proposes definitions for associated terminology, including what constitutes a serious AI incident, a serious AI hazard and an AI disaster.
Definition of a “Serious AI Incident”
“A serious AI incident is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms:
(a) the death of a person or serious harm to the health of a person or groups of people;
(b) a serious and irreversible disruption of the management and operation of critical infrastructure;
(c) a serious violation of human rights or a serious breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights;
(d) serious harm to property, communities or the environment.”
This working definition of a serious AI incident aligns with the definition proposed in the context of the EU.
Definition of a “Serious AI Hazard”
“A serious AI hazard is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to a serious AI incident or AI disaster, i.e., any of the following harms:
(a) the death of a person or serious harm to the health of a person or groups of people;
(b) a serious and irreversible disruption of the management and operation of critical infrastructure;
(c) a serious violation of human rights or a serious breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights;
(d) serious harm to property, communities or the environment;
(e) the disruption of the functioning of a community or a society and which may test or exceed its capacity to cope using its own resources.”
Definition of an “AI Disaster”
“An AI disaster is a serious AI incident that disrupts the functioning of a community or a society and that may test or exceed its capacity to cope, using its own resources. The effect of an AI disaster can be immediate and localised, or widespread and lasting for a long period of time.”
This definition of an AI disaster is based on the definitions of disaster provided by the United Nations Office for Disaster Risk Reduction.
Purpose and Objective of the Report
The OECD is advancing an incident reporting framework for AI incidents. A complementary project to the reporting framework is the AI Incidents Monitor (AIM), which began its first phase in 2023. The AIM documents negative outcomes and incidents related to AI and serves as an evidence base for policymakers, AI practitioners, and stakeholders worldwide. By monitoring AI incidents, AIM provides real-world data on the risks and hazards associated with AI technologies, helping to create policies for safer AI.
The OECD’s effort to define AI incidents and hazards is a step towards creating a safer and more trustworthy environment for the deployment of AI technologies. By providing a common language and framework for reporting, it allows for better management of AI-related risks and facilitates the sharing of knowledge and experiences globally. This initiative aligns with the OECD’s mandate to implement principles for trustworthy AI and supports the goal of ensuring that AI benefits society as a whole.
Source:
(1) Defining AI incidents and related terms | en | OECD. https://www.oecd.org/sti/defining-ai-incidents-and-related-terms-d1a8d965-en.htm.
(2) AI incidents Overview – OECD.AI. https://oecd.ai/en/network-of-experts/incidents.
(3) Artificial intelligence – OECD. https://www.oecd.org/digital/artificial-intelligence/.