AUSTRALIA'S NATIONAL ASSURANCE FRAMEWORK FOR AI IN GOVERNMENT

Authored by Vanshika Jain

Introduction

The artificial intelligence (AI) landscape is evolving rapidly, and governments around the world are exploring ways to harness its potential while ensuring its safe and ethical use. To address the challenges AI poses in government deployment, procurement and development, the Australian government introduced the National Assurance Framework for AI in Government on 21 June 2024. This framework builds on Australia's AI Ethics Principles, introduced through DISR in 2019[1], which include the principles of:

  • Human, societal and environmental wellbeing
  • Human-centred values
  • Fairness
  • Privacy protection and security
  • Reliability and safety
  • Transparency and explainability
  • Contestability
  • Accountability

The National Assurance Framework is based on the belief that the use of artificial intelligence by governments carries risks that require careful monitoring, including legal, privacy and security risks, as well as ethical risks such as bias, lack of transparency and unclear accountability. The importance of mitigating these risks is also outlined in the Australian Government's interim response to the consultation on safe and responsible AI[2]. The framework is a strategic initiative to guide the use of AI across Australian government agencies while building public confidence and trust in the government's safe and responsible use of AI. It will assist governments to develop, procure and deploy AI in a safe and responsible way.

OBJECTIVES AND PRINCIPLES

The main objectives of the National Framework are to ensure that AI systems used by government are:

  1. Safe and Secure: AI systems must be resilient to attacks and failures, ensuring reliable and safe operation.
  2. Fair and Ethical: The implementation of artificial intelligence must meet ethical standards, avoid bias and ensure fairness in decision-making processes.
  3. Transparent and explainable: The work and decision-making processes of AI systems must be transparent and understandable to stakeholders.
  4. Accountable: There must be clear accountability mechanisms to solve problems caused by the use of artificial intelligence.

The National Framework is built on five cornerstones designed to provide comprehensive guidance on the implementation and management of AI systems in government:

GOVERNANCE

The Australian government has placed greater responsibility on governance to address the unique challenges posed by AI. The framework recognizes that meeting these multifaceted challenges requires effective management supported by differentiated technical, social and legal expertise. These areas include core management functions such as data and technology management, privacy, human rights, diversity and inclusion, ethics, cybersecurity, auditing, intellectual property rights, risk management, digital investment and procurement.

Adoption of AI should be policy-driven and supported by the growing need for, and adaptability of, the technology. For AI-based technology and governance to work together, existing decision-making and accountability structures need to be updated to account for the impact of AI. This provides agencies with multiple perspectives, clear lines of responsibility and transparency in the use of artificial intelligence.

At the agency level, leaders must commit to the safe and responsible use of AI and develop a positive AI risk culture so that open and proactive AI risk management becomes an integral part of daily work.

They must provide employees with the knowledge, training and resources to:

  • Comply with government goals
  • Use AI ethically and legally
  • Use judgment and discretion when using AI results
  • Identify risks, report and mitigate them
  • Consider requirements for testing, transparency and accountability
  • Support the community by transforming public services
  • Clearly explain AI results

DATA GOVERNANCE

Data governance comprises the policies, processes, structures, roles and responsibilities needed to manage data well, and is as important as any other governance process. It ensures responsible parties understand their legislative and administrative obligations and see the value it adds to their work and their government's objectives. Data governance is also an exercise in risk management, because it allows governments to minimise risks around the data they hold while gaining maximum value from it.[3]

The second cornerstone of the framework is data governance. The framework recognizes that data is the fuel of an AI system and that the quality of an AI system's output depends on the quality of the data provided to it. It is therefore important to create, collect, manage, use and maintain datasets that are authenticated, reliable, accurate and representative.

It is also necessary that ethical data procurement practices are followed, in line with data governance requirements and legislation.
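The data quality requirements described above can be illustrated with a minimal sketch. The checks, field names and completeness threshold below are hypothetical assumptions for demonstration; they are not prescribed by the framework.

```python
# Minimal sketch: basic dataset quality checks before data is used in an AI system.
# Field names and the completeness threshold are illustrative assumptions only.

def validate_dataset(records, required_fields, min_completeness=0.95):
    """Return a list of data-quality issues found in `records` (list of dicts)."""
    issues = []
    if not records:
        return ["dataset is empty"]
    # Completeness: each required field should be populated in most records
    for field in required_fields:
        present = sum(1 for r in records if r.get(field) not in (None, ""))
        completeness = present / len(records)
        if completeness < min_completeness:
            issues.append(f"field '{field}' only {completeness:.0%} complete")
    # Exact duplicates can distort a dataset's representativeness
    seen, duplicates = set(), 0
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen:
            duplicates += 1
        seen.add(key)
    if duplicates:
        issues.append(f"{duplicates} duplicate record(s) found")
    return issues
```

A real agency pipeline would add checks for accuracy against authoritative sources and demographic representativeness, but the pattern of automated, repeatable validation is the same.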

A RISK-BASED APPROACH

The third cornerstone is the adoption of a risk-based approach. The use of AI must be assessed and managed on a case-by-case basis to ensure its safe and responsible development, acquisition and deployment in higher-risk environments, while minimizing administrative overhead in lower-risk contexts. The level of risk associated with AI depends on specific characteristics, such as the business context and the characteristics of the data. Self-assessment models such as the NSW AI Assurance Framework are important for identifying, assessing, documenting and managing these risks. Risk management throughout the lifecycle of an AI system, including reviews between lifecycle stages, is critical, as is periodic monitoring of the AI system to ensure that it is working properly and to resolve issues.

Establishing control and feedback loops is essential to address emerging risks, unintended consequences or performance issues. It is also important to plan for the risks posed by outdated and legacy AI systems. Boards should consider oversight mechanisms in high-risk environments, including external or internal audit bodies, advisory bodies or AI risk committees to provide consistent expert advice and recommendations.
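A risk self-assessment of the kind described above can be sketched as a simple scoring exercise. The factors, weights and review tiers below are hypothetical assumptions for illustration, not the actual criteria of the NSW AI Assurance Framework or this framework.

```python
# Illustrative sketch of a risk self-assessment in the spirit of models like the
# NSW AI Assurance Framework. Factors, weights and thresholds are assumptions.

RISK_FACTORS = {
    "affects_individual_rights": 3,   # e.g. benefits, visas, policing decisions
    "uses_sensitive_data": 2,         # e.g. health or biometric data
    "fully_automated_decision": 3,    # no human in the loop
    "public_facing": 1,               # directly exposed to the public
}

def assess_risk(answers):
    """Score a proposed AI use case and map it to a review tier."""
    score = sum(w for factor, w in RISK_FACTORS.items() if answers.get(factor))
    if score >= 5:
        return score, "high - refer to an AI risk committee or external review"
    if score >= 2:
        return score, "medium - internal review and ongoing monitoring"
    return score, "low - standard governance applies"
```

The point of the sketch is the mechanism: documented questions, a repeatable score, and an escalation path that routes high-risk use cases to the oversight bodies the framework recommends.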

STANDARDS

The fourth cornerstone is establishing standards. The framework recognizes that government should adopt AI standards that align with its overall approach to the deployment of AI. The standards adopted should outline specifications and procedures, and lay down guidelines, to enable the safe, responsible, consistent and effective implementation of AI in a way that is consistent with the AI ethics principles. These standards should be adopted so that they can be applied in an interoperable manner. Current AI governance and management standards recognized in the framework include:

  • AS ISO/IEC 42001:2023 Information technology – Artificial intelligence – Management system
  • AS ISO/IEC 23894:2023 Information technology – Artificial intelligence – Guidance on risk management
  • AS ISO/IEC 38507:2022 Information technology – Governance of IT

PROCUREMENT

The fifth cornerstone is the procurement of AI systems. It provides that the contractual agreements and procurement documentation for an AI system should be carefully scrutinized to ensure that they are consistent with the AI ethics principles.

Other measures clearly outlined in the framework include:

  • Clearly established accountabilities
  • Transparency of data
  • Access to relevant information assets
  • Proof of performance testing throughout an AI system’s life cycle.

The framework also deems it necessary that AI contracts are adaptable to change, given the dynamic nature of the technology, and that there is scope for growth and change in the contracts. Due diligence in procurement plays a critical role in managing new risks, such as the 'black box' problem of transparency and explainability in AI systems like foundation models.

IMPLEMENTATION OF THE NATIONAL FRAMEWORK

Successful implementation of the National Framework requires a coordinated and collaborative approach involving multiple stakeholders, including government agencies, industry partners, academia and civil society. The framework describes a step-by-step implementation strategy to ensure a smooth transition and effective deployment of AI technologies:

  1. Initial Assessment and Planning: Conduct a detailed assessment of existing AI capabilities and identify areas for improvement. This includes mapping the current use of AI in government and setting clear goals and milestones.
  2. Pilots and Testing: Running pilot projects to test and validate AI applications in various government functions. These pilot projects are learning opportunities to improve the framework and respond to potential challenges.
  3. Scaling and Integration: Scale-up of successful pilot projects and integration of AI systems into broader government operations. In this phase, interoperability and compatibility with the existing IT infrastructure is ensured.
  4. Monitoring and Evaluation: Establish ongoing monitoring and evaluation mechanisms to assess the performance and impact of AI systems. This includes regular audits, performance reviews and feedback loops to ensure continuous improvement.
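The monitoring-and-evaluation step above can be sketched as a periodic check of live model performance against a baseline. The metric (simple accuracy) and the degradation threshold are illustrative assumptions; an agency would choose measures appropriate to its use case.

```python
# Sketch of ongoing monitoring: compare a review period's accuracy against a
# baseline and flag the system for review when it degrades. The 5% threshold
# and the use of plain accuracy are illustrative assumptions only.

def evaluate_period(predictions, actuals):
    """Accuracy of the model's outputs over one review period."""
    correct = sum(1 for p, a in zip(predictions, actuals) if p == a)
    return correct / len(actuals)

def monitoring_report(baseline_accuracy, period_accuracy, max_drop=0.05):
    """Flag the system for review if accuracy drops by more than `max_drop`."""
    drop = baseline_accuracy - period_accuracy
    status = "review required" if drop > max_drop else "ok"
    return {"baseline": baseline_accuracy, "current": period_accuracy,
            "drop": round(drop, 3), "status": status}
```

Feeding such reports into the audits and feedback loops the framework calls for turns monitoring into a routine, documented activity rather than an ad hoc one.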

CONCLUSION

The National Assurance Framework for AI in Government is an important milestone in Australia's journey towards the responsible and ethical deployment of AI. Through a structured approach to governance, risk management and public participation, the framework aims to increase trust in AI technologies. Successful implementation will not only increase the efficiency and effectiveness of government operations, but also ensure that AI is used in a manner that adheres to ethical standards and human rights. As AI technology evolves, the National Framework will act as a dynamic and adaptive tool that can respond to emerging challenges and opportunities. This sets a precedent for other countries seeking to ensure the responsible and ethical use of AI in government, ultimately contributing to the global debate on AI governance and assurance.

[1] Australia’s Artificial Intelligence Ethics Framework | Department of Industry Science and Resources

[2] Supporting responsible AI: discussion paper – Consult hub (industry.gov.au)

[3] https://www.finance.gov.au/sites/default/files/2024-06/National-framework-for-the-assurance-of-AI-in-government.pdf