AUSTRALIAN DIGITAL TRANSFORMATION AGENCY (DTA) INTRODUCES AI POLICY FOR GOVERNMENT

Authored by Ms. Vanshika Jain

Key Highlights:

  1. AI Policy Implementation: The Australian Digital Transformation Agency (DTA) published its new policy on the responsible use of AI within government operations on 15 August 2024. Due to take effect on 1 September 2024, the policy marks a milestone in how the Australian Government approaches the governance of AI.
  2. Core Framework: The policy is built around the “Enable, Engage, and Evolve” framework, which sets out the principles government departments must follow when adopting and using AI.
  3. Transparency and Accountability: The policy embeds transparency and accountability in government agencies’ use of AI. It requires each agency to designate accountable officials responsible for its adoption and use of AI, and to be open about any associated concerns.

INTRODUCTION

Australia has positioned itself as a leader in AI governance, with a range of policies and guidelines aimed at the responsible use of AI. With the new “Policy for the Responsible Use of AI in Government”, published by the Digital Transformation Agency (DTA), it has stepped up further, working towards the ethical and responsible use of AI-based technology in government agencies. The policy takes effect on 1 September 2024.

For the purposes of this policy, the Australian Government has adopted the definition of AI provided by the Organisation for Economic Co-operation and Development (OECD). According to the OECD, an AI system is a machine-based system that, for explicit or implicit objectives, processes inputs to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. These systems vary in their levels of autonomy and adaptability after deployment.

AIM OF THE POLICY

The Policy for the Responsible Use of AI in Government aims to ensure that government plays a leadership role in embracing AI for the benefit of Australians, while ensuring its safe, ethical and responsible use in line with community expectations. The adoption of AI technology and capability varies across Australian agencies. The policy therefore provides all government agencies with common standards and baselines, unifying the government’s approach to the transparent and responsible use of AI.

THREE PILLARS OF THE POLICY

The newly introduced AI policy is structured around the “Enable, Engage, and Evolve” framework. This approach is designed to provide a comprehensive guide for government agencies on how to integrate AI into their operations, while maintaining public trust and ensuring that the use of AI aligns with the community’s expectations.

  • The first pillar, Enable, focuses on the benefits agencies can realize through the safe application of AI, such as improved productivity, better decision-making, and more effective delivery of policy outcomes. Agencies are called upon to establish clear accountability for the adoption and use of AI, and are expected to designate accountable officials within 90 days of the policy’s effective date, creating clear lines of responsibility for the deployment of AI across the public service.
  • The second pillar, Engage, focuses on protecting Australians from harm through the responsible use of AI, with strategies that mitigate risk and provide assurance on transparency and explainability. Each agency must publish a Transparency Statement on its adoption and use of AI within six months of the policy’s effective date. The statement is to be updated regularly to reflect significant changes, in support of openness and accountability.
  • The third pillar, Evolve, recognizes that technology develops rapidly and that the government’s handling of AI must therefore remain agile and flexible. AI applications must be reviewed and assessed continuously so that the government stays responsive to new developments, and feedback mechanisms built into governance arrangements will allow policies and practices to keep pace with rapidly emerging technological innovations.

IMPLEMENTATION AND SCOPE OF THE POLICY

Starting 1 September 2024, this policy applies to all non-corporate Commonwealth entities (NCEs), as defined by Australia’s Public Governance, Performance and Accountability Act 2013. These entities are required to adhere to the guidelines set out in the policy. However, the policy includes specific carve-outs for national security: it does not apply to the use of AI within the Defence portfolio or the ‘national intelligence community’ (NIC) as defined by the Office of National Intelligence Act 2018.

The AI policy has been designed to complement and strengthen existing frameworks, legislation, and practices rather than duplicating them. This integrated approach ensures that government agencies not only comply with AI-specific guidelines but also meet their broader obligations under existing laws and frameworks, such as the APS Code of Conduct.

PRINCIPLES UNDERPINNING THE POLICY

The Australian Government’s AI Policy is underpinned by several key principles aimed at ensuring the responsible and effective use of AI within government operations:

  1. Safe Engagement with AI: The policy emphasizes the safe engagement with AI technologies to enhance productivity, improve decision-making, achieve better policy outcomes, and optimize government service delivery for the benefit of all Australians.
  2. Accountability and Ownership: Australian Public Service (APS) officers must be able to explain, justify, and take ownership of the advice and decisions made when utilizing AI. This principle ensures that AI-driven decisions are transparent and accountable.
  3. Clear Accountability: There must be clear lines of accountability for the adoption and use of AI within government agencies. This principle ensures that responsibilities are well-defined and understood across the APS.
  4. Building Long-Term AI Capability: The policy advocates for the development of long-term AI capabilities within government agencies, ensuring that the APS is equipped to effectively manage and utilize AI technologies in the future.

CONCLUSION

As AI evolves, so too will the government’s policy on it, keeping Australia at the leading edge of technological innovation while upholding the highest standards of governance and public trust. This policy marks the first step in a journey that will see AI deployed not only to make government operations more efficient, but also to improve the quality of life of citizens across the board. Its structured path of enablement, responsible engagement, and continuous evolution offers a model that governments around the world could follow to embrace AI while curbing its risks.

References

  1. https://www.digital.gov.au/policy/ai/policy
  2. https://www.dta.gov.au/blogs/responsible-choices-new-policy-using-ai-australian-government#:~:text=To%20protect%20Australians%20from%20harm,of%20the%20policy%20effect%20date.
  3. https://www.digital.gov.au/policy/ai/aim