The Office of the Australian Information Commissioner Issues New Guidance for Commercial AI Use (22.10.24)

Key Highlights

  1. Compliance with Privacy Laws: Organizations must manage personal information in AI systems in line with Australian privacy laws, ensuring transparency through updated policies and clear disclosure of AI usage to users.
  2. Risk Mitigation and Governance: Businesses should adopt privacy-by-design principles, minimize input of sensitive data, and maintain oversight to mitigate risks, especially when using public or generative AI tools.
  3. AI and Privacy Best Practices: The OAIC recommends careful product selection, lawful handling of AI-generated personal data, and alignment with Australia’s Voluntary AI Safety Standards, reinforcing the importance of trust and accountability.

The Office of the Australian Information Commissioner (OAIC) has released new guidance on privacy considerations for businesses using commercially available AI products. Aimed at AI deployers, the guidance arrives at a crucial time, as Australia debates privacy law reform, and complements the Voluntary AI Safety Standard. It is intended to help organizations align with the Privacy Act and the Australian Privacy Principles (APPs), ensuring responsible AI deployment while maintaining consumer trust.

Takeaways From the Guidance

Privacy Obligations for AI Systems

According to the OAIC, privacy obligations apply whenever personal information is handled within AI systems. Whether it's personal information fed into an AI tool or data generated by the tool itself, organizations are responsible for ensuring this information is handled in accordance with Australia's Privacy Act. Notably, AI-generated data that identifies an individual constitutes personal information even if it is inaccurate, and must be treated accordingly. Privacy Commissioner Carly Kind emphasized, “Robust privacy governance and safeguards are essential for businesses to gain advantage from AI and build trust and confidence in the community.”

Due Diligence in AI Adoption

Businesses should conduct thorough due diligence when selecting AI tools. This includes evaluating the tool's suitability for its intended use, embedding human oversight, assessing privacy risks, and controlling access to sensitive information. As the OAIC points out, just because an AI product is available does not mean it should be used without proper safeguards. Privacy-by-design principles should be embedded throughout the lifecycle of the AI system.

Transparency in AI Use

Transparency remains a critical theme in the guidance. The OAIC advises businesses to clearly communicate their use of AI systems, particularly in privacy policies and collection notices, with specific disclosures on how AI interacts with personal data. Customers and external users should know when AI, such as a chatbot, is being used. This allows for informed consent and builds trust with the public.

“Our new guides should remove any doubt about how Australia’s existing privacy law applies to AI,” said Commissioner Kind. She further stressed that businesses must not only inform users about AI tools but also be transparent about how AI-generated data might affect them.


Compliance with Data Handling Standards (APPs)

AI systems that generate or infer personal information must comply with the data handling requirements set out in the Australian Privacy Principles (APPs). Specifically, APP 3 governs the collection of personal information, requiring that it be collected by lawful and fair means and only where reasonably necessary for the organization's functions or activities. Relatedly, under APP 6, personal information used by AI must be limited to the primary purpose for which it was collected, unless consent is obtained or an exception applies for a secondary use.

Risk Mitigation with Public AI Tools

One of the OAIC's strongest recommendations is that businesses avoid inputting personal or sensitive information into publicly available AI tools. Public-facing AI systems, such as chatbots, pose significant and complex privacy risks, and as a matter of best practice the OAIC recommends against entering personal information into them.

According to the OAIC, a governance-first approach is key to managing these risks and building public trust in AI: strong governance frameworks are essential for responsible AI use. Businesses can also align their AI systems with privacy law by following the Voluntary AI Safety Standard, which provides clear guidelines for safer AI deployment.

A Call for Privacy Reform

The OAIC is actively working to align its efforts with new privacy reforms being discussed in Parliament. These reforms focus on key areas like protecting children online, tackling doxxing, and introducing transparency rules for Automated Decision-Making (ADM) systems. Commissioner Kind highlighted the importance of adapting privacy protections to meet the evolving challenges posed by AI. She remarked, “With developments in technology continuing to evolve and challenge our right to control our personal information, the time for privacy reform is now.” The OAIC is pushing for a positive obligation on businesses, meaning they would be required to handle personal data fairly and responsibly, ensuring people’s privacy is treated with care and respect.

The new guidelines not only clarify how businesses can remain compliant but also stress the need for ongoing privacy governance. As AI continues to grow in power and accessibility, the OAIC’s guidance serves as a timely reminder that privacy must remain at the forefront of AI deployment strategies.


Implications for Businesses Using AI

This guidance reflects growing public concern about how AI systems handle personal data, particularly with generative AI tools becoming more common. It provides businesses with best practices to navigate compliance effectively and reduce privacy risks:

  • Select AI products based on privacy compliance and governance capabilities.
  • Maintain ongoing assurance processes to monitor AI usage and detect risks.
  • Ensure privacy protections align with the principles set out in Australia’s Privacy Act and APP guidelines.

The OAIC’s guidance serves as both a compliance framework and a roadmap for responsible AI use, encouraging businesses to adopt AI cautiously and with transparency, accountability, and fairness.

This development highlights that privacy remains a critical issue in the context of rapidly evolving AI technologies. As reforms are introduced, businesses will need to stay updated and proactively adjust their AI governance practices to meet new legal expectations.


References

  1. https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/guidance-on-privacy-and-the-use-of-commercially-available-ai-products
  2. https://www.oaic.gov.au/privacy/australian-privacy-principles/australian-privacy-principles-guidelines
  3. https://www.mi-3.com.au/21-10-2024/privacy-commissioner-provides-guidance-ai-governance-ahead-privacy-reform
  4. https://www.oaic.gov.au/news/media-centre/new-ai-guidance-makes-privacy-compliance-easier-for-business
  5. https://www.infosecurity-magazine.com/news/australia-privacy-guidance-ai/
  6. https://www.minterellison.com/articles/oaic-clarifies-artificial-intelligence-ai-privacy-obligations
  7. https://www.capitalbrief.com/briefing/privacy-regulator-issues-new-ai-guidance-1fa8708a-2ade-497e-9ba8-b2a051f5fcff/
  8. https://www.linkedin.com/posts/eddiemajor_oaic-checklist-privacy-considerations-for-ugcPost-7253900490772525056-vBws/