Google Releases a Policy Working Paper Titled “Generative AI and Privacy”

Authored by Dr. Yatin Kathuria

In the rapidly evolving landscape of Generative Artificial Intelligence (AI), Google is taking significant steps to address privacy concerns. Its policy working paper titled “Generative AI and Privacy”, released on 4 June 2024, emphasizes the importance of building AI responsibly while harnessing its full potential. In Section I of the working paper, Google acknowledges AI’s potential to benefit society while also recognizing the challenges that Generative AI (GAI) models can exacerbate. In Section II, Google provides insights into how and why Generative AI interacts with personal data, whereas in Section III, Google suggests steps that organizations and policymakers can take to apply basic privacy principles and protect personal data from being exploited by GAI.

Section III of the framework emphasizes:

Accountability – Organizations that develop or deploy GAI models should be accountable for explaining the privacy principles they follow and for maintaining an internal privacy program that documents their privacy practices.

Transparency – GAI can be challenging for even experts to understand, so it is vital to inform users about data practices, empowering them to make appropriate choices. Developers can provide transparency through multiple mechanisms, including privacy policies, terms of service, in-product notifications and disclosures, and centralized, easy-to-access resource hubs.

User Controls – An important part of responsible, human-centred AI is empowering users to make clear choices and giving them control over their data as appropriate. User control plays a key role in ensuring fairness and guaranteeing individuals’ rights to privacy and data protection.

Data Minimization – The responsible deployment of GAI includes reducing the amount of personal data needed across the lifecycle of a GAI system without reducing its quality. Setting out these minimization goals helps ensure that the data used is necessary and proportionate to the purposes for which it is processed.

Data Output Safeguards – The content generated by GAI models, whether text or images, may often contain inaccurate information. Such outputs, commonly referred to as hallucinations, can appear coherent but are not based on fact. The adoption of output safeguards by deployers and developers is one way that GAI can, over time, increasingly prevent the spread of private personal data or inappropriate, offensive, or harmful content.
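The general idea of an output safeguard can be illustrated with a minimal sketch: a post-processing filter that redacts common personal-data patterns from generated text before it reaches the user. The patterns and function names below are illustrative assumptions, not Google’s actual tooling; production systems rely on far more sophisticated classifiers.

```python
import re

# Illustrative patterns for two common kinds of personal data.
# Real safeguards would cover many more categories with higher accuracy.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace matched personal data with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text
```

Applied to a model response such as “Contact me at jane@example.com”, the filter would return “Contact me at [REDACTED EMAIL]”, keeping the surrounding text intact while stripping the personal data.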

Privacy Protections for Teens and Children – Corporations building GAI models that are available to minors should invest in AI education and literacy programs for this group. This includes explaining, in age-appropriate language, both the opportunities and limitations of the technology, how to interact with GAI tools, and how to use GAI to empower, assist, and inspire.

Google advocates for a “privacy-by-design” approach, embedding privacy protections from the very inception of AI development. To achieve these goals, Google offers four concrete recommendations in Section IV of the paper:

1. Balance Benefits and Risks, ensuring that privacy safeguards built on longstanding principles apply to GAI in ways that are proportional to its benefits and risks.

2. Focus on the Outputs, so that privacy standards cover the results of AI products used by businesses and consumers.

3. Protect Access to Publicly Available Data, avoiding restrictions on the processing of public data needed to train AI models.

4. Invest in Opportunity Research, seizing AI’s new privacy and security opportunities.

Google CEO Sundar Pichai once said that “AI is too important not to regulate, and too important not to regulate well”. Google has acknowledged the need for “proportional, flexible, and risk-based regulatory frameworks; constructive, open-minded dialogue between regulators and companies, such as through regulatory sandboxes; globally interoperable standards; and policies that promote progress while reducing risks of abuse.”

Google’s commitment, expressed through this working paper, to safeguarding privacy seeks to ensure that innovation does not come at the cost of user safety. By prioritizing such standards, Google paves the way for responsible AI that strengthens the protection of privacy and personal data.
