GOVERNING GENERAL-PURPOSE AI: EU RELEASES FINAL CODE FOR GENERAL PURPOSE AI MODELS AND COMPLIANCE ROADMAP (11.07.25)

In the global race to regulate algorithmic power, the EU is not simply legislating AI; it is codifying a governance philosophy. The distinction lies in its method: legal instruments grounded in democratic values, complemented by voluntary codes co-drafted with industry, academia, and civil society. With the publication of the final General-Purpose AI Code of Practice on 10 July 2025, alongside newly released Guidelines for GPAI providers, the European Union edges closer to enforcing the world’s most ambitious AI regulation. Far from being a mere procedural document, the Code reflects the EU’s deepening commitment to embedding fundamental rights, accountability, and legal certainty into the technological fabric of the AI age.

 

The Code of Practice arrives just weeks before the AI Act’s GPAI rules come into force on 2 August 2025 and serves as a robust compliance framework for AI developers. Shaped by input from over 1,000 stakeholders, including AI model developers, SMEs, academic researchers, AI safety professionals, civil society organizations, and rightsholders, the Code is a collaborative, multi-stakeholder response to growing concerns around transparency, copyright, and systemic risk in AI deployment.

A VOLUNTARY TOOL WITH REGULATORY BITE

Although voluntary in nature, the GPAI Code of Practice is no symbolic gesture. According to the Commission, signatories to the Code will enjoy a streamlined path to compliance under the AI Act, along with reduced administrative burdens and enhanced legal clarity. In contrast, companies that attempt to demonstrate compliance through ad hoc processes may face higher compliance costs and regulatory uncertainty.

Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, emphasized the Code’s dual value as both a legal and ethical instrument:

“Today’s publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent. Co-designed by AI stakeholders, the Code is aligned with their needs. Therefore, I invite all general-purpose AI model providers to adhere to the Code.”

 

THE THREE PILLARS: TRANSPARENCY, COPYRIGHT, AND SAFETY

At its core, the Code is structured around three foundational pillars: Transparency, Copyright, and Safety & Security, each addressing critical ethical, legal, and technical challenges of modern AI deployment.

  1. Transparency: Clarity from Complexity

One of the Code’s standout features is a user-friendly Model Documentation Form, which consolidates key information on model development, training data, capabilities, and intended uses. This not only helps downstream developers integrate AI models responsibly but also boosts public trust by demystifying how AI systems work.

Transparency obligations also align closely with the EU AI Act’s classification framework, which distinguishes between low-risk and high-risk systems and mandates proportionate disclosure accordingly.

  2. Copyright: Harmonizing Innovation and IP Rights

The Copyright chapter offers practical compliance pathways with EU copyright law, especially in response to the growing controversy over training AI models on copyrighted data without consent. With lawsuits (e.g., NYT v. OpenAI) making headlines globally, this section signals a turning point in how AI developers are expected to approach rights-cleared data.

By offering scalable implementation practices for copyright due diligence, the Code paves the way for ethical model training and responsible dataset governance — often the Achilles’ heel of large-scale generative AI models.

  3. Safety & Security: Addressing Systemic Risks

Acknowledging the power — and potential peril — of GPAI systems, the Safety and Security chapter deals with “systemic risks”, such as:

  • Amplification of disinformation
  • Erosion of fundamental rights
  • Dual-use concerns (e.g., chemical or biological weapon design)
  • Loss of human control over model outputs

This section is targeted specifically at the most advanced GPAI model providers, recognizing that while not every model poses these threats, the impact of those that do could be catastrophic. The Code calls for state-of-the-art risk mitigation, robust oversight, and alignment with red-teaming best practices.

 

TIMELINE OVERVIEW

The Commission’s Guidelines include a clear three-phase rollout with embedded compliance and enforcement windows, reflecting a careful balance between urgency and transition:

Key Milestone	Details
2 August 2025	Obligations apply to new GPAI models placed on the market; providers must cooperate with the AI Office, and those releasing systemic-risk models must notify it. Legacy models already on the market must document their plans for reaching compliance.
2 August 2026	The Commission gains enforcement powers, including fines of up to €35 million or 7% of global annual turnover for the most serious violations.
2 August 2027	Deadline for legacy models (placed on the market before 2 August 2025) to comply fully. Non-compliance beyond this date risks penalties.

 

Why it matters: the EU AI Act doesn’t just impose future rules; it is already shaping model design and governance today. This phased timeline gives providers room to adapt while maintaining protections, and it sets a global benchmark for ethical AI oversight.

 

IMPLICATIONS FOR GLOBAL AI GOVERNANCE

While the Code is EU-centric, its influence could ripple across global AI policy debates. Much as the GDPR became the de facto global privacy standard, the GPAI Code of Practice may evolve into a template for trustworthy AI governance, especially in jurisdictions where regulatory approaches remain fragmented or inconsistent.

By offering a concrete blueprint for ethical AI development, the EU is placing human rights, transparency, and democratic accountability at the heart of its AI future, a move that will likely inspire emulation and adaptation globally.

 

THE ROAD AHEAD: VOLUNTARY TODAY, ESSENTIAL TOMORROW?

While the Code is currently non-binding, it may not remain so. There’s growing speculation that future amendments to the AI Act could draw on lessons from the Code to establish mandatory baseline standards — particularly if voluntary adherence proves insufficient or inconsistent.

For responsible developers, however, the message is clear: adhere early, adapt proactively, and build AI that respects human values.

 

Read the full General-Purpose AI Code of Practice (PDF): Download here

Read the official press release: ec.europa.eu link