26 TECH GIANTS SIGN EU’S AI CODE OF PRACTICE, BUT NOT EVERYONE’S ON BOARD (06.08.2025)

Authored by Ms. Vanshika Jain

At a time when AI governance appears to be a patchwork of vague intentions and voluntary principles, the European Union has once again emerged as the de facto global leader. 26 companies, including major players like Amazon, Google, Microsoft, IBM, and OpenAI, have signed the European Commission’s AI Code of Practice. Heralded as the most significant soft-law development following the EU AI Act, this voluntary framework is fast emerging as a defining benchmark for responsible AI alignment globally. But conspicuously, key holdouts like Apple and Meta have declined to sign, prompting fresh scrutiny over industry-wide commitment to AI safety.

The updated list of signatories, released on August 1, includes not just Big Tech but also rising AI companies and research institutes: Accexible, AI Alignment Solutions, Aleph Alpha, Almawave, Amazon, Anthropic, Bria AI, Cohere, Cyber Institute, Domyn, Dweve, Euc Inovação Portugal, Fastweb, Google, Humane Technology, IBM, Lawise, Microsoft, Mistral AI, Open Hippo, OpenAI, Pleias, Re-AuditIA, ServiceNow, Virtuo Turing, Writer, and xAI (under the Safety and Security chapter).

From a regulatory and legal research perspective, this development represents a progressive convergence between law, ethics, and computational accountability. The AI Code of Practice is not binding in the way the EU AI Act will be, but its significance lies precisely in its voluntary nature: it marks a proactive, collective step toward AI alignment before compulsion becomes inevitable.

Signatories commit to specific measures around AI safety testing, red-teaming, watermarking, and post-deployment monitoring, especially for general-purpose AI (GPAI) models. For many legal scholars and governance advocates, this framework provides much-needed scaffolding while regulatory enforcement mechanisms catch up.

Yet, the fact that Apple and Meta have declined to sign cannot be overlooked. Meta has publicly stated that it will not be joining the initiative at this time, raising questions about divergent strategic priorities or concerns about transparency requirements. Apple, on the other hand, remains silent. Both companies’ absence threatens to dilute the collective credibility of the initiative. If some of the largest AI actors remain outside such voluntary frameworks, the burden may fall disproportionately on those who choose responsibility.

Still, the Code is significant for creating a harmonized ecosystem of trust and innovation. Notably, the Code is aligned with international AI safety principles and bolsters global efforts to pre-empt catastrophic risks from misaligned AI. It builds on commitments made at the AI Safety Summit at Bletchley Park and extends the momentum of the GPAI regulatory conversation in constructive ways.

For legal scholars and policy thinkers, this signals an encouraging shift. The EU’s approach of pairing hard laws like the AI Act with complementary soft norms may prove to be a durable model in a fragmented regulatory landscape. It also invites reflection on how law can play a more anticipatory role in the age of exponential technologies.

With the U.S. still navigating its patchwork of AI regulations and Asia exploring divergent models, the EU’s Code of Practice could serve as a global reference point. However, its success will hinge not just on the number of signatories, but on the quality and sincerity of implementation.

The next frontier lies in making these voluntary commitments enforceable through audits, transparency disclosures, and civil society oversight. And perhaps more crucially, in persuading the remaining outliers that alignment is not just an ethical responsibility, but a strategic necessity for long-term legitimacy in the AI era.

As the AI race accelerates, the choices being made today, by companies, regulators, and the public, will shape the boundaries of safety and innovation tomorrow. The EU’s AI Code of Practice may not be perfect, but it is a critical leap forward in creating an accountable and trustworthy AI ecosystem.

REFERENCES

https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai