A total of 26 major tech companies (including Google, Microsoft, OpenAI, and Amazon) have signed the voluntary EU AI Code of Practice for General-Purpose AI (GPAI). The Code is intended to help companies comply with the AI Act, which begins enforcement on 2 August 2025. However, strong dissent from Meta and Belgium, along with broader concerns over copyright, legal uncertainty, and rushed implementation, highlights deep fractures in Europe’s AI governance push.
EU AI Code – Key Points
The European Commission Launches EU AI Code of Practice for GPAI
In July 2025, the European Commission introduced the General-Purpose AI Code of Practice as a voluntary but structured tool to help providers of GPAI systems comply with the AI Act’s obligations. Developed through a multi-stakeholder process led by independent experts, the Code consists of three chapters:
- Transparency and Copyright chapters: Apply to all GPAI providers and support compliance with Article 53 of the AI Act.
- Safety and Security chapter: Applies to a limited subset of providers whose models pose systemic risk, under Article 55.
26 Signatories, Including Major Players
The EU AI Code was signed by 26 companies, including global tech giants Google, Microsoft, Amazon, IBM, and OpenAI, and European AI leaders such as Mistral AI and Aleph Alpha. Their participation signals early support for aligning with the EU’s AI governance model, though notable gaps remain.
Google Supports the Code with Caveats
Google confirmed its signature in a blog post and media interviews. Despite supporting the final version, Kent Walker, President of Global Affairs at Alphabet, outlined major concerns:
- Legal exposure due to divergence from EU copyright norms
- Risk of innovation slowdowns due to unclear approval procedures
- Potential exposure of proprietary information under transparency clauses
Google estimates the Code, if implemented effectively, could contribute €1.4 trillion ($1.62 trillion) to the EU economy annually by 2034, equal to 8% of EU GDP.
Meta Declines to Sign, Calling Code Overreaching
Meta, developer of the Llama family of models, declined to sign. Joel Kaplan, Meta’s Global Affairs Chief, criticized the Code in a public statement, calling it legally vague and saying its measures go “far beyond the scope of the AI Act.” Meta warned the Code could stifle innovation and introduce legal instability in Europe’s AI ecosystem.
xAI Signs Only the Safety and Security Chapter
Elon Musk’s xAI, developer of Grok, signed only the Safety and Security chapter. Under the EU AI Code’s structure, xAI opted out of commitments on transparency and copyright, relying instead on direct compliance with the AI Act.
Belgium Formally Opposes the Code at EU Board Level
During internal deliberations of the AI Board, Belgium voted against the EU AI Code, citing key gaps:
- Insufficient copyright guarantees
- Weak compensation mechanisms for creators and rights holders
- Unclear opt-out clauses for data used in model training
Belgian Minister for Digitalization Vanessa Matz emphasized the need to revisit the Code, urging stronger protections for journalists, publishers, and producers. Matz stated, “This is not the end of the process,” signaling Belgium’s intention to push for a revision of the Code before the next legal review phase.
Rapid Timeline and Developer Pushback
Though the AI Act was passed in 2024, final developer guidance, including the GPAI Code, was only issued on 10 July 2025, giving tech firms less than a month to adapt. Critics have pointed to:
- A rushed rollout
- A lack of legal clarity
- Ambiguous obligations for systemic risk mitigation, energy reporting, and adversarial testing
Some firms are reportedly concerned that these requirements blur the line between best practice and legal obligation, especially for high-stakes systems.
AI Act Enforcement Begins – With Broader Scope Coming in 2026
As of 2 August 2025, enforcement of the AI Act begins for GPAI providers, requiring:
- Systemic risk assessments
- Adversarial testing
- Cybersecurity audits
- Incident reporting
- Energy usage disclosure
From 2026, new provisions will extend to high-risk AI applications in healthcare, policing, and critical infrastructure.
Europe vs. U.S. Regulatory Models
Europe’s approach (centered on transparency and rights protections) contrasts sharply with the U.S. AI Action Plan, which emphasizes light-touch regulation to boost innovation. This growing divergence has raised concerns about transatlantic tech friction, particularly if European firms face stricter operational burdens than their American rivals.
Why This Matters
The EU AI Code of Practice is Europe’s boldest attempt to shape global norms around general-purpose AI. While 26 companies have voluntarily signed on, key stakeholders (including Meta and the Belgian government) highlight the legal and competitive risks of Europe’s current trajectory. With enforcement underway and expansion planned in 2026, how the EU balances safety, innovation, and global alignment will define the next phase of the AI era.