Meta has declined to sign the EU's AI Code of Practice, citing legal uncertainty, regulatory overreach, and risks to innovation. The move escalates tensions between U.S. tech firms and EU policymakers ahead of the AI Act's phased enforcement.
Meta Declines to Sign the EU AI Code – Key Points
Decision announced 18 July 2025
On 18 July, Joel Kaplan, Meta's Chief Global Affairs Officer, confirmed in a public LinkedIn post that Meta would not sign the EU AI Code. He stated that the bloc is "heading down the wrong path on AI," warning that the code introduces legal ambiguities and extends well beyond the scope of the AI Act. Kaplan, who replaced Nick Clegg earlier this year, has longstanding experience in U.S. policy.
Voluntary code released July 2025
The European Commission finalized the code in early July with input from 13 independent experts. It sets compliance benchmarks for general-purpose AI (GPAI) models, covering transparency, dataset sourcing, content licensing, and safety obligations. Companies that sign may benefit from a reduced regulatory burden, while non-signatories such as Meta face enhanced scrutiny.
Meta’s stance: Innovation at risk
Meta's refusal aligns with a push by more than 40 companies, including SAP and Bosch, that requested a two-year delay to the code. Kaplan argued that its provisions would stifle frontier AI innovation and make Europe less hospitable to startups.
AI Act background
The AI Act, which entered into force in August 2024, prohibits unacceptable-risk practices such as social scoring and mandates strict standards for high-risk systems, including biometric identification, educational tools, and employment-related AI. Non-compliant companies face fines of up to €35 million or 7% of global annual turnover for the most serious violations.
GPAI compliance timeline
The Act's GPAI provisions take effect on 2 August 2025, applying to major AI developers such as Meta, OpenAI, Google, and Anthropic, with full compliance for pre-existing models expected by 2 August 2027. Despite the approaching deadline, Meta's refusal to sign the code highlights growing friction with EU regulators.
Regulatory pushback and enforcement outlook
The European Commission published updated implementation guidance on 19 July and reaffirmed that the timeline would remain unchanged. Spokesperson Thomas Regnier stressed that although the code is voluntary, it sets a "solid benchmark" for compliance. Non-participating companies such as Meta may face heightened oversight from the EU's AI Office.
Rival strategies and growing isolation
Meta became the first major U.S. tech firm to reject the framework. Meanwhile, Mistral AI signed on 17 July, OpenAI has pledged to sign, and Microsoft has signaled likely endorsement. The EU plans to publish the official list of signatories on 1 August 2025.
Political pressure and recent controversies
The code’s urgency follows public backlash against incidents like antisemitic responses generated by Grok, X’s chatbot. The Commission sees the code as a proactive enforcement mechanism to prevent similar scandals without launching full investigations.
Why This Matters
Meta's refusal to sign the EU AI Code sends a clear message of resistance to Europe's expanding digital oversight. The EU aims to become the global standard-setter for AI regulation, and Meta's defiance raises questions about the Act's enforceability. With enforcement deadlines looming, this decision could shape global compliance strategies and regulatory influence for years to come.