The EU AI regulation (the AI Act), together with the Digital Services Act (DSA), forms the world’s first enforceable legal framework focused on managing systemic risks from both online platforms and general-purpose AI (GPAI) models. These laws promote transparency, protect fundamental rights, and anchor the EU’s leadership in global digital governance.
EU AI Regulation – Key Points
DSA Scope and Responsibilities
- The DSA applies to Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) with 45 million or more average monthly active users in the EU.
- These platforms must identify and reduce systemic risks tied to their algorithms, content systems, and service design.
- Under the DSA, platforms must assess and mitigate four main categories of systemic risk:
  - The spread of illegal content
  - Threats to fundamental rights
  - Disruptions to civic discourse and elections
  - Harms affecting minors, mental health, and vulnerable groups (e.g. gender-based violence)
Two Types of Systemic Risk
- Impact-based risks: Widespread effects on society—like viral disinformation influencing elections.
- System-based risks: Rooted in platform infrastructure—such as AI-driven recommendations amplifying harm or faulty moderation systems.
EU AI Regulation Sets a Global Trend
- Nations like Singapore, Brazil, Taiwan, and the UK are adopting similar systemic risk rules modeled on the EU AI regulation, extending its influence beyond Europe.
What Counts as “Illegal Content”?
- Platforms face legal uncertainty: should they act on content that is illegal across all EU countries, or only in some? This ambiguity adds complexity to DSA enforcement.
The “Dark Brussels Effect” on Civic Discourse
- There is growing concern that the EU’s rules could lead tech firms to concentrate compliance resources on Europe, while under-resourcing election-related risks in more fragile democracies elsewhere.
Protecting Rights as a Compliance Foundation
- Article 34(1)(b) of the DSA identifies core rights at risk: freedom of expression, human dignity, privacy, non-discrimination, children’s rights, and consumer protection.
- Platforms and developers are urged to use saliency assessments to determine which rights are most at risk, weighed by severity, scope, and the potential for remedy (a minimal scoring sketch follows this list).
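As an illustration only, the snippet below sketches how such a saliency screen might be organized: each right is scored on severity, scope, and remediability, and the results are ranked. The rights listed, the 1–5 scores, and the equal weighting are hypothetical assumptions, not values taken from the DSA or any official guidance.

```python
# Hypothetical saliency screen: score each fundamental right on severity,
# scope, and remediability (1 = low concern, 5 = high concern), then rank
# the rights so mitigation effort goes to the most salient risks first.
# All names, scores, and the equal weighting are illustrative assumptions.

RIGHTS_SCORES = {
    # right: (severity, scope, remediability)
    "freedom of expression": (3, 4, 2),
    "privacy": (4, 3, 4),
    "non-discrimination": (4, 2, 3),
    "children's rights": (5, 2, 5),
}

def saliency(scores: tuple) -> float:
    """Average the three criteria into a single saliency score."""
    return sum(scores) / len(scores)

ranked = sorted(RIGHTS_SCORES.items(), key=lambda item: saliency(item[1]), reverse=True)
for right, scores in ranked:
    print(f"{right}: saliency {saliency(scores):.1f}")
```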
Recognizing Intersectional Harms
- Online gender-based violence disproportionately affects marginalized groups, especially women who also belong to ethnic minority, LGBTQI+, or disabled communities.
- These harms often cut across multiple categories of systemic risk under the DSA, requiring more inclusive assessment practices.
Agreement on Using International Human Rights Frameworks
- Industry leaders and civil society groups agree: systemic risk work under the EU AI regulation and DSA should follow the UN Guiding Principles on Business and Human Rights (UNGPs).
- Transparency, inclusive participation, and public reporting are essential for trustworthy compliance.
AI Guidelines and GPAI Model Obligations under EU AI Regulation
Scope of Regulation
- Applies to general-purpose AI models trained with more than 10²³ FLOPs of compute and capable of generating text, images, video, or audio (a rough compute estimate is sketched below).
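For a rough sense of scale, the sketch below estimates training compute with the widely used approximation of about 6 FLOPs per parameter per training token and compares the result to the 10²³ FLOP threshold. The heuristic and the example model sizes are assumptions for illustration; the regulation itself does not prescribe an estimation method.

```python
# Back-of-envelope check against the 10^23 FLOP indicative threshold for
# general-purpose AI models. Training compute is estimated with the common
# ~6 * parameters * training-tokens heuristic for dense transformers
# (an illustrative assumption, not a method prescribed by the regulation).

GPAI_THRESHOLD_FLOPS = 1e23

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    return 6 * n_parameters * n_training_tokens

for params, tokens in [(7e9, 2e12), (70e9, 15e12)]:
    flops = estimated_training_flops(params, tokens)
    status = "above" if flops > GPAI_THRESHOLD_FLOPS else "below"
    print(f"{params:.0e} params, {tokens:.0e} tokens -> {flops:.1e} FLOPs ({status} threshold)")
```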
Compliance Timeline
- August 2, 2025: Obligations begin.
- August 2, 2026: EU Commission begins enforcement.
- August 2, 2027: Models placed on the EU market before August 2, 2025 must also comply by this date (see the sketch after this list).
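The timeline above can be read as a simple rule keyed to a model's market-placement date; the helper below is a hypothetical illustration of that reading, not an official determination tool.

```python
# Illustrative mapping from a GPAI model's EU market-placement date to the
# applicable compliance deadline described in the timeline above.
from datetime import date

OBLIGATIONS_START = date(2025, 8, 2)   # new GPAI obligations apply
LEGACY_DEADLINE = date(2027, 8, 2)     # deadline for models already on the market

def compliance_deadline(placed_on_market: date) -> date:
    if placed_on_market >= OBLIGATIONS_START:
        # Models placed on the market from August 2, 2025 must comply from placement.
        return placed_on_market
    # Models placed on the market earlier have until August 2, 2027.
    return LEGACY_DEADLINE

print(compliance_deadline(date(2024, 11, 1)))   # 2027-08-02
print(compliance_deadline(date(2026, 1, 15)))   # 2026-01-15
```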
Transparency Requirements
- Developers must prepare technical documentation covering training data, compute usage, energy consumption, limitations, and risk mitigations, and make it available to the AI Office and downstream providers.
- This level of detail is a cornerstone of the EU AI regulation’s transparency standards; a minimal sketch of such a documentation record follows.
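As one way to keep those fields organized internally, the sketch below defines a simple documentation record. The field names, values, and the dataclass itself are hypothetical assumptions, not an official template from the AI Office.

```python
# Hypothetical internal record for the documentation fields listed above.
# Field names and example values are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class GPAIModelDocumentation:
    model_name: str
    training_data_summary: str        # provenance and curation of training data
    training_compute_flops: float     # total training compute used
    energy_consumption_kwh: float     # estimated energy used for training
    known_limitations: list = field(default_factory=list)
    risk_mitigations: list = field(default_factory=list)

doc = GPAIModelDocumentation(
    model_name="example-model",
    training_data_summary="Web text plus licensed corpora (illustrative).",
    training_compute_flops=8.4e22,
    energy_consumption_kwh=1.2e6,
    known_limitations=["May produce inaccurate or biased outputs"],
    risk_mitigations=["Safety fine-tuning", "Content filtering at inference time"],
)
print(doc.model_name, f"{doc.training_compute_flops:.1e} FLOPs")
```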
Copyright Rules
- Providers must follow EU copyright law—even if their AI systems are trained or deployed outside the EU.
Systemic Risk Controls
- GPAI models classified as posing systemic risk must:
  - Submit internal risk reports
  - Notify the EU AI Office
  - Implement robust cybersecurity protections
- These safeguards reinforce the systemic risk architecture at the core of the EU AI regulation (Reuters, July 2025).
Open-Source Clarification
- Open-source models are exempt from some documentation rules—but not from systemic risk duties under the EU AI regulation.
Who Is Considered a Provider?
- Entities that develop GPAI models and place them on the EU market, including via APIs, are legally classified as providers; downstream actors that substantially fine-tune or modify a model can take on provider obligations as well.
The GPAI Code of Practice (Effective July 2025)
- A voluntary guide created to help companies align with the EU AI regulation ahead of formal enforcement.
- Focused on three key areas:
  - Transparency
  - Copyright
  - Safety & Security
- Signing the Code grants a “presumption of compliance” during the first year.
- Meta declined to sign due to legal uncertainties. Microsoft, OpenAI, Mistral, and Aleph Alpha are expected to participate.
- The AI Office will provide direct support to signatories on how to meet expectations under the EU AI regulation.
Industry Pushback vs. Commission Determination
- Companies like Airbus and ASML pushed for a two-year delay in implementing the EU AI regulation’s obligations, citing complexity and costs.
- The European Commission refused, emphasizing the urgent need to secure public trust and protect fundamental rights.
Why This Matters:
The EU AI regulation and DSA together establish a blueprint for responsible digital governance. They place legal obligations on tech companies to assess and reduce systemic risks, protect democratic processes, and uphold fundamental rights. This legal architecture is setting the standard for tech accountability around the world.