EU MOVES TO EASE AI AND DATA RULES, DELAYS AI ACT TO 2027 (21.11.25)

The European Commission has announced its “Digital Omnibus” package, proposing major changes to EU rules on AI, data protection, and online privacy. Alongside this, the full enforcement of the AI Act has been pushed to 2027, prompting optimism from industry and concern from privacy advocates.

The European Commission has unveiled a sweeping reform package aimed at reshaping the EU’s digital regulation landscape. Branded as the “Digital Omnibus”, the proposal seeks to streamline multiple laws governing artificial intelligence, data protection, cybersecurity, and online platforms. While Brussels argues that the package modernises Europe’s digital rulebook and reduces compliance burdens, critics warn that it may dilute some of the world’s strongest privacy protections, potentially marking the EU’s biggest regulatory shift in a decade.

The announcement comes alongside another major development: the full implementation of the EU AI Act has now been delayed to 2027, giving companies more time to comply with obligations for high-risk AI systems. Together, these moves signal a clear recalibration of Europe’s approach to governing emerging technologies.

 

What the Digital Omnibus Proposes

At its core, the Digital Omnibus consolidates and amends several existing laws — including the AI Act, the GDPR, the e-Privacy Directive, the Data Act, and cybersecurity laws such as NIS2. The Commission claims this will “reduce regulatory fragmentation” and bring consistency across digital rules.

One of the most controversial elements is a set of changes to the GDPR. The proposal introduces a more permissive interpretation of “legitimate interest” as a lawful basis for data processing, allowing companies greater access to personal and even sensitive data for AI training and algorithmic optimisation. Under the new framing, organisations may not always need explicit consent if they can demonstrate “proportional safeguards” and “minimal risk to data subjects.”

The Omnibus also narrows the scope of what qualifies as personal data. Pseudonymised datasets — which can still be re-identified, but only with the help of additional information — may be treated as non-personal data when shared with parties who lack the means to re-identify individuals. Privacy experts argue that this redefinition could create a significant loophole, weakening a central pillar of the GDPR.
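
To illustrate the concern, consider a minimal, purely hypothetical sketch in Python (the datasets, field names, and values below are invented for this example and are not drawn from the proposal): replacing a direct identifier with a hash still leaves quasi-identifiers in each record, and a party holding a suitable auxiliary dataset can link on those attributes to recover the identity behind the hash.

    # Hypothetical illustration of a linkage attack on pseudonymised data.
    import hashlib

    # "Pseudonymised" records: the email address has been replaced with a hash,
    # but quasi-identifiers (postcode, birth year) remain in the clear.
    pseudonymised = [
        {"id": hashlib.sha256(b"alice@example.com").hexdigest(),
         "postcode": "1050", "birth_year": 1987},
        {"id": hashlib.sha256(b"bob@example.com").hexdigest(),
         "postcode": "75011", "birth_year": 1992},
    ]

    # Auxiliary dataset held by another party (e.g. a public register or a
    # previously leaked database) pairing identities with the same attributes.
    auxiliary = [
        {"email": "alice@example.com", "postcode": "1050", "birth_year": 1987},
    ]

    # Linkage: match on the quasi-identifiers to recover who is behind each hash.
    for record in pseudonymised:
        for person in auxiliary:
            if (record["postcode"], record["birth_year"]) == (person["postcode"], person["birth_year"]):
                print(f"Record {record['id'][:10]}... re-identified as {person['email']}")

The sketch is deliberately crude, but it captures why critics argue that whether data counts as “personal” cannot depend only on what the receiving party currently holds: the auxiliary information needed for re-identification may already exist elsewhere.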

Another proposal targets the long-criticised cookie consent regime. Users may soon be able to reject cookies with a single click, with their choice respected for six months. The Commission says this will reduce “consent fatigue,” but digital rights advocates fear it may give platforms more room to nudge users into accepting data tracking under “legitimate interest.”

The package also introduces a single incident-notification mechanism for cybersecurity breaches, replacing the current multi-law reporting requirements. The Commission believes this will reduce compliance costs, particularly for small and mid-sized enterprises that struggle with overlapping obligations.

According to the Commission’s internal estimates, the reforms are expected to save European businesses up to €5 billion by 2029, primarily through simplified compliance, reduced record-keeping, and harmonised reporting obligations.

 

Delay in EU AI Act Implementation

Alongside the Omnibus, the European Commission confirmed that the full operationalisation of the AI Act will be postponed to December 2027. Originally, the high-risk requirements were expected to come into force in August 2026.

The delay effectively grants companies, especially those developing high-risk systems in sectors such as healthcare, finance, policing, border control, and critical infrastructure, more time to adapt to obligations around risk management, human oversight, transparency, and documentation.

The Commission says the extension is necessary because organisations need more time to deploy conformity assessment mechanisms and because enforcement bodies across Member States must still build institutional capacity. However, critics argue that the delay could weaken deterrence against unsafe AI deployment in high-risk contexts.

 

Supporters Say the EU Is Cutting Red Tape

Business groups and major technology companies have largely welcomed the proposals. Lobbying groups such as CCIA Europe describe the Omnibus as a “promising step toward regulatory coherence.”

Many corporate stakeholders argue that the EU’s existing digital laws have become too dense and burdensome, especially when startups face overlapping audits, multi-layered compliance documentation, and inconsistent enforcement between Member States. They view the Omnibus as an opportunity to bring legal predictability and reduce operational costs.

Some policy analysts argue that the reforms will help Europe compete more effectively with the United States and China in the global AI ecosystem. They see streamlined data access rules as essential for enabling European AI companies to develop competitive models without being disadvantaged by strict GDPR interpretations.

 

Critics Warn of a ‘Rollback of Digital Rights’

The pushback has been immediate and intense. Civil society organisations, digital rights groups, and privacy scholars have described the Digital Omnibus as a “regulatory rollback” and even a “dismantling of GDPR by stealth.”

A coalition of more than 120 rights organisations warned that allowing companies to process sensitive and personal data for AI training under “legitimate interest” could undermine fundamental rights guaranteed under the EU Charter of Fundamental Rights, particularly the rights to privacy and data protection.

Critics argue that narrowing the definition of personal data is particularly dangerous in the era of generative AI and powerful re-identification technologies. They point out that pseudonymisation provides limited protection when machine-learning systems can cross-reference datasets to infer identities.

There is also unease about the political context. Some analysts warn that weakening GDPR-style protections in Europe could influence other jurisdictions, especially countries that rely on the EU as a model for stronger data governance. Others fear the reforms appear timed to placate large technology companies amid growing geopolitical tensions and shifting global alliances.

 

A Legal Balancing Act with Global Consequences

The legal debate now centres on three key questions:

  1. Can “legitimate interest” truly serve as a lawful basis for large-scale AI training?
    The GDPR has traditionally interpreted this basis narrowly; expanding it risks undermining the principle of informed, freely given consent.
  2. Will the AI Act’s delayed enforcement weaken its ability to govern high-risk systems?
    Critics say any delay risks creating a regulatory vacuum as AI deployment accelerates.
  3. Does redefining personal data contradict the purpose and structure of EU data protection law?
    The GDPR is built on a broad definition of personal data precisely to prevent loopholes.

Regardless of the outcome, the Digital Omnibus marks a turning point. It could reshape not only EU digital regulation but also global norms around privacy, AI governance, and data-sharing frameworks.

 

What Happens Next?

The proposal will now move to the European Parliament and the Council of the EU for negotiations. Significant amendments are expected, particularly from lawmakers concerned about fundamental rights implications.

Meanwhile, industry groups are expected to lobby for broader simplification, while privacy advocates are preparing legal analyses, public campaigns, and parliamentary interventions to prevent the erosion of GDPR protections.

For policymakers, the challenge lies in striking a balance between regulatory agility and fundamental rights protection, a balance that will shape Europe’s digital future and influence global governance models.