EU Divided by the AI Act: Technology Wants to Outpace the Law


The EU’s Struggle with AI Regulation: Balancing Innovation and Safety

Although the European Union’s landmark AI Act officially entered into force in August 2024, lawmakers are still actively working to refine and adjust artificial intelligence regulations. Balancing innovation with public safety is proving to be a difficult task. Recently, after approximately 12 hours of intense discussions in the European Parliament, representatives of the member states failed to reach a consensus. Here is a detailed look at the core issues sparking the most significant disputes across the EU.

Relaxing the Rules: The Controversial “Digital Omnibus”

The European Union is highly motivated to improve its technological competitiveness against global powerhouses in Asia and North America. To achieve this, lawmakers have been negotiating a legislative package known as the Digital Omnibus. This package, which includes proposed amendments to the AI Act, is designed to give European enterprises more time to prepare before they are strictly bound by the new compliance measures.

However, negotiations over the AI Act’s enforcement timeline have hit a major roadblock. According to Reuters, talks have stalled and are expected to resume in mid-May. The primary point of contention is exemptions: several member states are resisting proposals to temporarily excuse sectors that are already governed by strict existing regulations from immediately meeting the new AI Act requirements.

Specifically, the debate centers around the integration of AI into:

  • Medical devices and healthcare technology
  • Children’s toys
  • Smart cars and autonomous vehicles
  • Heavy machinery used in manufacturing and factories

A “Legal Chaos” for Compliant Companies?

The failure to reach a swift compromise has drawn sharp criticism. Kim van Sparrentak, a Dutch Member of the European Parliament, openly criticized the delay.

“In Big Tech, the champagne has just popped. European companies that actually care about safety and have done their homework are now facing legal chaos,” she stated.

It is important to remember that the AI Act strictly prohibits certain highly invasive practices, including real-time remote biometric identification in public spaces (with narrow exceptions), social scoring, and predictive policing based solely on profiling. Other uses, such as AI managing critical infrastructure, are not banned but classed as high-risk and subject to strict oversight. Without clear enforcement timelines, experts fear an increase in malicious AI applications, such as the spread of AI-generated fake news and disinformation.

The Regulatory Race: Technology Outpacing the Law

One of the most problematic aspects of the AI Act is Article 111, which outlines the transition periods for artificial intelligence systems that are already active in the market. Lawmakers are struggling to govern technologies that evolve much faster than the bureaucratic process.

The current timeline for transition periods is structured as follows:

  • Large-scale IT systems: Must be fully compliant by December 31, 2030, provided they are introduced to the market before August 2, 2027.
  • High-risk AI systems: Exempt from the new requirements if introduced before August 2, 2026, unless their design is significantly changed afterward.
  • Public high-risk systems (Exception): Government or public sector high-risk systems must be compliant before August 2, 2030.
  • General-purpose AI models (e.g., ChatGPT, Gemini): Must adapt to the new rules by August 2, 2027, if they were placed on the market before August 2, 2025.

The proposed Digital Omnibus aims to push back the start of high-risk obligations to December 2, 2027, and, for high-risk AI embedded in already-regulated products, to August 2, 2028. However, this extension must still survive a vote in the European Parliament.

As the legal framework struggles to catch up, the rapid advancement of these technologies continues to reshape economies. Business leaders and tech CEOs are already preparing for massive shifts; remarks from figures like Sam Altman on how AI is transforming the labor market underline why robust, yet flexible, legislation is critical for the future.

Frequently Asked Questions (FAQ)


Why is the European Union considering delaying some AI Act requirements?

The EU is considering delays, primarily through the proposed “Digital Omnibus,” to improve its global technological competitiveness. Lawmakers want to give European businesses more time to understand and integrate compliance measures without stifling their ability to compete with markets in the US and Asia.


What qualifies as a “high-risk” AI system under the EU AI Act?

High-risk AI systems are those that pose significant threats to fundamental rights, health, or safety. Examples include AI used in critical infrastructure, medical devices, law enforcement, education grading systems, and biometric identification. These systems face the strictest transparency, safety, and human oversight requirements.


How will the transition periods affect popular AI tools like ChatGPT or Gemini?

General-purpose AI models that are already on the market before August 2, 2025, are generally granted a transition period. Under current rules, they must fully adapt to the AI Act’s regulatory standards by August 2, 2027. This allows tech companies a grace period to ensure their foundation models meet EU copyright and transparency laws.

Source: Reuters. Opening photo: Gemini.
