The EU AI Act simplification package has entered the final negotiation stage on the way to adoption. On 27 March, the European Parliament delivered its negotiating position to the Council and the Commission. Representatives from the Parliament and the Council begin technical meetings this week in hopes of finalising the AI Omnibus rules by mid-year, so they are ready to take effect on 2 August. Political will to pass the simplification package remains high, and adoption of the new rules is likely in the near future.
What Is the AI Omnibus?
The AI Omnibus is a digital simplification package designed to address the friction points created by the original AI Act. It aims to simplify and update the rules surrounding artificial intelligence, reduce the administrative burden on businesses, and make EU-based companies more competitive in the global market.
Simplification for Non-High-Risk AI
One of the most practical changes is the removal of the requirement for companies to register non-high-risk AI self-assessments in the EU database. Systems not categorised as high-risk will face less bureaucratic burden. The Act also shifts the obligation to develop AI literacy from companies to the European Commission and national governments, which will provide public education instead. This removes a compliance cost for startups and allows them to focus more on development.
Small Mid-Caps
The AI Omnibus introduces a new category called the small mid-caps: companies with fewer than 750 employees and less than 150 million EUR in revenue. These firms will now benefit from the same flexible compliance regimes and lower fines as SMEs. This prevents growing startups from being subject to more regulation the moment they scale past the 250-employee mark.
Safety and Transparency
The Parliament has proposed a ban on AI tools used to create non-consensual sexually explicit or intimate images of people. The timeline for AI watermarking has been adjusted as well. Providers of AI-generated media or text will have until 2 November to implement systems that clearly indicate the origin of content. These rules aim to build public trust in AI-generated media while giving developers a firm window to implement the necessary technical watermarking standards.
New Timelines
The compliance date for high-risk AI systems has been delayed from August 2026 to December 2027. This move responds to industry concerns that the necessary standards and guidelines aren’t ready yet.
In addition, AI systems covered by sector-specific legislation (such as aviation or medical devices) now have until August 2028 to comply. This is intended to prevent an abrupt transition and give businesses a clearer roadmap.
However, core transparency rules for General Purpose AI are still set to apply from August 2026.
The Next Steps
The proposal is now in the thick of trilogue negotiations. While agreement between the Council and the Parliament on delaying the high-risk deadlines is expected, a possible point of disagreement is the exact scope of the ban on AI-generated explicit images; some civil society groups are pushing for stronger oversight to address privacy concerns. That said, businesses should not pause their compliance efforts: if the trilogue stalls or collapses, the original August 2026 deadline remains the legal default.