The AI Risk Pyramid: Europe’s Strategy for Control?

What if artificial intelligence were no longer just a tool, but a decision-maker? In Albania, an AI tool named Diella was appointed to a ministerial role a few weeks ago, tasked with making government services more efficient and transparent.

This avant-garde move raises a crucial question: as AI becomes part of our daily lives, who sets the rules? In Europe, it’s the EU AI Act, the world’s first comprehensive legal framework for artificial intelligence. It’s a landmark regulation designed to stimulate tech innovation while still protecting fundamental rights, but how do you regulate something that evolves almost daily?

The EU’s answer is a unique risk-based approach, a tiered system that classifies AI systems based on their potential to cause harm.

At the top are unacceptable risk AI systems, which are outright banned. These include AI used for “social scoring” by governments, like systems that grant or deny public services based on behavior. Real-time biometric identification in public spaces is also mostly prohibited, with only a few exceptions for serious crimes. 

Next are high-risk AI systems. These are allowed but come with many obligations. This category includes AI used in critical infrastructure, like managing electricity grids, or in essential public services, such as evaluating creditworthiness. AI in employment also falls here. These systems must meet requirements for data quality, accuracy, human oversight, transparency, and cybersecurity. Companies deploying high-risk AI will also need to conduct fundamental rights impact assessments to make sure they don’t disproportionately harm vulnerable groups.

The numbers show why this is critical. Over 70% of companies are projected to integrate AI into their HR processes by 2027, for tasks such as resume screening and performance evaluations. Furthermore, the adoption of AI in critical infrastructure is expected to surge by over 27% in the next five years.

Below that are limited risk systems, which mostly carry transparency obligations. For example, chatbots must disclose that users are interacting with AI, and deepfakes must be clearly labeled as AI-generated. Finally, minimal risk AI, like spam filters or video games, faces very few restrictions, encouraging innovation in areas with little potential for harm.

The Act is not without critics, however. Some argue that its complexity and compliance costs could inhibit European innovation, especially for small and medium-sized enterprises. Others worry that the definitions of “high-risk” might not keep up with evolving technology, leaving loopholes. The economic stakes are huge, with the global AI market projected to grow from $230 billion last year to over $1.7 trillion by 2032.

The EU AI Act is a global experiment. Whether this model will inspire the rest of the world to adopt similar legal frameworks for artificial intelligence remains to be seen.
