The promise of AI in government is simple: a pure, un-bribable efficiency, a tool that can be deployed against human corruption. But if an algorithm is immune to money and influence peddling, is it also immune to the rule of law?
Take the case of Ukraine’s Diia.AI. Launched in September of this year, it is billed as the world’s first national AI assistant that delivers government services directly in a chat, rather than merely offering information. Citizens can request official documents, such as income certificates, through a simple text or voice command. The launch is part of Ukraine’s goal to become a leader in public sector AI adoption by 2030, transforming the state into an “agentic” system that anticipates user needs. But here’s the catch: this efficient, proactive system is an algorithm at its core. So the real question becomes: does the push for greater efficiency and an agentic state simply create a black box that can’t be held accountable? And who is responsible when the AI makes a mistake?
This question is well past the theoretical stage. AI systems used for essential public services are already causing legal and constitutional harm in multiple places around the world. In the United States, an automated system in Michigan wrongly flagged over 40,000 citizens for unemployment fraud, triggering a wave of fines and bankruptcies. In Idaho, a flawed algorithm cut Medicaid benefits from rightful beneficiaries.
These are not minor errors. They are violations of basic due process and civil rights. Citizens have a right to appeal and to confront their accuser, but code cannot easily be cross-examined.
Regulators are now trying to draw a line. As you may know from one of our earlier videos explaining the risk levels under the EU AI Act, any system used for access to essential public services, as well as for hiring or law enforcement, is classified as high-risk. That classification carries strict requirements: full transparency, quality data, and, perhaps most importantly, adequate human oversight. The AI may assist, but a person must remain the decision-maker.
AI offers governments a powerful tool to clean up the state. But the power to automate critical public decisions that affect citizens’ fundamental rights must be matched by legal accountability, or we risk repeating, at scale, the harms already seen in Michigan and Idaho.
