When Code Takes a Seat at the Cabinet Table: AI, Governance, and the Future of Public Service

In a world-first move, Albania has introduced “Diella,” an AI-generated virtual assistant, as a government figure responsible for public procurement oversight. Prime Minister Edi Rama presented this innovation as a symbolic yet strategic step toward eliminating corruption, favoritism, and hidden interests from one of the most sensitive areas of governance: the allocation of public contracts.

The idea is bold. For decades, procurement processes across many countries have been plagued by opaque decision-making, insider networks, and bribery. Automating parts of this process through an AI tool offers a promise: a system that makes decisions based on coded rules instead of personal relationships.

Why This Matters

Public procurement represents a significant portion of national spending — in some countries, up to 20–30% of government budgets. Every instance of bias or corruption in this system can cost citizens millions. By introducing an AI layer, Albania is signaling a shift toward algorithmic governance, where technology plays an active role in shaping state decisions.

Opportunities of AI-Governed Procurement

If implemented properly, AI oversight could introduce a new standard for fairness:

  • Zero tolerance for bribery — algorithms cannot be bribed or threatened.
  • Consistent criteria application — every bidder is evaluated under the same standards.
  • Digital audit trails — each system decision can be logged and reviewed (see the sketch after this list).
  • Improved efficiency — AI can process data and detect anomalies far faster than human teams.
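
To make the consistency and audit-trail points concrete, here is a minimal sketch (in Python) of one way a rule-based evaluator could work: every bid is scored with the same published weights, and each decision is appended to a hash-chained log that auditors can replay. The criteria names, weights, and record fields are illustrative assumptions, not Diella's actual rules.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical, simplified criteria and weights; a real system would load these
# from published regulations rather than hard-coding them.
WEIGHTS = {"price_score": 0.5, "delivery_score": 0.3, "quality_score": 0.2}

def evaluate_bid(bid: dict, audit_log: list) -> float:
    """Score a bid with identical weighted criteria for every bidder and
    append a tamper-evident record of the decision to the audit log."""
    score = sum(WEIGHTS[k] * bid[k] for k in WEIGHTS)
    record = {
        "bidder_id": bid["bidder_id"],
        "inputs": {k: bid[k] for k in WEIGHTS},
        "weights": WEIGHTS,
        "score": round(score, 4),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Chain each entry to the previous one so later tampering is detectable.
        "prev_hash": audit_log[-1]["hash"] if audit_log else None,
    }
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    audit_log.append(record)
    return score

# Every bidder is scored by the same rule, and every decision leaves a log entry.
log: list = []
bids = [
    {"bidder_id": "A", "price_score": 0.9, "delivery_score": 0.7, "quality_score": 0.8},
    {"bidder_id": "B", "price_score": 0.6, "delivery_score": 0.9, "quality_score": 0.9},
]
ranking = sorted(bids, key=lambda b: evaluate_bid(b, log), reverse=True)
print([b["bidder_id"] for b in ranking], "-", len(log), "audit entries")
```

Publishing the weights and the log format is what turns "the algorithm decided" into something watchdogs can actually verify.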

This aligns with global anti-corruption frameworks, such as the OECD recommendations for transparency and traceability in procurement systems. Countries like Estonia, South Korea, and the UAE are also experimenting with automated public service workflows.

But Technology Alone Is Not a Silver Bullet

Despite the optimism, experts urge caution. AI does not eliminate bias — it shifts it. The risks include:

  • Hidden influences baked into training data or rules (illustrated in the toy example after this list).
  • Lack of accountability — who is responsible when the AI makes a harmful or incorrect decision?
  • Algorithmic opacity — complex systems are difficult for citizens to challenge or understand.
  • Manipulation risks — if insiders learn how the system ranks bids, they may tailor fraudulent strategies around it.
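
The first and last of these risks are easy to see in a toy example. Suppose a scoring formula like the one sketched above also rewards "years as a government supplier" as a quality proxy, and its exact form is known to insiders. The formula and numbers below are invented purely for illustration.

```python
# Hypothetical rule: cheaper bids earn more points, plus a "track record" bonus
# that quietly favors incumbents over equally capable newcomers.
def score(price: float, years_as_gov_supplier: int) -> float:
    price_points = 100 * (1_000_000 / price)             # cheaper bid -> more points
    track_record_bonus = min(years_as_gov_supplier, 10) * 3
    return price_points + track_record_bonus

newcomer = score(price=950_000, years_as_gov_supplier=0)      # better offer, no bonus
incumbent = score(price=1_000_000, years_as_gov_supplier=10)  # pricier offer, max bonus

print(f"newcomer:  {newcomer:.1f}")   # ~105.3
print(f"incumbent: {incumbent:.1f}")  # 130.0 -> wins despite costing 50,000 more

# Because the formula is fixed and knowable, an insider can also work out the
# highest price that still outranks any expected rival instead of competing honestly.
```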

History offers a reminder: automated welfare systems in Australia (Robodebt) and the Netherlands (the childcare benefits scandal) led to major scandals and citizen harm due to flawed algorithmic judgments. These serve as warnings that automation without oversight can produce systemic injustice — quickly and at scale.

Striking the Right Balance: AI + Human Governance

Experts in digital democracy emphasize a hybrid governance model. Here’s what that balance could look like:

  • AI: Apply procurement criteria consistently. Humans: Interpret edge cases and exercise judgment.
  • AI: Detect irregular patterns or conflicts. Humans: Investigate flagged cases and ensure due process.
  • AI: Produce transparent audit logs. Humans: Enable public scrutiny and respond to citizen concerns.
  • AI: Accelerate document analysis and comparisons. Humans: Set ethical guidelines and intervene when needed.
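
As a rough illustration of that division of labor, the sketch below has the automated layer apply a documented check and flag suspicious bids, while the contract award is blocked until a named human reviewer records a decision and a rationale. Function names, the anomaly threshold, and the data layout are hypothetical assumptions, not any real system's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Flag:
    bid_id: str
    reason: str                      # produced automatically
    reviewer: Optional[str] = None   # filled in by a human
    decision: Optional[str] = None   # e.g. "uphold" or "override"
    rationale: Optional[str] = None  # recorded for public scrutiny

def detect_anomalies(bids: list[dict]) -> list[Flag]:
    """Automated layer: apply a simple, documented check and flag outliers."""
    prices = [b["price"] for b in bids]
    avg = sum(prices) / len(prices)
    flags = []
    for b in bids:
        if b["price"] < 0.6 * avg:   # hypothetical threshold for suspiciously low bids
            flags.append(Flag(b["id"], f"price {b['price']} is far below average {avg:.0f}"))
    return flags

def award(bid_id: str, flags: list[Flag]) -> str:
    """Human-in-the-loop gate: no award while a flag lacks a reviewed decision."""
    open_flags = [f for f in flags if f.bid_id == bid_id and f.decision is None]
    if open_flags:
        raise RuntimeError(f"{len(open_flags)} flag(s) on {bid_id} await human review")
    return f"contract awarded to {bid_id}"

bids = [{"id": "A", "price": 400_000}, {"id": "B", "price": 980_000}, {"id": "C", "price": 1_020_000}]
flags = detect_anomalies(bids)
flags[0].reviewer, flags[0].decision, flags[0].rationale = (
    "procurement officer", "override", "verified supplier cost breakdown")
print(award("A", flags))
```

The key property is that the human decision is recorded alongside the machine flag, so both layers remain answerable.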

Principles for Responsible AI in Public Service

To make innovations like Diella credible and safe, policymakers should ensure:

  • External audits by independent bodies.
  • Open algorithms and logging systems accessible to watchdogs and civil society.
  • Clear escalation channels, where humans can override or question AI decisions.
  • Legal frameworks assigning responsibility when things go wrong.

A Glimpse Into the Future

Albania’s Diella marks a symbolic turning point in digital governance. Whether it becomes a model for transparency or a cautionary tale depends not on the sophistication of the AI — but on the quality of the rules, safeguards, and democratic control surrounding it.

As more governments integrate AI into critical functions, one guiding principle should remain clear: technology can increase fairness, but only when humans remain accountable stewards of the public good.
