Fortress or Sandbox? How the EU and Singapore Fare in the AI Race

by Roberta Vommaro



We’ve spent the last two years framing AI governance as a race between two models: the EU’s compliance-first “fortress” approach and Singapore’s experimentation-driven “sandbox” model.

But as we hit the mid-point of 2026, both models are running into a wall, albeit in different ways: by design (EU) or via implementation (Singapore).


The EU: The 20th-Century Paradox

On 2 August 2026, the majority of the EU AI Act’s obligations for high-risk systems become applicable. The fundamental bottleneck remains Article 14 (human oversight). It was designed in a paradigm where outputs were more reviewable and step-based than today’s agentic systems.

In today’s reality, how do you “meaningfully oversee” an agent that has just autonomously searched 500 databases, summarized 10,000 pages, and triggered downstream actions? Technical experts are already sounding the alarm on “behavioral drift”: the moment an agent’s unpredictable reasoning outpaces its rigid legal documentation.

This gap between real-time human control and distributed autonomous execution is what shifts things from “meaningful oversight” to “post-hoc auditing,” which is not what Article 14 was conceptually designed for. This is also what creates the legal paradox in which the human is not only inclined to agree with the output but also becomes liable for it. Not without reason, some point out that the EU is turning high-level leaders into “log auditors.”

To be fair, the EU AI Act incorporates risk-based elements and innovation sandboxes, aiming to protect fundamental rights while supporting trustworthy AI. Yet I argue that Article 14 doesn’t solve the core need for cognitive sovereignty and leadership, and risks reducing the human to a compliance checklist.


Singapore: The Implementation Ceiling

Then we have Singapore, home to the world’s first governance framework specifically dedicated to agentic systems. While the EU was drafting stop-button rules, Singapore’s Infocomm Media Development Authority (IMDA) spent late 2025 doing the opposite.

In January 2026, the IMDA released the Model AI Governance Framework (MGF) for Agentic AI. Singapore’s model doesn’t ask you to watch the agent; it asks you to bound the agent. Instead of manual human oversight, it focuses on “risk bounding by design”: limiting what tools an agent can touch and what permissions it has before it starts.
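To make the contrast with after-the-fact oversight concrete, the risk-bounding idea can be sketched in a few lines of Python. This is purely illustrative, not anything the MGF itself prescribes; every name below (`BoundedAgent`, `Tool`, the `customer_db` example) is hypothetical:

```python
# Illustrative sketch of "risk bounding by design": the agent's reach is
# constrained *before* it runs, rather than reviewed by a human afterwards.
# All names here are hypothetical, not taken from the MGF.

from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    permissions: set  # actions the tool supports, e.g. {"read", "write"}

@dataclass
class BoundedAgent:
    # The allowlist is fixed before the agent starts: anything not granted
    # here is impossible at runtime, with no human watcher required.
    allowed_tools: dict = field(default_factory=dict)  # name -> granted actions

    def grant(self, tool: Tool, permissions: set):
        # Only ever grant a subset of what the tool itself supports.
        self.allowed_tools[tool.name] = permissions & tool.permissions

    def invoke(self, tool_name: str, action: str) -> str:
        granted = self.allowed_tools.get(tool_name, set())
        if action not in granted:
            return f"DENIED: {tool_name}.{action} is outside the agent's bounds"
        return f"OK: {tool_name}.{action}"

# A database tool supports read and write, but the agent is bounded
# to read-only access before it ever starts acting.
db = Tool("customer_db", {"read", "write"})
agent = BoundedAgent()
agent.grant(db, {"read"})

print(agent.invoke("customer_db", "read"))   # allowed
print(agent.invoke("customer_db", "write"))  # denied by construction
```

The point of the sketch is that the "write" path is closed by the system's architecture, not by a human reviewing logs, which is precisely the shift from Article 14-style oversight to bounding.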

Singapore is successfully positioning itself as the world’s most pragmatic and technically literate regulator. The MGF explicitly addresses the impracticality of constant oversight and promotes bounding and accountability. Essentially, the regulator is saying “go.”

But look underneath the stack, and the story changes.

Recent industry research (an ST Telemedia study) suggests that despite the ambition and resources, only 3% of Singaporean organizations have achieved AI leadership. Why? Because while the government is open, internal corporate culture is too risk-averse.

Organizations have built fragmented assemblies of data and security tooling that are nearly impossible to govern or scale. In essence, they are trying to run 2026 intelligence on 1990s corporate rules. The vision is modern, but the specialized operational expertise required to scale past a pilot is missing.

This creates an "implementation ceiling" where companies mistake IT configuration for actual governance. By letting platform engineers build technical compliance systems rather than leadership-driven policy, organizations are left wide open.

Singapore is stalling because its legacy bureaucracy is moving at a different speed than its vision, and when the regulator finally walks in the room, a technical log won't be enough to answer a policy question.


The Core Issue: What Is Governance?

So, we are at a crossroads.

At its heart, governance exists to protect the integrity of human intent. But when we lose our ability to define the "why" behind a process, we lose our sovereignty.

We’re witnessing a profound translation error, a “disconnect” between the people who write the laws and the people who write the code.

In the EU, regulators are speaking a language of 20th-century legal constraints that engineers simply cannot translate into 21st-century systems. The result is a fortress built of laws that risk treating engineers as a threat rather than a partner.

In Singapore, the problem is reversed. The government has provided a modern, spaceship-ready manual, but the engineers tasked with implementation don't speak the language of policy. They treat governance as a technical configuration, while understandably missing the nuanced, multi-jurisdictional obligations that keep a Chief Compliance Officer up at night.

This translation gap becomes even more acute when organizations operate across jurisdictions. The EU AI Act’s extraterritorial reach means that a Singapore-based or Asia-focused deployment can still trigger Article 14 human oversight obligations (and hefty penalties) the moment outputs touch the European market.

Meanwhile, Singapore’s risk-bounding approach offers flexibility for scaling agentic systems, but it does not shield companies from the compliance fortress when serving multiple regions.

The result is a fragmented landscape where multinationals must often adopt the strictest standard as a baseline, while struggling to preserve the balance between innovation and cognitive sovereignty that true governance should enable.

Until regulators and technologists develop truly interoperable frameworks, the real race may not be between fortress and sandbox, but in building systems resilient enough to navigate both. And until we find a common language, we are stalling.

Some could argue this is potentially a good thing.

I don’t disagree.

Because ultimately, real governance is the architectural courage to start building systems that are as intelligent as the intent they were created to serve.

And that should take time.


References:


  1. World Economic Forum, "The new era of public sector service is about building trust through agentic AI" (Jan 20, 2026): https://www.weforum.org/stories/2026/01/the-new-era-of-public-sector-service-is-about-building-trust-through-agentic-ai/

  2. The Agentic State Vision Paper, by The Agentic State (Kubernesis OÜ): https://agenticstate.org/paper.html

  3. Model AI Governance Framework for Agentic AI (IMDA): https://www.imda.gov.sg/-/media/imda/files/about/emerging-tech-and-research/artificial-intelligence/mgf-for-agentic-ai.pdf

  4. Securing Agentic AI: A Discussion Paper (CSA / FAR.AI): https://www.csa.gov.sg/resources/publications/securing-agentic-ai-a-discussion-paper/

  5. Agentic AI Primer (GovTech Singapore): https://www.developer.tech.gov.sg/guidelines/standards-and-best-practices/agentic-ai-primer.html

  6. ST Telemedia Research: https://www.sttelemediagdc.com/newsroom/new-research-st-telemedia-global-data-centres-reveals-asias-ai-ambitions-hampered




Watch my TEDx Talk

 

Get the Book

 

Work with Me

Whether your organization needs support facing the challenge of reinvention, or you want to explore keynotes and leadership advisory, I’m happy to have that conversation.
