Asia's AI Rulebook Forked Three Ways: Korea's Hard Threshold, Singapore's Agentic Pivot, Japan's Innovation Bet
On the same day, Korea's AI Basic Act took effect and Singapore launched the world's first agentic AI governance framework. Japan stayed innovation-first. Asia's AI rulebook now has three operating systems — and none look like the EU AI Act.
January 22, 2026 was a strange day for anyone tracking AI regulation. In Seoul, South Korea’s amended Artificial Intelligence Basic Act took effect — the country’s comprehensive horizontal AI law, with mandatory watermarking, a 10²⁶ FLOPs compute threshold, and administrative penalties on the books. In Davos, on the very same day, Singapore’s Minister for Digital Development Josephine Teo announced the launch of the world’s first governance framework specifically built for agentic AI. Tokyo had already gone its own way the previous summer, choosing an innovation-first statute with no penalties.
By the time the news cycle settled, Asia had what the EU has had since 2024 — working AI governance — but in three radically different shapes. None of the three looks like the EU AI Act. For multinational compliance teams, the assumption that Brussels would set the global default just got harder to defend.
This briefing maps the three Asian frameworks against each other and against the EU, then asks the question that matters most for boards and C-suites: which way should a company building or deploying AI across Asia in 2026 actually plan?
Korea: The Hard-Threshold Rulebook
South Korea was the first country in Asia — and one of the first in the world — to enact a comprehensive horizontal AI law. The original Basic Act passed in December 2024; the version that took effect on January 22, 2026 was revised through 2025 to add teeth.
Three features make Korea’s framework distinct.
A compute-defined “high-performance AI” tier. AI systems trained with cumulative compute of at least 10²⁶ floating-point operations are designated “high-performance AI” and trigger safety obligations: risk assessment, safety measures, and designation of a local representative for foreign providers. The EU AI Act draws a similar compute line for systemic-risk GPAI models (10²⁵ FLOPs under Article 51), but Korea set its bar an order of magnitude higher and wrote the obligations into hard statute rather than routing them through a Code of Practice.
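To make the threshold concrete, here is a back-of-the-envelope sketch using the common ~6·N·D approximation for dense-transformer training compute. This is an industry rule of thumb, not the statute's measurement method, and the parameter and token figures below are hypothetical:

```python
# Rough check of whether a training run crosses Korea's 1e26 FLOPs line,
# using the common ~6 * N * D estimate for dense transformer training.
# Illustrative only: the Act counts cumulative training compute, and the
# regulator's actual accounting method may differ from this estimate.

KOREA_THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate dense-transformer training compute: ~6 * N * D."""
    return 6.0 * params * tokens

def is_high_performance(params: float, tokens: float) -> bool:
    """True if the estimated compute meets the statutory threshold."""
    return training_flops(params, tokens) >= KOREA_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model on 15T tokens stays under the line
# (~6.3e24 FLOPs); a 1T-parameter model on 20T tokens crosses it (~1.2e26).
print(is_high_performance(70e9, 15e12))
print(is_high_performance(1e12, 20e12))
```

The estimate matters only for planning; a provider anywhere near the line would need MSIT's actual accounting method before relying on a number like this.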
Mandatory labeling of AI-generated content. Korea has been described as having the world’s first law requiring visible labels on AI-generated media. Under the Act, generative AI outputs — text, images, sound, video — must carry invisible digital watermarks, and realistic deepfakes that could be mistaken for real media must carry visible or audible labels. Non-deceptive works like webtoons can use invisible watermarks alone. Operators of generative AI services must additionally inform users that content was produced by AI.
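The labeling rules above amount to a small decision table. The sketch below encodes one plausible reading of them; the field names and the rule mapping are our illustration, not the statutory text:

```python
# Hypothetical encoding of Korea's labeling rules as a decision function.
# One reading: every generative output needs an invisible watermark, and
# realistic media that could be mistaken for real additionally needs a
# visible or audible label. Names and logic here are illustrative.

from dataclasses import dataclass

@dataclass
class GenOutput:
    modality: str    # "text" | "image" | "sound" | "video"
    realistic: bool  # could be mistaken for real media

def required_labels(out: GenOutput) -> set[str]:
    labels = {"invisible_watermark"}        # baseline for all generative output
    if out.realistic:                       # realistic deepfakes need more
        labels.add("visible_or_audible_label")
    return labels

print(sorted(required_labels(GenOutput("video", realistic=True))))
print(sorted(required_labels(GenOutput("image", realistic=False))))  # e.g. a webtoon panel
```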
Real, if modest, penalties. Administrative fines of up to ₩30 million (about US$21,000) can be imposed for failure to label, failure to appoint a domestic representative, or refusal of government inspections. Korea’s Ministry of Science and ICT (MSIT) has indicated a one-year grace period focused on guidance before fines apply, giving covered businesses until early 2027 to install compliance infrastructure.
The fines are not large by EU AI Act standards (where penalties under Article 99 can reach €35M or 7% of global turnover for the most serious violations). But the labeling regime is the strongest of any AI law in force anywhere — and is already being studied by regulators in Japan, the UK, and the US as a template.
Singapore: Governing What Acts, Not Just What Predicts
Singapore did something different. Rather than passing a statute, the Infocomm Media Development Authority (IMDA) and AI Verify Foundation kept building out the country’s framework approach — and on January 22, 2026 published the Model AI Governance Framework for Agentic AI, described as the world’s first governance framework specifically designed for AI agents capable of autonomous planning, reasoning, and action.
This matters because agentic AI is moving faster than most regulatory regimes can describe. The 2024 EU AI Act, the 2025 Japanese AI Promotion Act, and the 2024 Korean AI Basic Act were all drafted around a mental model of AI as a prediction system — a classifier, a generator, a recommender. None of them squarely addresses what happens when an AI system books your flight, drafts your contract, and approves the wire transfer without a human in the loop on each step.
Singapore’s new framework treats agentic AI as a distinct governance problem and structures the response around four core dimensions:
- Bound the risks upfront — set guardrails on what the agent can do and where it operates before deployment, not after.
- Make humans meaningfully accountable — clear ownership of agent actions inside the organization, not diffused responsibility.
- Implement technical controls and processes — kill-switches, audit logs, capability constraints, sandboxing.
- Enable end-user responsibility — explanations, transparency, recourse for the humans on the receiving end of agent decisions.
The whole document is described by IMDA as a “living document.” That is consistent with Singapore’s broader philosophy, which Duane Morris has characterized as a pro-innovation, framework-driven model — guidance rather than statute, voluntary alignment rather than enforcement, deep interoperability with international standards (OECD AI Principles, the GPAI Code of Practice).
For a company experimenting with agent deployments in production, Singapore is now the most operationally useful jurisdiction to anchor in. The framework’s four dimensions translate cleanly into an internal governance checklist that survives audit scrutiny in Tokyo, Seoul, Brussels, and Washington — even if those jurisdictions’ actual statutes differ.
Japan: The Innovation-First Bet
Japan’s AI Promotion Act, passed in May 2025 and detailed in our earlier briefing on Japan’s sovereign AI policy landscape, takes the third path. There are no prohibited applications, no mandatory conformity assessments, no monetary penalties on business operators. The statute instead sets national R&D and adoption goals and authorizes the AI Strategy Headquarters — chaired by the Prime Minister — to coordinate ministry action.
The enforcement mechanism, as the International Bar Association notes, is reputational: the government can investigate harmful AI use, advise companies on remediation, and publicly name non-compliant operators. The METI/MIC AI Governance Guidelines for Business (v1.1, March 2025) are the operational layer underneath, and they expect companies to either comply or explain deviations in good faith.
This is closer to Singapore’s posture than to Korea’s, with two key differences. First, Japan anchored its framework in a parliamentary statute, giving it a national-policy weight that Korea’s AI law shares but Singapore’s guidance deliberately forgoes. Second, Japan paired the framework with a massive industrial-policy commitment: ¥1.23 trillion for AI and semiconductors in FY2026 alone, plus a ¥1 trillion five-year commitment announced in December 2025.
Tokyo’s bet is that you cannot regulate your way into AI leadership, and that getting the industrial-policy stack right will matter more in 2030 than getting the penalty schedule right in 2026.
The EU Contrast — and Why “Brussels Effect” Has Limits Here
For most of the past decade, the prevailing assumption among compliance leaders was that the EU’s regulatory choices would propagate globally — the “Brussels Effect.” It happened with GDPR. It is happening, partially, with the CSRD. The implicit forecast was that the EU AI Act would set the global template, and Asian jurisdictions would converge over time.
That forecast is not aging well.
The EU AI Act’s GPAI obligations became effective August 2, 2025, with the AI Office gaining full enforcement powers from August 2, 2026. Pre-existing GPAI models must achieve compliance by August 2, 2027. The structure is risk-tiered — prohibited, high-risk, limited-risk, minimal-risk — with substantial penalties for non-compliance.
None of the three Asian frameworks adopted the EU’s risk-tier taxonomy. Korea uses a single compute threshold. Singapore avoids prescriptive categories entirely. Japan rejected categorical prohibitions outright. Each Asian government looked at the EU AI Act, judged its compliance overhead, and chose a different design.
The implication for global multinationals is that there is no single “AI compliance program” that satisfies every jurisdiction in 2026. The realistic minimum is a layered approach:
- EU baseline for risk-tier classification, technical documentation, and copyright training-data summaries.
- Korea additional for labeling/watermarking on any generative AI output destined for Korean users, plus a domestic representative if you cross the compute threshold.
- Singapore additional for any agentic deployment — increasingly relevant as enterprise AI moves from chat interfaces to autonomous workflows.
- Japan additional for comply-or-explain documentation aligned to METI’s guidelines, with a posture of demonstrable good faith rather than checklist compliance.
The good news is that the underlying technical capabilities — risk assessment, watermarking, audit logging, human-in-the-loop controls — are highly portable across these frameworks. The cost is in the documentation overlay, not the engineering.
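A minimal sketch of that layered logic, with hypothetical deployment attributes standing in for a real scoping questionnaire:

```python
# Illustrative scoping function for the layered compliance approach:
# given where and how a system is deployed, list which overlays apply.
# Attribute names and overlay descriptions are hypothetical.

def compliance_layers(serves_eu: bool, serves_korea: bool,
                      serves_japan: bool, generative: bool,
                      agentic: bool) -> list[str]:
    layers = []
    if serves_eu:
        layers.append("EU: risk-tier classification, technical documentation, "
                      "training-data copyright summaries")
    if serves_korea and generative:
        layers.append("Korea: watermarking/labeling of generative outputs")
    if agentic:
        layers.append("Singapore: agentic framework controls and audit trail")
    if serves_japan:
        layers.append("Japan: comply-or-explain documentation per METI guidelines")
    return layers

# A generative, agentic product serving all three Asian markets plus the EU
# picks up all four overlays:
for layer in compliance_layers(True, True, True, True, True):
    print(layer)
```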
Why This Sits Squarely on the T4IS2027 Agenda
The summit’s 2026 program already addressed adjacent ground: former Digital Minister Masaaki Taira argued that AI’s economic value will be unlocked through Japan’s regulated stablecoin rails; United Nations University Rector Tshilidzi Marwala keynoted on diplomacy, AI, and the UN, and an AI Governance panel followed his address on the main stage. The 2027 program will go deeper. Three threads we expect to surface:
The interoperability question — whether the international network of AI Safety Institutes launched at the AI Seoul Summit in May 2024 can translate parallel domestic rules into something a multinational can actually comply with once rather than four times.
The agentic-AI gap — whether the rest of the world follows Singapore’s framework lead, or whether Korea’s labeling/watermarking model gets extended to cover agent actions (not just generative outputs).
The Japan question — whether innovation-first regulation paired with industrial-policy heft can outrun penalty-first regulation paired with conformity assessments, or whether the absence of statutory teeth becomes a liability when something goes wrong in production.
Tokyo, Seoul, and Singapore have each chosen a hypothesis and started running the experiment. The next 18 months will tell us which hypothesis was right.
T4IS 2027 (May 18–19, Tokyo) convenes the people running these experiments. To explore whether your organization belongs in the room, request an invitation.