Singapore has launched a new governance framework to guide the safe and responsible deployment of agentic artificial intelligence, as governments and businesses grapple with the growing autonomy of AI systems.
Minister for Digital Development and Information Josephine Teo announced the Model AI Governance Framework (MGF) for Agentic AI at the World Economic Forum in Davos, according to a statement by the Infocomm Media Development Authority (IMDA).
Developed by IMDA, the framework builds on Singapore’s original Model AI Governance Framework, first introduced in 2019, and is designed to address the distinct risks posed by agentic AI systems that can reason, plan, and take actions on behalf of users.
Unlike traditional or generative AI, agentic AI systems can perform tasks such as updating databases, initiating transactions, or interacting with other software tools, raising concerns around data access, unintended actions, and human accountability, IMDA said.
The framework provides guidance to organisations deploying agentic AI, whether developed in-house or sourced from third parties, and emphasises that humans remain ultimately accountable for AI-driven decisions and outcomes.
It sets out four key dimensions of governance: assessing and limiting risks upfront by bounding agents’ autonomy and access to tools and data; ensuring meaningful human oversight through approval checkpoints; implementing technical controls across the AI lifecycle, including testing and restricted system access; and promoting end-user responsibility through transparency and training.
“As the first authoritative resource addressing the specific risks of agentic AI, the MGF fills a critical gap in policy guidance,” said April Chin, co-chief executive officer of AI assurance firm Resaro, in the statement.
Singapore said the initiative reflects its “practical and balanced” approach to AI governance, which seeks to put guardrails in place while allowing room for innovation.
The framework is the latest in a series of initiatives aimed at building a trusted AI ecosystem, alongside tools such as AI Verify and testing kits for large language model applications.
Singapore is also working with other countries through its AI Safety Institute and leading ASEAN efforts on regional AI governance.
The move positions Singapore among the first governments to address the governance challenges of agentic AI directly, as regulators worldwide race to keep pace with rapidly evolving AI capabilities.
By focusing on accountability and operational controls rather than prescriptive rules, the framework may serve as a reference model for other jurisdictions seeking to balance innovation with risk management.