AI REGULATIONS IN SINGAPORE

Singapore has been proactive in fostering AI development through a variety of initiatives across different sectors. AI Singapore's (AISG) AI Apprenticeship Programme (AIAP), started in 2017, focuses on deep-skilling local AI talent over nine months. This fully subsidized program aims to build human capacity and prepare for labor market transitions, directly benefiting the labor force. The Road Traffic (Autonomous Motor Vehicles) Rules 2017, issued by the Ministry of Transport (MOT) and the Land Transport Authority (LTA), regulate autonomous vehicles, ensuring safety and supporting AI R&D in transport.

AISG's 100 Experiments (100E) program, launched in 2018, is a flagship initiative that addresses AI challenges faced by industries while training engineers to build AI capabilities. It aligns with the OECD principle of investing in AI R&D, covering policy areas such as education, innovation, investment, and science and technology, with firms of any size and the labor force as direct beneficiaries.

The Model AI Governance Framework, first released in 2019 and updated in 2020 by the Infocomm Media Development Authority (IMDA), provides guidance for ethical AI deployment, promoting responsible AI adoption and building consumer trust. The Implementation and Self-Assessment Guide for Organisations (ISAGO), released in 2020 as a companion to the framework, helps firms align their AI practices with its governance standards.

In 2018, the Advisory Council on the Ethical Use of AI and Data was established by IMDA to guide the government on ethical AI deployment. The council works on creating governance frameworks and engaging various stakeholders, addressing the OECD principle of providing an enabling policy environment for AI and benefiting firms, the national government, and the labor force.

The Accelerated Initiative for Artificial Intelligence (AI2), led by the Intellectual Property Office of Singapore (IPOS) from 2019 to 2021, aimed to expedite the patent process for AI innovations, supporting Singapore’s digital economy and emphasizing the importance of protecting AI technologies. This initiative supports innovative enterprises and addresses OECD principles related to AI R&D and fostering a digital ecosystem.

Since 2020, Singapore has engaged in Bilateral AI Collaborations, supported by the Ministry of Communications and Information (MCI) and the Ministry of Foreign Affairs (MFA), to enhance AI development and deployment through international cooperation.

The Compendium of AI Use Cases, developed in 2020 by IMDA and the Personal Data Protection Commission (PDPC), showcases AI governance practices across sectors, addressing multiple OECD principles and benefiting firms and industry associations.

The AI Ethics and Governance Body of Knowledge (BoK), developed in 2020 by the Singapore Computer Society and IMDA, provides a practical reference on ethical AI development. It addresses human-centered values, fairness, transparency, and accountability, targeting firms, SMEs, the national government, and civil society.

The AI Governance Testing Framework Minimum Viable Product (MVP), running from 2021 to 2024, involves IMDA and the PDPC working with partners to enhance AI system transparency and trust. It addresses numerous OECD principles, including robustness, security, and accountability, benefiting firms and the labor force.

In healthcare, the AI in Healthcare Guidelines (AIHGle), co-developed in 2021 by the Ministry of Health (MOH), the Health Sciences Authority (HSA), and Integrated Health Information Systems (IHiS), aim to improve trust and safety in AI use, addressing inclusivity, human-centered values, and international cooperation and benefiting healthcare firms and the labor force.

AI Verify, initiated in 2022 by MCI, is a toolkit that lets companies self-test their AI systems, focusing on transparency and accountability and supporting innovation in AI governance.

The Generative AI Evaluation Sandbox, launched in 2023 by IMDA, facilitates the evaluation of generative AI products, promoting trusted AI development and benefiting a range of stakeholders.

A 2023 consultation by IMDA and the PDPC on the Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems seeks public views on data protection in AI, enhancing trust in AI deployment. The Discussion Paper on Generative AI (2023) by IMDA and Aicadium aims to foster a trusted ecosystem for generative AI, inviting global collaboration on responsible AI use. Also in 2023, MCI issued an internal circular on the use of Large Language Models (LLMs) in the public sector, driving AI adoption within government agencies.

Starting in 2024, the ASEAN Guide on AI Governance and Ethics, supported by MCI, promotes the regional alignment of AI frameworks, benefiting national governments and international entities. In the same year, IMDA expanded the Model AI Governance Framework to cover generative AI, addressing new dimensions such as accountability, security, and the public good to ensure comprehensive governance.

REFERENCES

YEAR  POLICY
2017  AI APPRENTICESHIP PROGRAMME
2017  ROAD TRAFFIC (AUTONOMOUS MOTOR VEHICLES) RULES 2017
2018  PRINCIPLES TO PROMOTE FAIRNESS, ETHICS, ACCOUNTABILITY AND TRANSPARENCY IN THE USE OF ARTIFICIAL INTELLIGENCE AND DATA ANALYTICS IN SINGAPORE’S FINANCIAL SECTOR
2018  100 EXPERIMENTS
2018  ADVISORY COUNCIL ON THE ETHICAL USE OF AI AND DATA
2019  MODEL AI GOVERNANCE FRAMEWORK
2020  AI ETHICS AND GOVERNANCE BODY OF KNOWLEDGE
2020  CHARTERED AI ENGINEER
2020  COMPENDIUM OF AI USE CASES
2020  GUIDE TO JOB REDESIGN IN THE AGE OF AI
2020  IMPLEMENTATION AND SELF-ASSESSMENT GUIDE FOR ORGANISATIONS
2021  AI GOVERNANCE TESTING FRAMEWORK MINIMUM VIABLE PRODUCT (MVP)
2021  AI IN HEALTHCARE GUIDELINES
2021  COMPUTATIONAL DATA ANALYSIS EXCEPTION OF THE COPYRIGHT ACT 2021
2022  AI VERIFY
2023  CIRCULAR ON THE USE OF LARGE LANGUAGE MODELS IN THE PUBLIC SECTOR
2023  CONSULTATION ON ADVISORY GUIDELINES ON USE OF PERSONAL DATA IN AI RECOMMENDATION AND DECISION SYSTEMS
2023  DISCUSSION PAPER ON GENERATIVE AI – IMPLICATIONS FOR TRUST AND GOVERNANCE
2023  GENERATIVE AI EVALUATION SANDBOX
2024  ASEAN GUIDE ON AI GOVERNANCE AND ETHICS
2024  MODEL AI GOVERNANCE FRAMEWORK FOR GENERATIVE AI