Singapore's New AI Safety Initiatives: A Global Commitment to Responsible AI
Thursday, Feb 13, 2025 3:41 am ET

Singapore, a global leader in digital innovation, has announced a series of new AI governance initiatives aimed at enhancing the safety and reliability of AI products and services. These initiatives, unveiled by Minister for Digital Development and Information Josephine Teo at the AI Action Summit (AIAS) in Paris, reflect Singapore's commitment to balancing AI innovation with necessary safeguards.
The key initiatives include the Global AI Assurance Pilot, the Joint Testing Report with Japan, and the Singapore AI Safety Red Teaming Challenge Evaluation Report. These measures address the transboundary nature of AI products and services, ensuring that AI systems are developed and deployed responsibly, both in Singapore and globally.
1. Global AI Assurance Pilot: This pilot, launched by the AI Verify Foundation and the Infocomm Media Development Authority (IMDA), brings together AI assurance vendors and companies using generative AI (GenAI) solutions. The primary goal is to establish global best practices for technical testing of GenAI applications. By convening leading AI assurance and testing vendors with firms deploying real-life GenAI applications, the Pilot will shape future AI assurance standards and future assurance services. This will grow the local and international third-party AI assurance markets and provide practical input to AI governance frameworks.
2. Joint Testing Report with Japan: In collaboration with Japan under the AI Safety Institute (AISI) Network, Singapore has released a Joint Testing Report focusing on making Large Language Models (LLMs) safer in different linguistic environments. The report assesses guardrails for LLM safety across ten languages and five harm categories, addressing gaps in non-English language models. This joint testing exercise expands on global efforts to make models safer in different linguistic environments, given the current English-centric training and testing that potentially leaves gaps in non-English safeguards.
3. Singapore AI Safety Red Teaming Challenge Evaluation Report: This report evaluates LLMs for cultural bias in non-English languages and provides a consistent methodology for testing across diverse languages and cultures. It aims to address regional AI safety concerns by understanding how LLMs perform in different cultural and linguistic contexts. The report is based on findings from the AI Safety Red Teaming Challenge, organized by the IMDA and Humane Intelligence, which involved over 50 participants from nine countries across Asia Pacific. The Challenge aimed to advance the sciences in AI testing, a nascent space globally, and the data collected will be used to develop benchmarks and automate testing for regional safety concerns.
Minister Teo's participation in the AI Action Summit and her endorsement of the Leaders' Statement on Inclusive and Sustainable AI for People and the Planet reflect Singapore's approach to balancing AI innovation with necessary safeguards. By working closely with international partners, Singapore ensures that AI development remains inclusive, transparent, and accountable. This commitment to international collaboration sets a precedent for other countries to follow, encouraging greater cooperation in AI governance and innovation.
In conclusion, Singapore's new AI safety initiatives address the transboundary nature of AI products and services, enhancing the safety and reliability of AI systems worldwide. These initiatives align with Singapore's commitment to international collaboration, fostering a more inclusive and sustainable global AI landscape. By promoting best practices for AI testing and evaluation, Singapore contributes to the development of a safer and more accountable global AI ecosystem.