13.02.2025

Note: This is a translated text from Lianhe Zaobao's article.

Read the original article (in Mandarin) here: https://www.zaobao.com.sg/news/singapore/story20250211-5863058

Singapore has announced three AI safety governance initiatives aimed at addressing potential risks associated with AI usage across diverse languages and cultural contexts, offering practical insights for the safe global development of new technologies.

Mrs Josephine Teo, Minister for Digital Development and Information, announced the initiatives on Monday (February 10) at a session on government-business collaboration during the AI Action Summit in Paris.

These initiatives include: the Global AI Assurance Pilot launched by the AI Verify Foundation and the Infocomm Media Development Authority (IMDA); a joint report with Japan on the safety testing of large language models in ten different linguistic environments; and an AI safety "Red Teaming" challenge evaluation report, developed by IMDA and Humane Intelligence.

Minister Teo stated, "AI is reshaping our world, and Singapore is seizing this opportunity to contribute to AI safety and governance. We are committed to building strong global partnerships across industry, academia, and society, ensuring that AI benefits Singapore and the world as a public good."

"Red Teaming" refers to the practice of simulating malicious attacks to identify potential vulnerabilities and weaknesses in AI systems. This exercise, conducted between November and December last year, involved approximately 350 experts from nine countries—Singapore, China, Malaysia, Indonesia, Japan, South Korea, India, Thailand, and Vietnam.

Minister Teo described it as the world’s first cross-cultural, multilingual exercise of its kind.

The exercise identified and summarized instances of misrepresentation by AI in different linguistic contexts, with gender bias being the most prominent, followed by biases related to race, religion, geography, identity verification, and socio-economic factors.

The Global AI Assurance Pilot, one of the three announced initiatives, will bring together AI security testing providers and generative AI companies from around the world to help develop AI safety standards and governance frameworks.

Minister Teo added, "Through the pilot project, we are taking the first step toward establishing industry standards for generative AI applications in sectors like healthcare and finance."

Professor Simon Chesterman, Vice Provost (Educational Innovation) at the National University of Singapore and Senior Director of AI Governance at AI Singapore, has been researching AI governance and data security for over a decade. He told Lianhe Zaobao that discussions on AI governance often focus solely on the EU, the US, or China.

Singapore, he said, is carving out a unique path by broadening the scope of these discussions and emphasizing practical implementation, and has rapidly become an important global player in AI governance.

He explained, "You can think of LLMs as being like a very smart intern with a drinking problem. They can do excellent work, but you have to check it in case they’ve been on a bender. Singapore’s recent initiatives are all geared towards increasing the checks on these systems and increasing the thing we need most: trust."

Resaro, an AI assurance testing company with offices in Germany and Singapore, will participate in the Global AI Assurance Pilot. April Chin, CEO of Resaro Singapore, noted that the pilot allows the company to extend its AI assurance solutions to a broader range of markets and industries to meet the growing demand for AI assurance.

She said, "By participating in the pilot, we aim to accelerate AI adoption by providing these organizations with critical technical evidence on the performance, safety, and security of their AI applications, ensuring they can confidently navigate the AI landscape."