30.05.2025
Welcome address by Chia Song Hwee, Deputy CEO of Temasek, at the inaugural Temasek x Resaro AI Assurance Forum, held on 26 May 2025 in Singapore.
Good morning, esteemed guests, and members of the AI Community. It is my honour to welcome you to the AI Assurance Forum, organised by Temasek and Resaro.
We are standing at a defining crossroads in the evolution of AI.
AI has evolved beyond a technological breakthrough — it is now reshaping businesses, economies, societies, daily lives, and even geopolitics and national security.
Today, I want to highlight why aligning innovation with smart constraints is essential for building lasting trust in AI.
AI innovation is advancing at a rapid pace. From generative models to autonomous systems, we’re pushing into very disruptive and powerful new territory. Without thoughtfulness in innovation, rapid progress can overtake our ability to manage risk, protect people, and uphold public trust.
For innovation to succeed, we need the smart constraints to guide it toward safety, responsibility, and long-term value.
Smart constraints are like the safety and protective gear in an F1 race car – essential for navigating speed with safety, not to slow us down, but to keep us on track.
Essentially, they encourage foresight, as developers anticipate risks before they cause harm, and ensure quality by identifying design flaws early and building assurance into the system.
Smart constraints should demand rigour from larger players yet provide startups the flexibility to scale responsibly. They also promote transparency, helping users, regulators, and partners understand how systems work.
This balance of innovation and accountability cannot be achieved by any one group alone.
It requires all stakeholders to work together:
Enterprises build and deploy AI systems.
Investors fund innovation and influence priorities.
Regulators establish guardrails that protect society.
Standard setters create tools for consistent measurement and governance.
Civil society advocates for fairness, inclusion, and rights.
Academia pushes the boundaries of knowledge through fundamental research.
Each stakeholder plays a unique role. And together, we have the power to shape AI’s trajectory for generations. We must grasp the challenges of Responsible AI.
Poor enterprise-level AI implementation can significantly harm companies' performance, valuation, and reputation.
Let’s take a lesson from outside the AI world — from hardware.
When Apple introduced its butterfly keyboard in 2015, it aimed to innovate: a thinner, sleeker design. But the product was not adequately tested for real-world conditions. It failed under basic stresses like dust and wear.
The result? Widespread failures, customer frustration, lawsuits, costly redesign and reputational impact.
The message is clear: innovation without upstream testing leads to downstream failures. With AI, the stakes are even higher. These systems are dynamic, fed by vast amounts of data, and continuously learning and updating.
These risks aren’t just bugs — they’re also behavioural failures: bias, drift, hallucination, manipulation, or misuse. The implications of these failures can be exponential.
You can’t patch trust after a breach. Reputation is everything. Without trust, you lose your licence to operate — and your ability to scale.
Investors can play a powerful role in shaping Responsible development.
When they ask: “What are your model risk controls?”, “How are you ensuring fairness and explainability?” or “Are you audit-ready for upcoming regulations?”— they signal that responsibility is a condition for investment, not a trade-off.
And this matters, because where capital flows, innovation follows. In 2019, when we began looking closely at AI, we recognised the potential demand for AI Assurance in the market.
We continually assessed the market to anticipate the right time to offer solutions that meet this need.
In sectors like healthcare, finance, and public services, high assurance and transparency are beacons of opportunity, not obstacles. Meeting rigorous safety, security, and compliance standards unlocks high-value contracts and partnerships.
This was the foundation for the establishment of Resaro, an AI assurance firm that offers independent, third-party testing of mission-critical AI systems. Resaro exemplifies our dedication to creating AI solutions that not only drive business success but also contribute positively to society.
By focusing on Responsible AI practices, we ensure AI benefits are shared fairly and sustainably. I would like to stress again how each stakeholder, represented here, can reinforce both the momentum and direction of Responsible AI and Innovation.
Enterprises lead in technical advancement — and embed transparency and assurance to accelerate adoption.
Investors drive innovation through capital and anchor responsibility through expectations.
Regulators and standard setters design rules and frameworks that balance protection with progress.
Civil society keeps the ecosystem honest and inclusive.
Academia builds knowledge and the next generation of Responsible AI talent.
Let’s be clear: trustworthy AI won’t happen by default. It will happen because we demanded it — and because we built it together. Let’s align innovation and smart constraints — to shape AI that uplifts all of us.
With AI adoption accelerating across industries, the complexity of the systems being deployed grows in tandem.
This shift has heightened the need for robust development and testing practices. While efforts to establish common AI standards are converging, regulatory perspectives remain varied — driving considerations around interoperability, trust, and accountability.
The Temasek x Resaro AI Assurance Forum provides an exclusive space for thought leaders to navigate this fast-evolving landscape. Gathering 35 experts from industry, government, academia, and civil society, the Forum will focus on the theme: "Assurance for Rapid AI Adoption."
Discussions are conducted under the Chatham House Rule, encouraging candid, open dialogue in a trusted setting.
This inaugural Forum is part of the AI assurance and testing series under ATxSummit, Asia’s flagship tech event. Key insights from the Forum will be synthesised and published in a post-event report, offering actionable takeaways to help enterprises move AI assurance from principle to practice.
Resaro is an independent, third-party AI assurance provider that assures the performance, safety, and security of mission-critical AI. We are a member of KI Park in Germany and a premier member of Singapore IMDA’s AI Verify Foundation, and are committed to enhancing open-source tools that are globally interoperable as enablers of trust in AI.
Resaro was founded by global investment company Temasek on the belief that as AI becomes increasingly embedded in industries and everyday lives, assurance of AI systems will go through a fundamental evolution to ensure innovation and accountability go hand-in-hand.