By April Chin, Co-CEO, Resaro
My first encounter with deepfakes was not through a viral video or headline. It was when I almost fell for a scam call. The voice on the other end sounded so authentic that I hesitated before hanging up. That moment made me realise how fragile trust can be, and how quickly our instinct to believe can be exploited.
Later, I began noticing synthetic media appearing in smaller, everyday ways. A fabricated video shared in a family group chat. A voice clone used in another scam. These seemingly trivial examples showed me that deepfakes are not just a technological challenge. They are a societal one. When we can no longer trust what we see and hear, we lose the ability to agree on what's real.
Why Deepfakes Matter
Singapore ranks among the countries most concerned about deepfakes. What worries me most is the speed at which misinformation can spread before truth catches up.
As synthetic media becomes more realistic, people are caught between believing too quickly and doubting everything. We need systems that can verify authenticity as quickly as misinformation moves.
At Resaro, we think about this as content integrity. Can people, institutions, and platforms trust the digital content they rely on? That question drives everything we build.
A Tool with Two Faces
Deepfakes also enable positive uses, from accessibility to retail. I am most inspired by applications that use generative AI to extend human capability rather than replace it. Using synthetic voice to give speech back to someone who has lost it, or creating multilingual avatars that bridge communication barriers, shows the positive impact that this technology can offer.
The EU AI Act defines deepfakes as "AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or events and would falsely appear to be authentic or truthful." The harms are already tangible. Since 2019, Singapore has lost more than $3.4 billion to scams, with $1.1 billion recorded in 2024 alone. In Hong Kong, a finance worker paid out $25 million after a sophisticated AI-generated video call impersonating a company’s CFO.
These examples all use the same underlying technology. The question is whether we have the right safeguards to ensure innovation does not come at the expense of content integrity.
Resaro's Approach
An ‘aha!’ moment for us came when we realised that the same technology used to generate harmful deepfake content can also be used to test deepfake detectors. In this ecosystem, detectors act as sentinels, working quietly in the background to spot synthetic content before it spreads.
Yet there is no one-size-fits-all detector. Each faces a moving target, because new generation models emerge almost weekly. Public datasets used for training detectors are also often used to train generators, making them less effective for testing. Releasing more datasets is risky too, because they can be repurposed for harm.
This led us to build the Resaro Deepfake Detector Evaluation Suite. It enables organisations to generate proprietary synthetic data to test their own detection systems safely. Our platform integrates more than 30 detection models across 10 types of deepfake generation, from voice and face to lip-sync and gesture, and is updated to reflect evolving threats.
What we are learning is that pre-trained detectors often perform well only on the types of fakes they have already seen. Accuracy can exceed 95 per cent on familiar datasets but drop sharply on new ones. Continuous evaluation is essential. Detectors must evolve as quickly as the deepfakes they defend against.
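The generalisation gap described above is easy to miss if accuracy is averaged across all test data. One way to surface it is to break accuracy down by the generation family that produced each fake. The sketch below is illustrative only: the toy detector, the sample structure, and the family names are hypothetical stand-ins, not part of Resaro's actual suite.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class Sample:
    is_fake: bool            # ground-truth label
    generator: Optional[str]  # generation family that produced it; None for real media


def accuracy_by_generator(
    detector: Callable[[Sample], bool],
    samples: List[Sample],
) -> Dict[str, float]:
    """Report detection accuracy per generation family, so a sharp drop on
    unfamiliar families is visible rather than averaged away."""
    correct: Dict[str, int] = {}
    total: Dict[str, int] = {}
    for s in samples:
        family = s.generator or "real"
        total[family] = total.get(family, 0) + 1
        if detector(s) == s.is_fake:
            correct[family] = correct.get(family, 0) + 1
    return {family: correct.get(family, 0) / total[family] for family in total}


# Hypothetical detector that only recognises fakes from the one family
# ("faceswap") it was trained on -- a stand-in for a pre-trained model.
def toy_detector(sample: Sample) -> bool:
    return sample.generator == "faceswap"


samples = (
    [Sample(True, "faceswap")] * 95    # fakes from a familiar family
    + [Sample(False, None)] * 5        # real media
    + [Sample(True, "lipsync")] * 10   # fakes from a novel family
)
report = accuracy_by_generator(toy_detector, samples)
# The per-family breakdown exposes the gap: perfect on "faceswap",
# zero on the unseen "lipsync" family.
```

Run continuously as new generation families appear, a breakdown like this turns "accuracy can exceed 95 per cent on familiar datasets but drop sharply on new ones" from an anecdote into a monitored metric.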
What drives us is restoring the possibility of believing what you see online, knowing that somewhere in the background, verification is happening.
Building Digital Trust from Singapore
Singapore has shown strong leadership in digital trust through initiatives like AI Verify, which promotes transparency and accountability in AI systems. As a Premier Member of the AI Verify Foundation, Resaro contributes to open-source testing tools that allow more organisations to innovate responsibly.
Being part of IMDA Spark accelerated how quickly we could test these ideas with real partners facing real challenges. Spark held space for unconventional approaches and connected us with organisations exploring constructive uses of generative AI, from content verification to accessibility tools. What these partnerships taught us is that technology earns trust through rigorous, open, and repeated testing.
Singapore's approach shows that regulation and innovation can reinforce each other. When we contribute globally, we do so with lessons grounded in real deployments here at home.
Looking Ahead
Deepfakes aren’t going away. They are becoming a permanent part of our digital landscape, increasingly realistic and convincing. Our challenge is to ensure they don’t erode trust, by rigorously testing the tools designed to detect them.
Our goal at Resaro is not to stop the creation of synthetic media, but to create assurance around it. We want people to engage with these technologies safely and confidently.
A decade from now, I hope we are remembered as advocates who insisted that trust must evolve alongside technology. The same tools that deceive can be used to defend. If we get this right, the technology that threatens trust could become its greatest guardian.
Pathfinding in AI assurance is not for the fainthearted, but with partnerships like IMDA Spark and Singapore's broader commitment to digital trust, we are building the foundations for a more verifiable future.