Written in collaboration with our partner, the Defence Science and Technology Agency (DSTA).
By Timothy Lin, April Chin, Dr. Sebastian Hallensleben, and Stanimir Arnaudov of Resaro, and Daniel Lee (DSTA).
The NATO Science & Technology Organization (STO) has accepted this research submission for presentation at the NATO IST-210 Research Symposium on AI Security and Assurance for Military Systems. The paper builds on ongoing work with our partners and customers across defence and national security.
Abstract
The proliferation of sophisticated deepfake technologies presents unprecedented challenges for defense organizations, as adversaries can exploit these technologies to manipulate information environments, disrupt decision-making, undermine trust in leadership, and compromise operational security.
Unlike commercial applications, defense contexts demand reliability standards that account for asymmetric threat landscapes and the need to protect decision-making from adversarial influence.
In this paper, we introduce the AI Solution Quality Index (ASQI) for deepfake detection systems, translating high-level requirements into specific technical evaluations through six quality indicators spanning accuracy, robustness, transparency, stability & bias, adaptability & updates, and speed & throughput.
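To illustrate the idea of rolling indicator-level evaluations into a single index, here is a minimal sketch in Python. The six indicator names come from the abstract; the weights, score ranges, and aggregation formula (a weighted mean) are hypothetical placeholders, not the ASQI methodology from the paper.

```python
# Hypothetical sketch: combine six quality indicator scores, each
# normalised to [0, 1], into a single index via a weighted mean.
# Indicator names follow the abstract; the weights are illustrative only.

INDICATOR_WEIGHTS = {
    "accuracy": 0.30,
    "robustness": 0.20,
    "transparency": 0.10,
    "stability_and_bias": 0.15,
    "adaptability_and_updates": 0.10,
    "speed_and_throughput": 0.15,
}

def asqi_score(scores: dict[str, float]) -> float:
    """Weighted mean of normalised indicator scores."""
    if set(scores) != set(INDICATOR_WEIGHTS):
        raise ValueError("scores must cover exactly the six indicators")
    for name, value in scores.items():
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must lie in [0, 1]")
    return sum(INDICATOR_WEIGHTS[k] * scores[k] for k in INDICATOR_WEIGHTS)

example = {
    "accuracy": 0.92,
    "robustness": 0.75,
    "transparency": 0.60,
    "stability_and_bias": 0.80,
    "adaptability_and_updates": 0.70,
    "speed_and_throughput": 0.85,
}
print(asqi_score(example))
```

A weighted mean is only one possible design choice; a real assurance index might instead gate on minimum thresholds per indicator, so that a system cannot trade away robustness for raw accuracy.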
We also explain our data collection strategy, which combines over 3,000 “deepfakes in the wild” examples with controlled synthetic generation using multiple manipulation techniques across video and audio modalities, incorporating realistic post-processing effects to simulate the operational conditions encountered in defense environments.
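As a flavour of what “realistic post-processing effects” can mean in practice, the sketch below applies two simple degradations to a synthetic audio signal: additive noise at a target signal-to-noise ratio and bit-depth reduction. This is an illustrative stand-in using only the Python standard library; the specific effects and parameters are assumptions, not the pipeline described in the paper.

```python
# Illustrative post-processing sketch: degrade a synthetic audio
# signal to mimic in-the-wild conditions. Parameters are hypothetical.
import math
import random

def add_noise(samples: list[float], snr_db: float, seed: int = 0) -> list[float]:
    """Add Gaussian noise at a target signal-to-noise ratio (in dB)."""
    rng = random.Random(seed)
    signal_power = sum(s * s for s in samples) / len(samples)
    noise_power = signal_power / (10 ** (snr_db / 10))
    sigma = math.sqrt(noise_power)
    return [s + rng.gauss(0.0, sigma) for s in samples]

def reduce_bit_depth(samples: list[float], bits: int) -> list[float]:
    """Quantise samples in [-1, 1] to the given bit depth."""
    levels = 2 ** (bits - 1)
    return [max(-1.0, min(1.0, round(s * levels) / levels)) for s in samples]

# One second of a 440 Hz tone at 16 kHz as a stand-in for a voice clip.
clean = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
degraded = reduce_bit_depth(add_noise(clean, snr_db=20), bits=8)
```

Video counterparts (re-compression, resizing, frame drops) follow the same pattern of chaining small, parameterised degradations onto clean synthetic outputs.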