By Timothy Lin, April Chin, Dr. Sebastian Hallensleben, Stanimir Arnaudov, and Daniel Lee
Abstract
The proliferation of sophisticated deepfake technologies presents unprecedented challenges for defense organizations, as adversaries can exploit these technologies to manipulate information environments, disrupt decision-making, undermine trust in leadership, and compromise operational security.
Unlike commercial applications, defense deployments demand reliability standards that account for asymmetric threat landscapes and the need to protect decision-making from adversarial influence.
In this paper, we introduce the AI Solution Quality Index (ASQI) for deepfake detection systems, translating high-level requirements into specific technical evaluations through six quality indicators spanning accuracy, robustness, transparency, stability & bias, adaptability & updates, and speed & throughput.
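To make the aggregation concrete, the sketch below shows one hypothetical way a composite index could be computed from the six indicator scores. The equal default weights, the `ASQIScores` field names, and the [0, 1] score scale are our illustrative assumptions, not the aggregation rule defined by ASQI.

```python
from dataclasses import dataclass


@dataclass
class ASQIScores:
    """Per-indicator scores on [0, 1], named after the six ASQI indicators.

    The field names and the score scale are illustrative assumptions.
    """
    accuracy: float
    robustness: float
    transparency: float
    stability_bias: float
    adaptability_updates: float
    speed_throughput: float


def asqi_index(scores: ASQIScores, weights: list[float] | None = None) -> float:
    """Aggregate the six indicator scores into a single composite index.

    Equal weights are a placeholder assumption; the actual ASQI
    aggregation rule is defined in the paper.
    """
    vals = [
        scores.accuracy,
        scores.robustness,
        scores.transparency,
        scores.stability_bias,
        scores.adaptability_updates,
        scores.speed_throughput,
    ]
    if weights is None:
        weights = [1.0 / len(vals)] * len(vals)
    return sum(w * v for w, v in zip(weights, vals))


# Example: a detector that scores well on accuracy but lower on transparency.
print(asqi_index(ASQIScores(0.92, 0.85, 0.60, 0.78, 0.70, 0.88)))
```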
We also describe our data collection strategy, which combines over 3,000 “deepfakes in the wild” with samples generated under controlled conditions using multiple manipulation techniques across the video and audio modalities. Realistic post-processing effects are applied to these samples to simulate the operational conditions encountered in defense environments.
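As an illustration of that post-processing step, the following sketch degrades a single video frame with lossy re-compression and additive sensor noise. The effect chain, the `degrade_frame` helper, and all parameter values are assumptions chosen for demonstration, not the pipeline used in our data collection.

```python
import io

import numpy as np
from PIL import Image


def degrade_frame(frame: np.ndarray, jpeg_quality: int = 40,
                  noise_std: float = 5.0, seed: int = 0) -> np.ndarray:
    """Degrade a uint8 RGB video frame to mimic operational capture conditions.

    Applies lossy JPEG re-encoding (simulating transmission compression)
    followed by Gaussian noise (simulating low-light sensor noise).
    The chain and parameters are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)

    # Re-encode at low JPEG quality to simulate lossy transmission.
    buf = io.BytesIO()
    Image.fromarray(frame).save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    degraded = np.asarray(Image.open(buf).convert("RGB"), dtype=np.float32)

    # Add Gaussian noise to simulate sensor noise, then clamp to valid range.
    degraded += rng.normal(0.0, noise_std, degraded.shape)
    return np.clip(degraded, 0, 255).astype(np.uint8)


# Example: degrade a synthetic 64x64 test frame.
frame = (np.random.default_rng(1).random((64, 64, 3)) * 255).astype(np.uint8)
print(degrade_frame(frame).shape)
```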