When people talk about AI assurance, they typically picture model testing and certification. Broader interpretations might extend to evaluations of data quality or documentation of training methods. But AI models and their technical backbone make up only a small fraction of what can and must be assured in practice to achieve safe and responsible AI development and deployment. The real scope is much larger; it has simply never been mapped out comprehensively. As a result, important elements are easily overlooked in conversations about AI assurance.
Resaro's Objects of Assurance map addresses this gap by describing the full scope of AI assurance in a structured, consistent way. It organises everything that can be assured into three distinct layers, each answering a different essential question about an AI system.
Layer 1 addresses the management system of the deploying organisation. The core question AI assurance must answer at this layer is: Does this organisation have the governance, processes, and people needed to operate AI responsibly? This layer covers the policies, risk management frameworks, internal audit processes, and workforce competencies that determine whether an organisation is capable of deploying AI safely, before any particular system is even considered. Consider a healthcare organisation that wants to use a diagnostic AI tool. Assurance at this layer would examine how it intends to deploy and use the system responsibly, for example by vetting the tool’s vendor and by taking steps to assess and manage the potential risks to patients arising from the tool’s use.
Layer 2 concerns the specific AI solution and its technical artefacts: the underlying model and algorithm, the training data, the technical documentation, the computing infrastructure, and the operating conditions. Here, assurance helps to answer the question: Is this particular AI solution technically sound, well-documented, and secure for its intended use? The objects in this layer are what most people have in mind when they think about AI assurance, since they concern the quality of the AI system itself. Returning to our healthcare example, assurance in this layer would focus on the AI system’s performance and the adequacy of its training data, among other things. Arguably, if the AI solution isn’t technically sound, ensuring trustworthiness through organisational governance is a near-futile endeavour. But as the Objects of Assurance map shows, assuring the technology independently of its real-world deployment impact or the deployer’s governance model is insufficient to ensure a responsible AI ecosystem.
Layer 3, AI Impact, covers the real-world consequences of AI systems in operation: how they are overseen, and whether harms and impacts are acknowledged, remedied, and fed into learning loops that improve the systems. The question assurance of these objects answers is: What actually happens to real people when this system is deployed, and are those impacts handled responsibly? This layer covers impact assessments, incident tracking, human oversight arrangements, operator competence, and the mechanisms through which affected individuals can raise concerns and seek redress. It ensures that the wider context and consequences of AI system use are considered pre- and post-deployment and inform ongoing governance processes and decisions throughout the AI lifecycle. In our healthcare hypothetical, assurance would, for example, examine whether the doctors reviewing the AI system’s diagnoses have been trained to operate the system and educated on issues such as automation bias.
Within each layer, the map organises its content using a consistent taxonomy. Each primary object of assurance, which might be a system, a process, a dataset, a document set, or a mechanism, is broken down into a set of sub-objects. These sub-objects are the concrete "hooks" that assurance activities grab onto: the narrower, auditable bundles of documentation, processes, or configurations that together make up the primary object.
Every object, at every level, is defined in the same way. It has a set of assurance properties, the qualities of that object that matter for assurance, such as performance, coverage, completeness, or independence, and a set of assurance inputs: the specific artefacts required to judge those properties. This consistent structure makes it easy to navigate across the different layers and objects by showing at a glance what assuring a given object entails.
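To make this structure concrete, the sketch below shows one way the taxonomy could be represented in code. It is a minimal illustration under our own assumptions, not part of Resaro's published map: the class names, fields, and the example object are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AssuranceProperty:
    """A quality of an object that matters for assurance,
    e.g. performance, coverage, completeness, or independence."""
    name: str
    description: str

@dataclass
class AssuranceInput:
    """A specific artefact required to judge an object's properties,
    e.g. a test report, a policy document, or a dataset datasheet."""
    name: str
    artefact_type: str  # hypothetical label, e.g. "document" or "process record"

@dataclass
class AssuranceObject:
    """A primary object of assurance or one of its sub-objects.
    Every object, at every level, carries the same two field sets."""
    name: str
    properties: list[AssuranceProperty] = field(default_factory=list)
    inputs: list[AssuranceInput] = field(default_factory=list)
    sub_objects: list["AssuranceObject"] = field(default_factory=list)

# Hypothetical Layer 2 example: training data as a primary object
training_data = AssuranceObject(
    name="Training data",
    properties=[
        AssuranceProperty("coverage", "Represents the intended operating conditions"),
    ],
    inputs=[
        AssuranceInput("Dataset datasheet", "document"),
    ],
    sub_objects=[
        AssuranceObject(name="Data provenance records"),
        AssuranceObject(name="Labelling process"),
    ],
)
```

The recursive sub_objects field mirrors the map's central design choice: because every object at every level is defined by the same pair of field sets, one can drill down from a layer to a single auditable artefact without switching vocabulary.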
Together, these three questions, covering governance, technical soundness, and real-world impact, form a complete picture of what responsible AI assurance must cover. The layers are distinct but each critical in its own right, and conflating them can lead to blind spots. A technically sound model can still be deployed irresponsibly: without adequate human oversight, without meaningful redress mechanisms, or into a context its developers never intended. Equally, a well-governed organisation with mature internal processes can still deploy a system built on flawed or biased data, producing harmful outcomes despite good intentions. And an AI solution that passes every technical test may be run by a deployer that lacks the organisational capability to assess and manage risks, respond to incidents, and improve over time.
Comprehensive assurance requires intentional coverage across all three layers. None of them is optional, and none substitutes for the others. The layered structure makes this explicit and gives practitioners and developers a concrete basis for identifying where their current assurance efforts fall short.