AI assurance helps ensure claims about AI capabilities and risks are well-founded and justified. It is a key driver of responsible innovation. Meaningful, standardised ways to measure AI capabilities improve transparency across the industry, which supports healthy competition. A better understanding of what AI systems can and cannot do helps teams focus innovation where it matters most, and make more informed deployment decisions. This leads to fewer harmful incidents, stronger trust from users, greater business value, and faster adoption of AI in real-world settings.
Effective assurance relies on an entire ecosystem of assurance actors, institutions, defined responsibilities, frameworks, testing tools, and more. Crucially, this ecosystem does not have to be built from the ground up: a long history of effective quality and safety assurance in other industries serves as a strong foundation. Core institutions such as accreditation bodies, and processes such as standards development, are already well established and can be adapted to cover AI. Resaro is one of the few organisations that considers and understands the full AI assurance landscape.
As a service to the assurance community and beyond, we are making three maps available that showcase the complex emerging AI assurance ecosystem in a structured way – including elements that are well-established for earlier technologies but still nascent for AI.
These maps provide a comprehensive overview of the ecosystem’s essential players, functions, and activities, alongside detail about the roles, responsibilities, relationships, and requirements that make all elements work together. They demonstrate the complex interdependencies, feedback and accountability loops, and implicit and explicit incentive structures linking established assurance actors with AI-specific newcomers to the ecosystem. Together, they advance understanding of the assurance ecosystem as a whole:
- WHO are the relevant actors?
- WHAT do they need to do to fulfil their roles?
- HOW is assurance done in practice?
Each map focuses on a different subset or dimension of the ecosystem:
1: Objects of Assurance
This map illustrates what can actually be assured in the AI ecosystem, broadly categorised into (1) the management system of the organisation, (2) the AI system and its technical artefacts, and (3) the AI system’s impact. For every assurance object, the map details its assurance properties (i.e., the qualities that matter) and the evidence required to judge those properties.
It can be used to answer questions such as “What needs to be assessed to assure that an organisation has the governance, processes, and people needed to operate AI responsibly?” and “What are typical evidence artefacts supporting assurance of data and datasets used to train, test, and run an AI system?”
2: Institutional Mapping
This map surveys the ecosystem’s institutional actors, their core responsibilities in the assurance process, and their mutual interdependencies, which together form an intricate network of actors supporting robust assurance. Critically, it delineates both formal and informal interactions, surfacing relationships between actors that are easily overlooked.
This map can be used to answer questions such as “What role do insurers and investors play in AI assurance?”, “How do societal actors’ feedback and accountability efforts support AI assurance?”, and “Who are the essential actors in the market for trust, and who are their enablers?”
3: Functional Mapping
This map zooms in on the different functions the various actors fulfil in the “assurance stack”. It illustrates how research and policy communities create standardised frameworks and methods, which assurers use to assess AI systems’ conformity; that assessment work is in turn subject to oversight and monitoring, generating insights and lessons that feed back into new methods, frameworks, and assurance processes.
The map can help answer questions such as “How does scientific research translate into verifiable trust in AI?” and “What are the core functions that operationalise assurance in practice?”
These maps are living documents that illustrate the emerging AI assurance ecosystem. As the space develops, some information may become irrelevant or outdated, and we intend to reflect those changes in the maps via regular updates. We encourage feedback, questions, and suggestions for improvement via contact@resaro.ai!
We are also excited to announce a series of reports that Resaro is launching in collaboration with Partnership on AI. These reports aim to promote the development of the AI assurance ecosystem by clarifying the need for independent assurance to build trust in AI, mapping interdependencies between assurance providers and other elements of the ecosystem, and identifying priorities for policy intervention to support effective AI assurance.
Readers interested in learning more about the AI assurance ecosystem will find the reports on our website as they are released.