A Field Guide to the AI Assurance Ecosystem: Who's Who and Why It Matters

Our previous two posts mapped out the objects of assurance: what needs to be assured across the management, technical stack, and real-world impact of AI systems. But objects don't assure themselves. Ensuring that AI is developed and deployed responsibly requires a whole network of institutions that share the responsibility. The overall picture is considerably more complex than most people assume.

Resaro's Institutional Map organises this ecosystem into five zones, each representing a distinct category of actors. Together they form a complete picture of who plays a role in AI assurance, and why each zone is indispensable. Understanding who these actors are and what roles they play is the essential first step to navigating the ecosystem effectively.

At the top: the Rule-Setters

The Framework & Governance zone sits at the top of the map: it is the source of the formal laws, mandates, and standards that flow down to the entire ecosystem. Its actors are diverse but complementary. Policymakers (parliaments and ministries) define mandatory rules, allocate responsibilities and liabilities, and balance the demands of innovation, competitiveness, and public protection. At the international level, inter-governmental organisations and forums such as the OECD and the UN convene governments and other AI stakeholders to set global norms and promote policy interoperability, ensuring that AI governance does not fragment along national lines. Regulators and market surveillance authorities then translate these laws into enforcement, intervening when AI systems create unacceptable risks or violate legal requirements.

Sitting alongside them are the institutions that define the technical and professional infrastructure of assurance. Standards Development Organisations convene global experts to create the shared language that makes consistency, interoperability, and trade possible: standards like ISO/IEC 42001 that give concrete meaning to abstract requirements. Accreditation bodies act as the "meta-assurer," verifying that the conformity assessment bodies performing audits are themselves competent, impartial, and independent. Professional bodies and consortia define the competencies, ethics, and conduct expected of individual assurance practitioners. And the judiciary provides the ultimate backstop: interpreting the law, assigning liability, and setting the legal precedents that define the standard of care for the entire ecosystem.

At the centre: the Builders and Deployers

The Core AI Value Network runs horizontally through the middle of the map, representing the actors directly involved in building, selling, and operating AI systems. Data and compute providers supply the foundational raw materials - training datasets, infrastructure, and platforms - on which everything else is built. Model providers and system developers are the central engine of the value chain: they build and train AI components, place systems on the market, conduct internal assurance, manage system-level risks, and generate the assurance documentation that all downstream actors rely on. Procurers and deployers, spanning enterprises, SMEs, and public sector organisations, are the "customer" side of the chain, but carry significant responsibility of their own: as the owners of a system's use in a specific context, they are accountable for the deployed solution, the human oversight arrangements around it, and its real-world impact on individuals and society.

At the bottom: the Accountability Engine

The Societal & Research zone forms the bottom-up foundation of the ecosystem: the sources of public accountability, foundational knowledge, and real-world feedback that close the loop between rule-setting, AI development and lived experience. End-users and affected persons are the ultimate beneficiaries of assurance, and their confidence is essential for a functioning AI market; they exert influence through the governance institutions at the top of the map, via elections, complaints, and litigation. CSOs and advocacy groups act as the public conscience of the ecosystem, holding governments and corporations accountable and ensuring that marginalised communities are not left out of the picture. Trade unions and labour organisations give workers, who are among the most directly affected by AI deployment, a formal voice in how systems are designed and used. Academics and research institutes generate the pre-normative evidence, measurement frameworks, and evaluation methods that underpin standards and assurance practices across the value chain. And open-source projects and foundations democratise access to AI tools and, critically, develop the open evaluation frameworks and testing toolkits that the entire ecosystem relies on for assessment.

On the sides: the Market for Capital and Trust

Two service markets plug into the core value network from either side. On the left, the Financial & Risk zone represents the market for capital and risk, where financial resources are allocated to innovation and where liability is quantified and transferred. Investors and venture capital fund innovation and scale, and in an ideal ecosystem act as a governance lever by requiring responsible, audit-ready AI as a condition for investment. Insurers underwrite the risks associated with AI adoption and, by tying coverage and pricing to assurance practices, create powerful financial incentives for organisations to embed robust controls. On the right, the Third-Party Assurance zone represents the market for trust and verification. Advisory and audit firms help developers and deployers design governance frameworks, perform readiness assessments, and provide external attestation on the effectiveness of internal controls. Market-based conformity assessment bodies provide voluntary testing, inspection, and certification services. And designated notified bodies carry a formal legal mandate to perform mandatory conformity assessments for high-risk AI systems, a role created by legislation such as the EU AI Act.

Assurance actors form a network, not a hierarchy

What the map makes clear is that the AI assurance ecosystem is not a simple chain of command. Top-down mandates from policymakers only work because they are complemented by bottom-up accountability from CSOs and end-users. Effective audits and assessments by third-party assurers depend on methods, frameworks, and tools created by research and open-source communities. The financial discipline of investors and insurers depends on the quality of assurance signals generated by CABs and advisory firms. Each zone depends on the others to function.

Knowing your position in this map, and understanding who you depend on and who depends on you, is foundational to doing assurance well. Our next post will look at how all these actors actually connect: the five flows of activity that bind the ecosystem together, and where those connections are most likely to break down.