Our previous post introduced the actors of the AI assurance ecosystem and their roles in building a robust foundation for responsible AI. Grouping them into five broad categories, it explained the responsibilities of government institutions, actors in the AI supply chain, societal and research organisations, investors and insurers, and third-party assurance providers. But who the actors are is only half the picture. In this blog post, we look more closely at how these groups interact to assure AI, drilling down on the flows that connect the network and the purposes they serve.
Resaro's Institutional Map depicts five distinct types of flows that characterise the interactions binding the ecosystem together. Each serves a different purpose necessary for the whole to function.
Flow 1: Core Value and R&D
The first flow is the supply chain of AI itself. It begins with knowledge creation: open collaboration among academia, civil society, and the open-source community produces the foundational research and tooling on which everything else is built. This flows into the commercial AI market network, where developers, data and compute providers, and deployers interact to build, sell, and operate AI products through contractual relationships, documentation exchanges, and feedback loops. The same source knowledge also informs the broader assurance ecosystem directly: academic research provides the evaluation methods and metrics that assurance providers rely on, and open-source foundations supply the testing frameworks they use in practice. Standards alignment loops formalise and bind this Core Value and R&D flow together, with commercial and societal actors contributing technical expertise and real-world experience to the standards development process.
A particularly critical link in this flow is the system handoff between developers and deployers: this is where assurance documentation must be transferred alongside the AI system itself. But information asymmetries mean that buyers frequently cannot verify vendors' claims about AI quality. Strengthening this handoff is one of the ecosystem's most pressing practical challenges.
Flow 2: Governance and Formal Mandate
The second flow originates among the government institutions at the top of the map and defines the overall structure within which the ecosystem operates. It works in three ways. First, rule-setting: policymakers create the regulations that impose requirements on developers and deployers and define the rights of end-users, while the judiciary interprets those rules and sets the liability precedents that drive compliance beyond what regulators alone can enforce. Second, empowerment: policymakers delegate authority to regulators, accreditation bodies, and notified bodies through legal mandates and formal designations, creating the chain of institutional trust that gives regulatory conformity assessment its legal weight. Third, monitoring and enforcement: regulators verify compliance across the Core AI Value Network and oversee the activities of the assurance market itself, ensuring that the bodies issuing certificates are worthy of the trust placed in them.
Flow 3: Societal Feedback and Accountability
The third flow runs in the opposite direction: bottom-up, from individuals and societal groups through to governance institutions and the market. It aggregates people’s real-world experiences with AI through multiple channels: end-users and affected persons provide feedback and report harms directly to deployers and regulators; civil society organisations (CSOs) amplify individual cases into systemic advocacy and apply pressure on both market actors and policymakers; trade unions negotiate the terms of AI deployment in workplaces; and societal actors collectively shape expectations for what good assurance looks like — including calling out "audit-washing" when certification processes lack substance.
The actors and institutions driving this flow are chronically undervalued and under-resourced. This is a significant problem for the ecosystem as a whole, because they offer crucial insights that should inform other actors’ practices. Real-world data on actual user behaviour and affected people’s experiences with AI are among the most valuable inputs available to governance, standards development, and system design. Without them, regulators design rules that may not reflect lived reality, and standards bodies develop requirements that may miss what actually matters.
Flow 4: Assurance Engagement
The fourth flow connects the Core AI Value Network to the third-party assurance market, transforming quality claims into verified facts. There are two foundational interactions: first, engagement between third-party assurance providers and professional bodies and standards development organisations (SDOs) ensures the assurance market is grounded in expertise and standards, establishing the basis for trust. Second, assurance providers consume the services and infrastructure of data and compute providers.
The main substance of this flow, however, consists of the formal, paid engagements between assurance providers and the members of the Core AI Value Network, i.e., infrastructure providers, AI developers, and deployers: mandatory conformity assessments for high-risk systems, voluntary certifications, readiness assessments, internal audit support, and advisory services for governance design. These engagements generate evidence that is central to the entire ecosystem: detailed reports and certificates that satisfy regulators, inform deployers' procurement and governance decisions, validate developers' performance claims, and feed directly into the risk calculations of insurers and investors. The assurance market does not just serve the organisations it audits; it produces the evidence infrastructure on which the whole ecosystem depends.
Flow 5: Financial Engagement
The fifth flow connects the market for capital and risk to the Core AI Value Network, and may be a more powerful governance mechanism than it initially appears. Pre-deployment, investors fuel AI innovation by providing risk capital, and do so through due diligence processes that draw on academic research, open-source signals, and assurance reports to validate technical claims and assess scalability. Post-deployment, insurers take on the financial liability of AI failure in exchange for transparency into risks, relying heavily on certifications, audit reports, and standards compliance evidence to price their policies.
What makes this flow significant beyond its transactional dimension is the upstream pressure it can create. Insurers set "insurability requirements" that ripple back through deployers to developers, effectively mandating product safety features through market demand. Investors signal that audit-ready AI is a condition for capital, creating financial incentives for governance investment that regulation alone may not produce, especially when enforcement is weak. In domains where standards and regulations have not yet caught up with the technology, financial markets can move faster to create pressures for safer AI.
All Five Must Function
Despite being presented separately, the five flows are not independent. A weak societal feedback flow means regulators lack the real-world evidence to enforce effectively. A thin assurance market means insurers cannot price risk accurately and investors cannot conduct meaningful due diligence. A poorly functioning governance flow means no actor has the mandate or authority to intervene when things go wrong. Each flow depends on the others to do its job.
However, not all five flows are equally well-developed today. Some are mature and well-resourced; others remain underinvested, fragmented, or not yet operationalised at the scale the ecosystem requires. Our next post will look at the specific functions and activities that make up AI assurance in practice, and explain which are already up and running, and which remain works in progress.