The Resaro AI Solutions Quality Index (ASQI) provides a transparent, use-case-specific measure of AI quality — for applications such as customer chat services, object recognition, deepfake detection, or X-ray anomaly identification.
It is our ambition to make evidence of quality, rather than hype, the driver of AI usage across the world. Without use-case-specific quality indices, comparing AI solutions or deciding when they are 'good enough' is nearly impossible, leaving the market opaque and slowing innovation.
1.
Specific to the use case
Assuring the quality of a customer service chatbot isn't the same as assuring a drone-testing system or a deepfake detection solution. ASQI balances flexibility across use cases with structured precision in how test solutions are deployed.
2.
A shared language
A quality index needs to be meaningful to business, governance, and technical teams. ASQI creates a common language for AI quality that all stakeholders can work with.
3.
Non-binary
Quality is not a yes/no characteristic. ASQI distinguishes 8 levels for each indicator, ranging from best-in-class down to strong training needs.
4.
Mapped to automatable technical tests
Every ASQI indicator links to technical tests that can be automated, so results translate easily into the shared language of the index.
5.
The right level of detail
A quality index that is too broad is meaningless; one that is too detailed is overwhelming. ASQI uses about a dozen indicators spanning key aspects of performance and risk handling, a practical balance for real-world decision-making.
6.
Compatible with established AI governance frameworks
ASQI can support established regulations and standards such as the EU AI Act, ISO/IEC 42001, AI Verify, and company policies. Many of its quality indicators help to support compliance with these governance frameworks.
7.
Compatible with standardised task catalogues
While a quality index is designed to operate at the system or solution level, it will inevitably refer to the 27 core tasks that a solution is designed to address. Such references should point to broadly recognised task catalogues, which are currently being developed in various standardisation initiatives.
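The idea behind points 3 and 4 above — automatable tests whose results are translated into one of eight non-binary quality levels — can be sketched as follows. This is an illustrative assumption only: the thresholds, level names, and `score_to_level` function are hypothetical, not part of the published ASQI specification.

```python
# Illustrative sketch only: mapping an automated test score onto an
# 8-level quality indicator. Level names and thresholds are hypothetical,
# not taken from the ASQI specification.

# Eight illustrative levels, ordered from weakest (1) to best-in-class (8).
LEVELS = {
    8: "best-in-class", 7: "excellent", 6: "strong", 5: "solid",
    4: "adequate", 3: "limited", 2: "weak", 1: "strong training needs",
}

def score_to_level(score: float) -> int:
    """Bucket a normalised test score in [0, 1] into one of 8 levels."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    # Evenly spaced buckets here; a real index would calibrate
    # thresholds per use case (chatbot, deepfake detection, X-ray, ...).
    return min(8, int(score * 8) + 1)

# Example: an automated chatbot accuracy test returning 0.93
level = score_to_level(0.93)
print(level, LEVELS[level])  # → 8 best-in-class
```

The non-binary scale is what gives business, governance, and technical teams a shared vocabulary: each team reads the same level, whatever the underlying test.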
ASQI Engineer is an open-source framework for testing and assessing AI systems. Built for scale and reliability, it uses conventional test packages, automated assessments, and repeatable workflows to make evaluation transparent and robust.
With ASQI Engineer, organisations can also run ASQIs that they have created themselves, giving teams full control over, and confidence in, AI quality.
We invite interested communities to pilot and co-create this approach. Try it out with the beta Chatbot ASQI, a real-world use case showing how indicators, technical tests, and governance requirements come together.