The Resaro AI Solutions Quality Index (ASQI) provides a transparent, use-case-specific measure of AI quality — for applications such as customer chat services, object recognition, deepfake detection, or X-ray anomaly identification.
It is our ambition to make evidence of quality, rather than hype, the driver of AI usage across the world. Without use-case-specific quality indices, comparing AI solutions or deciding when they are “good enough” is nearly impossible, leaving the market opaque and slowing innovation.
1. Specific to the use case
Measuring a customer service chatbot isn’t the same as measuring a drone landing system or a deepfake detection solution. ASQI balances flexibility with specificity, so quality remains meaningful across different use cases.
2. A shared language
A quality index needs to be meaningful to business, governance, and technical teams.
ASQI creates a common language for AI quality that all stakeholders can work with.
3. Non-binary
Quality is not a yes/no characteristic. ASQI distinguishes five levels for each indicator — from best-in-class to minimal concern.
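As a minimal sketch of how such a graded scale could work (the thresholds, level labels, and function names below are illustrative assumptions, not taken from the ASQI specification):

```python
# Hypothetical sketch: map a raw test score in [0.0, 1.0] onto a
# five-level quality scale. All thresholds and labels are
# illustrative only, not the actual ASQI level definitions.
LEVELS = [
    (0.95, 5, "best-in-class"),
    (0.85, 4, "strong"),
    (0.70, 3, "adequate"),
    (0.50, 2, "weak"),
    (0.00, 1, "minimal"),
]

def score_to_level(score: float) -> tuple[int, str]:
    """Return the (level, label) pair for a raw test score."""
    for threshold, level, label in LEVELS:
        if score >= threshold:
            return level, label
    return 1, "minimal"

print(score_to_level(0.92))
```

A graded scale like this lets stakeholders see not only whether an indicator passes, but how much headroom or shortfall there is.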
4. Mapped to automatable technical tests
Every ASQI indicator links to technical tests that can be automated, translating results into the ‘shared language’ of the index.
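The linkage between indicators and automated tests can be sketched as a simple registry (the indicator names, test functions, and returned scores below are hypothetical, not part of the actual ASQI test suite):

```python
# Hypothetical sketch: each indicator is linked to an automatable
# test returning a score; running the registry collects the results
# that are then translated into the shared-language index.
from typing import Callable

def test_answer_accuracy() -> float:
    # Stand-in for an automated evaluation run against a benchmark.
    return 0.91

def test_refusal_rate() -> float:
    # Stand-in for a safety test on unsafe-request handling.
    return 0.97

INDICATOR_TESTS: dict[str, Callable[[], float]] = {
    "answer_accuracy": test_answer_accuracy,
    "unsafe_request_refusal": test_refusal_rate,
}

def run_assessment() -> dict[str, float]:
    """Run every linked test and collect one score per indicator."""
    return {name: test() for name, test in INDICATOR_TESTS.items()}

print(run_assessment())
```

Because each indicator resolves to a concrete, repeatable test, the same assessment can be re-run whenever the underlying AI system changes.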
5. The right level of detail
A quality index that is too broad is meaningless; one that is too detailed is overwhelming.
ASQI uses about two dozen indicators spanning key aspects of performance and risk handling — a practical balance for real-world decision-making.
6. Compatible with established AI governance frameworks
ASQI can support established regulations and standards like the EU AI Act, ISO/IEC 42001, AI Verify, and company policies.
Many ASQI indicators directly support compliance with these established governance frameworks.
7. Compatible with standardised task catalogues
While a quality index operates at the system or solution level, it will inevitably refer to the 2-3 core tasks that a solution is designed to address.
Such references should point to broadly recognised task catalogues, as these are currently being developed in various standardisation initiatives.
ASQI Engineer is an open-source framework for testing and assuring AI systems. Built for scale and reliability, it uses containerised test packages, automated assessments, and repeatable workflows to make evaluation transparent and robust.
With ASQI Engineer, organisations can also run ASQIs that they have created themselves, giving teams full control of and confidence in AI quality.
We invite interested communities to pilot and co-create this approach. See it in action with the beta Chatbot ASQI — a real-world use case showing how indicators, technical tests, and governance requirements come together.