13.05.2024
As the use of artificial intelligence (AI) has become more prevalent, so too have cases of AI failures. It is important to understand that while there are intentional modes of AI failure—for example, as a result of adversarial cyberattacks—equally distressing are unintentional failures. Unintentional failures usually occur because AI systems are designed poorly, not properly validated or tested, or wrongly deployed by users.
From self-driving car accidents that resulted in pedestrian fatalities to erroneous predictions and diagnoses by AI algorithms designed to assist with patient care in hospitals, serious concerns have been raised about the safety and reliability of AI decision-making in real-world environments. Naturally, this has led to a decline in public trust. In this context, the development of trustworthy AI is not just a technical challenge but a business imperative. AI assurance emerges as a critical component in this landscape, ensuring that AI systems are reliable, ethical, and compliant.
AI assurance is necessary to establish and maintain trust in AI systems. Drawing inspiration from assurance services in fields like accounting and cybersecurity, AI assurance is the “infrastructure” for thorough checking, verification and communication of reliable evidence about the trustworthiness of an AI system. It is the bedrock upon which the edifice of artificial intelligence is built, scrutinising every layer—from the algorithms at the core to the data inputs and outputs—ensuring they are performing as intended, meeting stringent standards of safety and robustness.
This involves evaluating AI systems against established standards, regulations, and guidelines, thereby enabling stakeholders in the AI ecosystem to build justified trust in these systems' development and usage.
Across jurisdictions and markets, different approaches to carrying out AI assurance are emerging. For example, the UK's Centre for Data Ethics and Innovation provides a framework for organisations to build a robust AI assurance process.
By integrating AI assurance into your operations, you not only avoid the pitfalls of non-compliance but also showcase your commitment to responsible AI practices.
Consider, for instance, a financial institution deploying an AI-driven decision-making tool for credit assessments. By adhering to the rigorous standards set forth by international bodies, this institution does more than just comply; it demonstrates to its customers and stakeholders a steadfast dedication to ethical AI, positioning itself as a vanguard of responsible innovation.
Risk management is another area where AI assurance proves invaluable. It is about being proactive in spotting and addressing potential issues before they escalate. Think of it as quality control for AI. For instance, a tech company might use AI assurance to catch a flaw in its data processing algorithms that could lead to incorrect analytics, preventing potential customer service issues and safeguarding the company’s operational integrity.
Lastly, the importance of brand reputation in the digital age cannot be overstated. Consumers today are more informed and conscious about the responsible use of AI. By prioritising AI assurance, companies can enhance their reputation and gain consumer trust. For example, an e-commerce company using AI for personalised recommendations can proactively use assurance measures to show customers that their personal data is used responsibly, building trust and loyalty.
As AI proliferates across various sectors, AI assurance continues to gain significance. It is a multifaceted approach that ensures AI systems are not only effective but also performing as intended, compliant, and trustworthy. For businesses, investing in AI assurance is not just about risk mitigation; it is about building a foundation of trust with customers and stakeholders in the digital age. For C-suite leaders, embracing AI assurance is crucial in steering organisations towards a future where AI is an asset rather than a liability.
By the UK’s Department for Science, Innovation, and Technology