13.05.2024

As the use of artificial intelligence (AI) has become more prevalent, so too have cases of AI failures. It is important to understand that while some failures are intentional, for example the result of adversarial cyberattacks, unintentional failures are equally distressing. Unintended failures usually occur because AI systems are poorly designed, not properly validated or tested, or wrongly deployed by users.


From self-driving car accidents that resulted in pedestrian fatalities to erroneous predictions and diagnoses by AI algorithms designed to assist with patient care in hospitals, serious concerns have been raised about the safety and reliability of AI decision-making in real-world environments. Naturally, this has led to a decline in public trust. In this context, the development of trustworthy AI is not just a technical challenge but a business imperative. AI assurance emerges as a critical component in this landscape, ensuring that AI systems are reliable, ethical, and compliant.

Understanding AI Assurance

AI assurance is necessary to establish and maintain trust in AI systems. Drawing inspiration from assurance services in fields like accounting and cybersecurity, AI assurance is the “infrastructure” for thorough checking, verification and communication of reliable evidence about the trustworthiness of an AI system. It is the bedrock on which trustworthy AI is built: it scrutinises every layer, from the core algorithms to the data inputs and outputs, to ensure they perform as intended and meet stringent standards of safety and robustness.

This involves evaluating AI systems against established standards, regulations, and guidelines, thereby enabling stakeholders in the AI ecosystem to build justified trust in these systems' development and usage.

AI assurance is not just about instilling trust in AI systems; it is also about confirming their trustworthiness to ensure that they perform as expected and bring about desired benefits without causing unintended harm.

Different approaches to carrying out AI assurance are emerging across jurisdictions and markets. For example, the UK’s Centre for Data Ethics and Innovation provides a framework that organisations can use to build a robust AI assurance process:

  1. Outcome-based Assessments — Before AI systems are deployed, they should be assessed so that potential negative consequences of their use can be understood and mitigated. Once implemented, these systems should be assessed again to determine whether they have met their intended outcomes and objectives and adhered to responsible AI standards. Examples of such assessments include impact assessments and impact evaluations.
  2. Audits — AI systems should also undergo rigorous audits, such as compliance audits, to ensure that obligations under existing laws and guidelines have been met. AI systems can also inadvertently perpetuate biases; conducting a bias audit helps identify and mitigate these biases, ensuring that AI decisions are fair.
  3. Verification — If the systems pass muster, they should receive accreditation or verification, for instance through the issuance of a certification or rating. This provides a clear indicator that the systems comply with the necessary standards and guidelines. Conformity assessments are an example of verification carried out before an AI system is released into the market.
  4. Technical Evaluations — Upon verification, AI systems should undergo performance testing, measured against established benchmarks and requirements, to ensure that they perform as expected in real-world scenarios and are as error-free and reliable as possible (a minimal sketch of such checks follows this list).
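
To make the bias audit and performance testing steps above more concrete, the sketch below shows what minimal automated checks might look like in Python. The accuracy benchmark, parity limit, column names and data are hypothetical placeholders, not values prescribed by any standard; a real assurance process would use the benchmarks and fairness criteria agreed for the specific system.

```python
# Minimal, illustrative assurance checks: a performance test against an
# accuracy benchmark and a simple bias audit using the demographic parity
# difference. All thresholds and column names below are hypothetical.
import pandas as pd
from sklearn.metrics import accuracy_score

ACCURACY_BENCHMARK = 0.90   # hypothetical minimum accuracy requirement
PARITY_GAP_LIMIT = 0.05     # hypothetical maximum acceptable outcome-rate gap

def performance_test(y_true, y_pred) -> bool:
    """Performance testing: does the model meet the agreed accuracy benchmark?"""
    return accuracy_score(y_true, y_pred) >= ACCURACY_BENCHMARK

def bias_audit(df: pd.DataFrame, group_col: str, pred_col: str) -> bool:
    """Bias audit: are positive-outcome rates similar enough across groups?"""
    rates = df.groupby(group_col)[pred_col].mean()
    return (rates.max() - rates.min()) <= PARITY_GAP_LIMIT

# Hypothetical credit-decision predictions for two demographic groups
decisions = pd.DataFrame({
    "actual":    [1, 0, 1, 1, 0, 1, 0, 1],
    "predicted": [1, 0, 1, 0, 0, 1, 0, 1],
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
})

print("Performance benchmark met:",
      performance_test(decisions["actual"], decisions["predicted"]))
print("Bias audit passed:",
      bias_audit(decisions, group_col="group", pred_col="predicted"))
```

In practice, checks like these would run as part of pre-deployment testing and ongoing monitoring, with any failure escalated for human review rather than silently ignored.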

How Organisations Can Benefit from AI Assurance

By integrating AI assurance into your operations, you not only avoid the pitfalls of non-compliance but also showcase your commitment to responsible AI practices.

Consider, for instance, a financial institution deploying an AI-driven decision-making tool for credit assessments. By adhering to the rigorous standards set forth by international bodies, this institution does more than just comply; it demonstrates to its customers and stakeholders a steadfast dedication to ethical AI, positioning itself as a vanguard of responsible innovation.

Risk management is another area where AI assurance proves invaluable. It is about being proactive in spotting and addressing potential issues before they escalate. Think of it as quality control for AI. For instance, a tech company might use AI assurance to catch a flaw in its data processing algorithms that could lead to incorrect analytics, preventing potential customer service issues and safeguarding the company’s operational integrity.
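
As an illustration of this kind of “quality control”, the sketch below shows a simple, hypothetical data-validation step that flags flaws in an analytics input before they can propagate into incorrect results. The table structure, column names and rules are assumptions made for the example, not a prescribed checklist.

```python
# Hypothetical pre-analytics data-quality check: validate an input batch so
# that data-processing flaws surface early instead of as incorrect analytics.
import pandas as pd

def validate_transactions(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality problems found in the input batch."""
    problems = []
    required = {"customer_id", "amount", "timestamp"}
    missing = required - set(df.columns)
    if missing:
        return [f"missing columns: {sorted(missing)}"]
    if df["customer_id"].isna().any():
        problems.append("null customer_id values")
    if (df["amount"] < 0).any():
        problems.append("negative transaction amounts")
    if df.duplicated().any():
        problems.append("duplicate rows")
    return problems

# Example: a batch containing a negative amount is caught before analytics run
batch = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "amount": [25.0, -4.0, 13.5],
    "timestamp": pd.to_datetime(["2024-05-01", "2024-05-01", "2024-05-02"]),
})

issues = validate_transactions(batch)
if issues:
    print("Data quality issues found:", issues)  # halt the pipeline for review
```

Catching the flawed record at this stage is far cheaper than tracing incorrect analytics back to their source after customers have already been affected.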

Lastly, the importance of brand reputation in the digital age cannot be overstated. Consumers today are more informed and conscious about the responsible use of AI. By prioritising AI assurance, companies can enhance their reputation and gain consumer trust. For example, an e-commerce company using AI for personalised recommendations can proactively use assurance measures to show customers that their personal data is used responsibly, building trust and loyalty.

Conclusion

As AI continues to proliferate across various sectors, AI assurance is gaining significance. It is a multifaceted approach that ensures AI systems are not only effective but also perform as intended, remain compliant, and stay trustworthy. For businesses, investing in AI assurance is not just about risk mitigation; it is about building a foundation of trust with customers and stakeholders in the digital age. For C-suite leaders, embracing AI assurance is crucial in steering organisations towards a future where AI is an asset rather than a liability.

Glossary

Definitions from the UK’s Department for Science, Innovation and Technology

  • AI assurance: The process of assessing, auditing and testing to ensure that the AI system is performing as intended.
  • Impact Assessment: This process involves anticipating the effects of AI systems before they are deployed. It is a critical step in understanding and mitigating potential negative consequences on society or specific groups.
  • Impact Evaluation: Post-implementation assessments are equally important. They help in determining whether the AI systems meet the intended objectives and adhere to ethical standards.
  • Bias Audit: AI systems can inadvertently perpetuate biases. Conducting a bias audit is essential in identifying and mitigating these biases, ensuring that AI decisions are fair.
  • Compliance Audit: This ensures that AI systems adhere to existing laws and ethical guidelines, which is crucial in maintaining public trust and avoiding legal repercussions.
  • Certification: Independent verification through certification provides an external validation of an AI system's adherence to standards and best practices.
  • Conformity Assessment: Pre-market checks are vital for ensuring that AI systems comply with regulatory requirements before they are released into the market.
  • Performance Testing: Measuring AI systems against established benchmarks and requirements ensures that they perform as expected in real-world scenarios.
  • Formal Verification: The use of mathematical techniques to verify the compliance of AI systems adds an additional layer of assurance, ensuring that these systems are error-free and reliable.