
01.08.2024


Introduction

While recent attention has focused on the EU's Artificial Intelligence Act ("EU AI Act"), two complementary pieces of legislation have been progressing alongside it. A revised Product Liability Directive and a new AI Liability Directive have been proposed to make it easier for consumers and businesses to bring claims concerning software, digital services, and AI.

This article addresses the implications of these proposals and provides recommendations going forward.

Background

The EU Product Liability Directive establishes a consumer protection regime that makes it easier for consumers to obtain compensation from manufacturers of defective products. It was adopted in 1985, in a technological landscape very different from today's. Defects at the time would typically have been physical defects in hardware products, such as faulty brakes that cause road accidents or broken appliances that deliver nasty electrical shocks.

Technology has changed rapidly since then. Today, software and connected services often determine the features of a physical product and control how it operates. Software includes operating systems, firmware, computer programs, applications, and AI systems. Connected services are digital services integrated into, or inter-connected with, a product in such a way that the absence of the service would prevent the product from performing one or more of its functions, such as a traffic data service for a navigation system or a voice-assistant service for a voice-controlled product. Consequently, defects in software and connected services can cause injury or material loss to a consumer just as easily as defects in hardware components.

Proposed revisions to the EU Product Liability Directive are intended to address this new technology landscape, bringing software and connected services expressly into the fold. AI systems are no exception: the revisions would treat AI as a type of software or connected service, depending on how it is provided.

AI systems also present unique problems. Modern AI systems are not coded line-by-line like traditional software. They are mathematical models optimised to learn the correlation between a given input and the resultant output. The inner workings of an AI system can be opaque, giving rise to what is commonly referred to as the "black-box effect".[1]
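
To make the "black-box effect" concrete, the following sketch (a hypothetical toy example in Python, not drawn from the directive or the cited report) trains a small neural network and shows that its decision logic is spread across more than a thousand learned numeric parameters, none of which reads as a human-inspectable rule.

```python
# Illustrative only: a toy model whose "logic" lives in opaque numeric weights.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# A simple synthetic classification task standing in for real training data.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                      random_state=0).fit(X, y)

# Unlike hand-written rules, the decision logic is distributed across all of
# the learned parameters; no single weight corresponds to a readable rule.
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"Learned parameters: {n_params}")  # 1185 for this architecture
print(model.coefs_[0][:2])                # first rows of one weight matrix: just numbers
```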

The technical complexity and interconnectivity of modern digital products and AI systems make it costly to establish claims against providers when things go awry. The proposed changes in the EU are intended to help claimants seeking compensation do so more effectively.

EU Product Liability Directive

Once the EU Product Liability Directive is revised, it will expressly:

  1. Hold providers of software, connected services, and AI responsible for compensating consumers harmed by a defective product where a digital product is the cause. This responsibility extends not only to providers of standalone digital products but also to providers of digital components and services that are integrated into, or inter-connected with, another provider's product ("Components");
  2. Penalise non-compliance with mandatory product safety requirements introduced since 1985, including AI-related product safety requirements under the EU AI Act[2] and any safety-related cybersecurity requirements under EU regulations;
  3. Extend responsibility for defects further into the product lifecycle:

    a. Digital components, such as software, connected services, and AI systems, can be updated or upgraded during a product's lifecycle. Not only do providers of Components remain responsible for any harm caused by updates or upgrades to those Components, but manufacturers of the overall product are likewise accountable for updates and upgrades to Components supplied or authorised with their product. A failure to provide updates and upgrades can also be actionable, especially where they are necessary to address cybersecurity vulnerabilities or maintain the safety of a product.

    b. Substantial changes to a product made through software updates and upgrades can extend the duration of the provider's responsibility for defects: the liability period is refreshed when substantial modifications are made to the product; and
  4. Address the impact of machine learning on AI systems. Machine learning AI systems can self-learn and adapt their behaviour over time. Consumers are entitled to trust that these adaptations will not compromise their safety, and can seek compensation from providers should harm arise from resultant defects.

In addition, limits on compensation for property damage will be removed. Consumers will be entitled to full compensation for both personal injury and any consequential property damage.

AI Liability Directive

The AI Liability Directive, on the other hand, focuses specifically on claims against providers of AI systems for their actions. It complements the impending updates to the EU Product Liability Directive by addressing the knowledge and informational gap between claimants and providers of AI systems, for consumers and businesses alike.

Once passed, one of the main impacts of the AI Liability Directive will be to lower the barrier to establishing that an AI system's harmful behaviour was caused by the system provider's actions or failure to act.

Typically, when a defective product causes harm, a claimant must be able to point to a specific failure of the provider, such as a fault in the design or a mistake in the production run, and show that this failure caused the claimant's harm. However, the inner workings of an AI system are shrouded by the "black-box effect", making it difficult to connect the autonomous actions of an AI system with a specific failure in its design or development.

The AI Liability Directive gives claimants the benefit of the doubt where it is excessively difficult for them to prove this link: causation is presumed, and it falls to the provider to rebut the presumption. This mechanism also applies to high-risk AI systems where the provider fails to comply with the safety requirements in the EU AI Act.

Consequently, providers of AI systems will be required to explain how their actions affect the behaviour of their AI systems, in order to show that those actions did not cause the system to behave in a manner harmful to the claimant. Failure to do so will be held against the provider.

Takeaways

AI businesses based in the EU, especially providers of high-risk AI systems, are likely to find their processes and practices subject to increased scrutiny and in need of justification.

To manage this, the first step is to be clear on the applicable legal obligations and to carry out the technical tests necessary to achieve compliance. The EU AI Act imposes data quality and governance requirements as well as performance measures, such as requirements on an AI system's accuracy, robustness, and resilience. The accompanying EU liability proposals make it easier to obtain compensation from AI businesses whose systems fail to meet these compliance requirements.
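
By way of illustration only (the EU AI Act does not prescribe any particular test harness), the Python sketch below shows how a provider might measure an AI system's accuracy, probe its robustness to small input perturbations, and keep a timestamped record of the results. The model, dataset, and noise level are hypothetical placeholders.

```python
# Illustrative only: a hypothetical accuracy/robustness check with an audit record.
# The model, data, and noise scale are placeholders, not EU AI Act requirements.
import json
from datetime import datetime, timezone

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-ins for the provider's real training data and model.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy on clean inputs.
clean_acc = accuracy_score(y_test, model.predict(X_test))

# A simple robustness probe: accuracy under small Gaussian input noise.
rng = np.random.default_rng(0)
X_noisy = X_test + rng.normal(scale=0.1, size=X_test.shape)
noisy_acc = accuracy_score(y_test, model.predict(X_noisy))

# Record the results so the assessment can be produced later if challenged.
record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "clean_accuracy": round(clean_acc, 4),
    "noisy_accuracy": round(noisy_acc, 4),
    "noise_scale": 0.1,
}
print(json.dumps(record, indent=2))
```

In practice, records of this kind would be versioned alongside the model and retained so that they can be produced if a claim arises.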

Next, conduct and document compliance testing and assessments regularly, so as to be ready to respond under pressure. The EU proposals give claimants greater access to information about AI systems to support their claims, and AI businesses can more easily be called upon to justify their approach. Robust, well-documented compliance assessments and testing can help vindicate the design and development of an AI system and pave the way to a swifter and more favourable resolution for the provider.

Non-EU AI businesses with customers in the EU can also benefit from the assessment and testing of their AI systems. Technical evaluation and stress testing help identify weaknesses and build better AI products. In particular, third-party providers can lend expertise and experience to the evaluation methods and tests. The independence of third-party assessments can also help AI businesses build a more transparent AI supply chain and strengthen credibility and trust with their end-users.

* The information in this article is accurate at the time of publication.

[1] “Opacity is another main characteristic of some of the AI based products and systems that may result from the ability to improve their performance by learning from experience. Depending on the methodological approach, AI-based products and systems can be characterised by various degrees of opacity. This may lead to a decision making process of the system difficult to trace (‘black box-effect’).” - Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics

[2] Art. 6(1)(f), and the following from the explanatory memorandum: “Product safety legislation does not contain specific provisions on liability of businesses, but refers to the fact that the PLD applies when a defective product causes damage... A number of legislative proposals are currently under negotiation in the area of product safety: [including the draft EU AI Act]” - EU Commission’s proposal for the revised Product Liability Directive.