01.08.2024
While recent attention has focused on the EU's Artificial Intelligence Act ("EU AI Act"), two complementary pieces of legislation have been progressing alongside it. A revised Product Liability Directive and a new AI Liability Directive have been proposed to make it easier for consumers and businesses to bring claims involving software, digital services, and AI.
This article addresses the implications of these proposals and provides recommendations going forward.
The EU Product Liability Directive establishes a consumer protection regime that makes it easier to obtain compensation from manufacturers of defective products. It was enacted in 1985, in a technological landscape very different from today's. Defects at the time would typically have been physical defects in hardware products, such as faulty brakes that cause road accidents or broken appliances that give nasty electrical shocks.
Technology has changed rapidly since. Today, software and connected services often determine the features of a physical product and control how it operates. Software includes operating systems, firmware, computer programs, applications, and AI systems. Connected services are digital services integrated into, or inter-connected with, a product in such a way that the absence of the service would prevent the product from performing one or more of its functions, such as a traffic data service for a navigation system or a voice-assistant service for a voice-controlled product. Consequently, defects in software and connected services can cause injury or material loss to a consumer just as easily as defects in hardware components.
Proposed revisions to the EU Product Liability Directive are intended to address this new technology landscape, bringing software and connected services expressly into the fold. AI systems are no exception: the revisions would treat AI as a type of software or connected service, depending on how it is provided.
AI systems also present unique problems. Modern AI systems are not coded line by line like traditional software. They are mathematical models optimised to capture the correlation between a given input and the resultant output. The inner workings of an AI system can be opaque, giving rise to what is commonly referred to as the "black-box effect".[1]
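To illustrate the point (a minimal, hypothetical sketch using scikit-learn, not drawn from the legislative texts): even a small trained neural network encodes its behaviour in learned numerical weights rather than in rules a reviewer can read.

```python
# Minimal illustration of the "black-box effect": a small neural network is
# trained to correlate inputs with outputs, and its decision logic ends up
# encoded in opaque numerical weights rather than explicit, reviewable rules.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                      random_state=0).fit(X, y)

# No line of this program states why an input is classified one way or the
# other; the behaviour lives entirely in the learned parameters below.
n_weights = sum(layer.size for layer in model.coefs_)
print(f"Decision logic encoded in {n_weights} learned weights")
print(model.coefs_[0][:2, :4])  # a few raw weights: just numbers
```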
The technical complexity and interconnectivity of modern digital products and AI systems make it costly to establish claims against providers when things go awry. The proposed changes in the EU are intended to help claimants seeking compensation do so more effectively.
Once revised, the EU Product Liability Directive will expressly cover software, connected services, and AI systems. Among other changes, limits on compensation for property damage will be removed: consumers will be entitled to full compensation for both personal injury and any consequential property damage.
The AI Liability Directive, on the other hand, focuses specifically on claims against providers of AI systems for their actions. It complements the impending updates to the EU Product Liability Directive by addressing the knowledge and information gap between claimants and the providers of AI systems, for consumers and businesses alike.
One of the main impacts of the AI Liability Directive, once passed, will be to lower the barrier to establishing that an AI system's harmful behaviour was caused by the system provider's actions or failure to act.
Typically, when a defective product causes harm, a claimant must be able to point to a specific failure of the provider, such as a fault in the design or a mistake in the production run, and show that this failure caused the claimant's harm. However, the inner workings of an AI system are shrouded by the "black-box effect", making it difficult to connect the autonomous actions of an AI system with a specific failure in its design or development.
The AI Liability Directive gives claimants the benefit of the doubt when it is excessively difficult for them to prove this link. The provider is instead required to prove its innocence. This mechanism also applies to high-risk AI systems where the provider fails to comply with the safety requirements in the EU AI Act.
Consequently, providers of AI systems will be required to explain how their actions affect the behaviour of their AI systems, in order to show that those actions did not cause the system to behave in a manner that was harmful to the claimant. Failure to do so will be held against the provider.
AI businesses based in the EU, especially providers of high-risk AI systems, are likely to find their processes and practices subject to increased scrutiny and in need of justification.
To manage this, the first step is to be clear about the applicable legal obligations and to carry out the technical tests necessary to achieve compliance. The EU AI Act imposes data quality and governance requirements as well as performance requirements, such as measures of an AI system's accuracy, robustness, and resilience. The accompanying EU liability proposals make it easier to obtain compensation from AI businesses when their systems fail to meet these compliance requirements.
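As a rough sketch of what such testing might look like in practice (illustrative only: the EU AI Act does not prescribe these particular tests, and the dataset, noise level, and tolerance below are hypothetical choices):

```python
# Illustrative compliance-style checks: accuracy on clean test data, and a
# simple robustness proxy (accuracy under Gaussian input perturbation).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Accuracy on clean, held-out test data.
clean_acc = accuracy_score(y_test, model.predict(X_test))

# Robustness proxy: how much accuracy degrades when inputs are perturbed.
rng = np.random.default_rng(0)
X_noisy = X_test + rng.normal(scale=0.1, size=X_test.shape)
robust_acc = accuracy_score(y_test, model.predict(X_noisy))

tolerance = 0.10  # hypothetical threshold, set by your own risk assessment
print(f"clean accuracy: {clean_acc:.3f}, noisy accuracy: {robust_acc:.3f}")
print("PASS" if clean_acc - robust_acc <= tolerance else "FAIL")
```

Real compliance testing would of course be broader, covering data quality and governance as well, but the principle is the same: define measurable criteria, then test against them.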
Next, conduct and document compliance testing and assessments regularly, so as to be ready to respond under pressure. The EU proposals give claimants greater access to information about AI systems to support their claims, and AI businesses can more easily be called upon to justify their approach. Robust, well-documented compliance assessments and testing can help vindicate the design and development of AI systems and pave the way to a swifter and more favourable resolution for the provider.
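One lightweight way to keep such records (a minimal sketch: the file name, record fields, and values are illustrative assumptions, not a prescribed format) is an append-only, timestamped log of every assessment run:

```python
# Record each compliance test run as a timestamped JSON Lines entry, so
# results can be produced on demand if the provider's approach is questioned.
import json
from datetime import datetime, timezone
from pathlib import Path

def log_assessment(test_name: str, metric: str, value: float, passed: bool,
                   log_path: Path = Path("compliance_log.jsonl")) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "test": test_name,
        "metric": metric,
        "value": value,
        "passed": passed,
    }
    with log_path.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Example entries (placeholder values) for the checks sketched earlier.
log_assessment("clean_test_set", "accuracy", 0.957, passed=True)
log_assessment("gaussian_noise_0.1", "accuracy", 0.921, passed=True)
```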
Non-EU AI businesses with customers in the EU can also benefit from the assessment and testing of their AI systems. Technical evaluation and stress testing help identify weaknesses and build better AI products. In particular, third-party providers can lend expertise and experience to the evaluation methods and tests. The independence of third-party assessments can also help AI businesses build a more transparent AI supply chain and strengthen credibility and trust with end-users.
* The information in this article is accurate at the time of publication.
[1] “Opacity is another main characteristic of some of the AI based products and systems that may result from the ability to improve their performance by learning from experience. Depending on the methodological approach, AI-based products and systems can be characterised by various degrees of opacity. This may lead to a decision making process of the system difficult to trace (‘black box-effect’).” - Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics
[2] Art. 6(1)(f), and the following from the explanatory memorandum: “Product safety legislation does not contain specific provisions on liability of businesses, but refers to the fact that the PLD applies when a defective product causes damage... A number of legislative proposals are currently under negotiation in the area of product safety: [including the draft EU AI Act]” - EU Commission’s proposal for the revised Product Liability Directive.