Who's going to pay when thinking machines fail?

Artificial intelligence (AI) has revolutionized the way we live and work, but alongside this progress comes complex questions about responsibility. In this article, we take a closer look at the EU's proposed solutions for addressing liability for AI failures.
Published: 10.05.24

With systems ranging from autonomous vehicles to automated decision-making tools, we face challenges in determining who should be held accountable when AI systems cause damages. Who is responsible when autonomous buses cause accidents? Who is responsible when automated credit assessments lead to incorrect payments?

The starting point for claiming compensation for financial damages is that one must prove a basis for liability. The basis of liability may be negligence or strict liability (which does not require fault on anyone's part). The claimant must also prove a causal link between the damaging incident and the resulting damage. If your neighbor backs into your car, establishing liability (in this instance negligence) and causation causes little trouble: the dent in your bumper is a direct result of that neighbor's negligent driving.

When it comes to artificial intelligence, this causal link is harder to prove. Firstly, artificially intelligent systems often suffer from a lack of transparency. The algorithms that guide AI systems' decisions, as well as the input data on which they are based, may be inaccessible and hard to understand for users and other stakeholders. Secondly, it can be difficult to establish negligence or causation even with access to all available information. Artificial intelligence often works based on statistical correlations and can operate autonomously, making it challenging to predict system behavior and to identify which factors have contributed to a given damage.

The EU's proposed solution

To meet these challenges, the EU has proposed two new directives: an updated Product Liability Directive and the new AI Liability Directive.

Traditional product liability is a form of strict liability, where the manufacturer is liable for damages caused by defects in the product, regardless of whether the manufacturer has been negligent or has exhibited other culpability.

The European Commission's proposal for an updated Product Liability Directive aims to expand the group of who may be held liable beyond just the manufacturer. Furthermore, the Commission clarifies that the liability regime covers software, AI systems and digital services necessary for a product to function.

The changes mean that manufacturers of physical products that contain artificial intelligence, as well as manufacturers of pure AI systems, will be liable for damage caused by faults or defects in the products, regardless of whether the manufacturer has exhibited culpability.

Although one does not have to prove negligence, one must still prove that the product had the defect in question, that the damage has materialized, and that there is a causal link between the defect and the damage. The proposed directive contains several evidentiary rules meant to ease this burden of proof, including rules stating that in certain situations a defect, or a causal link between a product's defect and an injury, must be presumed. For example, if the product is so complex that it is excessively difficult for the claimant to present evidence, it may be presumed that the product has a defect or that there is a causal link.

In addition to the new Product Liability Directive, the EU proposes another new directive, the AI Liability Directive, which aims to adapt rules on non-contractual liability for damages caused by artificial intelligence.

The proposed AI Liability Directive states that, under certain conditions, a causal link must be presumed between the output of the artificially intelligent system and the damage it has caused. The person held liable is given the right to present evidence to rebut such a presumption. In this way, part of the burden of proof is shifted from the claimant to the defendant. If the defendant fails to produce documentation when a court has requested it, it may also be presumed that the defendant has exhibited culpability.

The purpose of these rules is to neutralize the informational advantage manufacturers have in documenting and understanding the function and decision-making processes of AI systems. The question is whether the AI Liability Directive, as proposed, goes far enough in protecting those who suffer damage from AI systems. As long as claimants must still prove fault, they face significant challenges. The Norwegian Consumer Council, among others, has argued that operators and manufacturers of AI systems should be held liable on the basis of strict liability, i.e. so that users do not have to prove negligence.

What does this mean for Norwegian businesses?

The proposed directives from the EU aim to keep liability law abreast of technological developments by ensuring that claimants have sufficient opportunities to claim compensation for damage caused by artificially intelligent systems. By expanding the responsibility of the manufacturer and lowering evidentiary requirements for claimants, the EU hopes to create a fairer compensation regime.

However, ensuring effective regulation of AI is no simple undertaking. Like for the AI Act, one of the biggest challenges is balancing the need to protect users' rights with the need for innovation and development in the AI sector.
Although the proposed directives may be amended before they become final, organisations should already start adapting to the consequences of the new rules. The liability rules are linked to the AI Act, which imposes security requirements and seeks to prevent damage associated with AI systems; its provisions must be taken into account already when systems are developed. When it comes to potential liability for damages, Norwegian businesses should begin to consider how to allocate liability in contracts, and how liability may be insured. In addition, manufacturers should, among other things, develop good routines for testing their products and good manuals for end users, in order to reduce the risk of compensation claims.

This article is also published on Digi.no
