In September, the European Commission published a draft proposal for a directive on Artificial Intelligence (AI) liability, which will supplement the AI Act currently under negotiation.
In a representative 2020 poll (European enterprise survey on the use of technologies based on AI, Ipsos 2020), European companies cited liability as one of the top three barriers to AI adoption, alongside the lack of public or external funding. Among companies that are planning to adopt AI but have not yet done so, strict standards for data exchange were cited as the most significant external barrier (43%).
Current national liability rules, in particular those based on fault, are not suited to handling claims for damage caused by AI-enabled products and services.
Under the current rules, those suffering damage must prove an unlawful act or omission by the person who caused the harm. The specific characteristics of AI, including complexity, autonomy and opacity, may often make it difficult or prohibitively expensive for victims to identify the liable party and prove the elements of a successful liability claim. When seeking compensation, victims may incur very high upfront costs and face significantly longer legal proceedings than in cases not involving AI. Victims may therefore be reluctant to claim compensation at all.
These concerns were also stressed by the European Parliament (EP) in its resolution of 3 May 2022 on artificial intelligence in the digital age. When a person claims compensation, national courts, confronted with the specific characteristics of AI, may have to adapt the way they apply existing rules on an ad hoc basis to reach a fair outcome for the claimant, but this creates legal uncertainty.
Companies struggle to predict how existing liability rules will be applied, and thus to estimate and insure against their liability exposure.
The effect is amplified for businesses operating across borders, as the uncertainty extends to several jurisdictions, bringing with it the need for additional information and legal representation, higher risk-management costs and lost revenue. At the same time, divergent national rules on claims for damage caused by AI create considerable obstacles to the internal market.
Legal uncertainty and fragmentation mainly affect start-ups and SMEs, which account for most of the companies and the majority of investment in the markets concerned.
The aim of this initiative is therefore to foster the rollout of trustworthy AI in order to maximize its benefits for the internal market. To this end, the proposal will ensure that victims of damage caused by AI receive protection equivalent to that of victims of damage caused by products in general. It also reduces the legal uncertainty faced by companies that develop or use AI regarding their potential liability exposure, and prevents the emergence of fragmented, AI-specific adaptations of national liability rules.
This initiative aims to boost confidence in AI and enhance its widespread use, as well as to strengthen the Union’s role in the definition of global norms and standards and in promoting reliable AI that is in line with the Union’s values and interests.
In terms of social impact, the directive will strengthen society’s trust in AI technologies and improve access to an effective justice system. It will support the creation of an efficient liability regime adapted to the specific characteristics of AI. This increased societal confidence would also benefit all businesses in the AI value chain, as greater public trust will contribute to a faster adoption of AI.