News


From Ambition to Action. A High Level Conference on AI

On 14 September 2021 our member Marco Bona represented PEOPIL (the Pan-European Organisation of Personal Injury Lawyers) as a speaker at the event “From Ambition to Action. A High Level Conference on AI”, organised by the European Commission and the Slovenian Presidency of the Council of the European Union (see https://digital-strategy.ec.europa.eu/en/news/high-level-conference-weigh-eu-and-international-work-artificial-intelligence). In particular, Marco Bona attended the afternoon “Break-out session on liability”, which focused on possible changes to the Product Liability Directive (Directive 85/374/EEC) with respect to AI.


When: 16-09-2021

Marco Bona, also in the light of PEOPIL’s “Response to the EU consultation on Artificial Intelligence. Liability and insurance for personal injury and death damages caused by ai artefacts/systems” (September 2020, https://www.peopil.com/document/3692, drafted together with Philip Mead and Mark Harvey), addressed the issue of the reversal of the burden of proof in relation to damages caused by AI systems. He outlined that:

-) this issue should be addressed together with the definition of “defective product” and the expression “put into circulation”;

-) the injured party, whether the primary or the secondary victim, should only have to prove the damage and the mere factual link between the harm and the AI artefact/system, without any need to prove the underlying facts as to the conduct or operation of the AI artefact/system, or the reason or dynamics, including those internal to the AI artefact/system, behind the occurrence of the accident; in other words, it should be sufficient to prove the “implication” of the AI artefact/system.

Marco Bona also indicated that the Product Liability Directive should not be revised by introducing different liability rules depending on the particular product involved in the accident, as this would cause undesirable fragmentation of the product liability regime.

During the session, concerns were raised about the scope of producers’ liability for cybercrimes affecting the functioning of AI systems. Marco Bona stressed the necessity of also addressing the liability of subjects other than producers (for example, professional operators) as well as, in relation to the victims of cybercrimes, the possibility of introducing State compensation schemes similar to the one established by Directive 2004/80/EC relating to compensation to crime victims. However, he cautioned that State compensation should not become a way of having the Member States pay for damages caused by producers and/or professional operators. The insurance of AI systems should also be taken into consideration.

The high-level event dedicated to AI followed the European Commission’s Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, presented in April 2021 (COM(2021) 206 final).

As announced during the session on liability, this proposal will be followed by a review of the existing product safety legislation and by an initiative in 2022 on EU rules to address liability issues related to new technologies, including AI systems.

The session was attended by Felicia Stoica, Deputy Head of Unit, DG GROW, European Commission; Ioana Mazilescu, Deputy Head of Unit, Contract Law, DG JUST, European Commission; Eleonora Rajneri, Professor of Private Law, Università del Piemonte Orientale and Université Paris Dauphine; Timotej Globačnik, Director of Research and Development, Gorenje (APPLiA); and Ursula Pachl, Deputy Director General, BEUC.

On 20 October 2020 the European Parliament approved its resolution with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)). This proposal, which addresses the liability of operators of AI systems, contains some critical points, including the distinction between “high-risk” and “low-risk” AI systems (the latter being excluded from the proposed strict liability regime), as well as the failure to include compensation for non-pecuniary (or “immaterial”) damages among the losses to be compensated for in personal injury and death cases.

Presently, there is still no sufficiently well-established and common legal background to permit legislative intervention by the European Union legislature in respect of specific detailed provisions on categories of recoverable loss, methods of assessment (including criteria for medico-legal evaluation), minimum levels of awards for general damages, secondary victims entitled to compensation, etc.