Which Liability Laws for Artificial Intelligence?

Document record

Date

2024

Collection

Archives ouvertes

License

info:eu-repo/semantics/OpenAccess




Cite this document

Eric Langlais et al., "Which Liability Laws for Artificial Intelligence?", HAL-SHS : économie et finance, ID: 10670/1.4uvswn



Abstract

This paper studies how the combination of Product Liability and Tort Law shapes a monopoly's incentives to invest in R&D for developing risky AI-based technologies ("robots") that may accidentally cause harm to third-party victims. We assume that, at the engineering stage, robots are designed with two alternative modes of motion (fully autonomous vs. human-driven), corresponding to optimized performance in predefined circumstances. In the autonomous mode, the monopoly (i.e. the AI designer) faces Product Liability and undertakes maintenance expenditures to mitigate victims' expected harm. In the human-driven mode, AI users face Tort Law and exert a level of care to reduce victims' expected harm. In this set-up, the AI designer chooses efficient maintenance and AI users choose efficient care whatever liability rule is enforced in each area of law (strict liability or negligence). However, both overinvestment and underinvestment in R&D may occur at equilibrium, whether liability laws rely on strict liability or on negligence, and whether or not the monopoly uses price discrimination. The first-best level of R&D investment is reached at equilibrium only if, simultaneously, the monopoly uses (perfect) price discrimination, a regulator sets output at the socially optimal level, and courts implement strict liability in both Tort Law and Product Liability.
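
The abstract's claim that maintenance and care are efficient under either liability rule follows the standard unilateral-accident logic of law and economics. The LaTeX sketch below is only an illustration under assumed notation (a single risk-reducing choice x, accident probability p(x), harm h); it is not the paper's model.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative unilateral-accident sketch; notation assumed, not taken from the paper.
% x = risk-reducing effort (the designer's maintenance in the autonomous mode,
% or the user's care in the human-driven mode); p(x) = accident probability,
% decreasing and convex; h = victims' harm.
Social cost is $SC(x) = x + p(x)\,h$, minimized at the efficient effort $x^{*}$ solving
\begin{equation*}
  1 + p'(x^{*})\,h = 0 .
\end{equation*}
Under strict liability the injurer bears $x + p(x)h$ and therefore chooses $x^{*}$.
Under negligence with the due-care standard set at $x^{*}$, the injurer bears $x$
if $x \ge x^{*}$ and $x + p(x)h$ otherwise; since $x + p(x)h \ge x^{*} + p(x^{*})h > x^{*}$,
the cost-minimizing choice is again $x = x^{*}$.
\end{document}

On this logic, either rule aligns risk-reducing effort, while the monopoly's pricing, output, and R&D decisions are left unconstrained, which is consistent with the abstract's finding that over- and underinvestment in R&D can still arise at equilibrium.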
