A Projected Stochastic Gradient Algorithm for Estimating Shapley Value Applied in Attribute Importance

Document record

Date

25 August 2020

Relations

This document is related to:
info:eu-repo/semantics/altIdentifier/doi/10.1007/978-3-030-57321-8_6

Collection

Open archives

Licenses

http://creativecommons.org/licenses/by/ , info:eu-repo/semantics/OpenAccess



Related subjects (En)

Standard of value

Cite this document

Grah Simon et al., « A Projected Stochastic Gradient Algorithm for Estimating Shapley Value Applied in Attribute Importance », HAL-SHS : sciences de l'information, de la communication et des bibliothèques, ID : 10.1007/978-3-030-57321-8_6





Abstract (En)

Machine Learning is enjoying increasing success in many applications: medical, marketing, defence, cyber security, transportation. It is becoming a key tool in critical systems. However, models are often very complex and highly non-linear. This is problematic, especially for critical systems, because end-users need to fully understand the decisions of an algorithm (e.g. why an alert has been triggered or why a person has a high probability of cancer recurrence). One solution is to offer an interpretation of each individual prediction based on attribute relevance. Shapley Values, which originate in cooperative game theory, fairly distribute contributions among the attributes in order to explain the difference between the predicted value for an observation and a base value (e.g. the average prediction of a reference population). While these values have many advantages, including their theoretical guarantees, they are very hard to compute: the complexity grows exponentially with the dimension (the number of variables). In this article, we propose two novel methods to approximate these Shapley Values. The first is an optimization of an existing Monte Carlo scheme that reduces the number of prediction function calls. The second is based on a projected stochastic gradient algorithm. For the second approach, we prove probability bounds and convergence rates for the approximation errors according to the type of learning rate used. Finally, we carry out experiments on simulated datasets for a classification task and a regression task. We show empirically that these approaches outperform the classical Monte Carlo estimator in terms of convergence rate and number of prediction function calls, which is the major bottleneck in Shapley Value estimation for our application.
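The classical Monte Carlo estimator that the abstract takes as the baseline samples random permutations of the attributes and credits each attribute with its marginal change in the prediction when it is switched from the reference point to the observation. The sketch below is only illustrative (the function names `shapley_monte_carlo`, `f`, `x_ref` are assumptions, not the authors' implementation); it shows why each permutation costs one prediction call per attribute, the bottleneck the article targets.

```python
import numpy as np

def shapley_monte_carlo(f, x, x_ref, n_iter=1000, rng=None):
    """Estimate Shapley values for one observation `x` by sampling
    random attribute permutations (classical Monte Carlo scheme).

    f     : black-box prediction function taking a 1-D attribute vector
    x     : observation to explain (1-D array of length d)
    x_ref : reference point, e.g. attribute-wise mean of a population
    """
    rng = np.random.default_rng(rng)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_iter):
        perm = rng.permutation(d)
        z = np.array(x_ref, dtype=float)   # start from the base value
        prev = f(z)
        # Switch attributes from the reference to the observation, one
        # at a time in the sampled order; the marginal gain is credited
        # to the attribute just switched in (d prediction calls per
        # permutation -- the cost the article seeks to reduce).
        for j in perm:
            z[j] = x[j]
            cur = f(z)
            phi[j] += cur - prev
            prev = cur
    return phi / n_iter
```

By construction the contributions of each permutation telescope, so the estimate always satisfies the efficiency property: the components of `phi` sum to `f(x) - f(x_ref)`, the predicted-minus-base difference the Shapley decomposition explains.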

