Explaining Predictive Models with Mixed Features Using Shapley Values and Conditional Inference Trees

Document record

Date

25 August 2020

Relations

This document is related to:
info:eu-repo/semantics/altIdentifier/doi/10.1007/978-3-030-57321-8_7

Collection

Archives ouvertes

Licences

http://creativecommons.org/licenses/by/, info:eu-repo/semantics/OpenAccess




Cite this document

Annabelle Redelmeier et al., "Explaining Predictive Models with Mixed Features Using Shapley Values and Conditional Inference Trees", HAL-SHS : sciences de l'information, de la communication et des bibliothèques, ID: 10.1007/978-3-030-57321-8_7



Abstract (En)

It is becoming increasingly important to explain complex, black-box machine learning models. Although there is an expanding literature on this topic, Shapley values stand out as a sound method to explain predictions from any type of machine learning model. The original development of Shapley values for prediction explanation relied on the assumption that the features being described were independent. This methodology was then extended to explain dependent features with an underlying continuous distribution. In this paper, we propose a method to explain mixed (i.e., continuous, discrete, ordinal, and categorical) dependent features by modeling the dependence structure of the features using conditional inference trees. We demonstrate our proposed method against the current industry standards in various simulation studies and find that our method often outperforms the other approaches. Finally, we apply our method to a real financial data set used in the 2018 FICO Explainable Machine Learning Challenge and show how our explanations compare to those of the FICO challenge Recognition Award-winning team.
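
For readers unfamiliar with the framework the abstract refers to: the Shapley value of feature j decomposes a single prediction f(x*) over the M features via a contribution function v(S) defined as a conditional expectation,

\phi_j = \sum_{S \subseteq \{1,\dots,M\} \setminus \{j\}} \frac{|S|!\,(M-|S|-1)!}{M!} \bigl( v(S \cup \{j\}) - v(S) \bigr), \qquad v(S) = \mathbb{E}\bigl[ f(x) \mid x_S = x_S^* \bigr].

The paper's contribution is to estimate v(S) when the features are dependent and of mixed type, by fitting the conditional distribution of the remaining features with conditional inference trees and averaging f over samples drawn from it (the authors provide an implementation in the R package shapr). Below is a minimal Python sketch of that Monte Carlo estimate; the helper sample_conditional is a hypothetical stand-in for the tree-based conditional sampler, and the exact enumeration over subsets is only practical for small M:

import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(f, x_star, sample_conditional, n_mc=200):
    """Monte Carlo Shapley values for the single prediction f(x_star).

    f                  : callable mapping an (n, M) array to n predictions
    x_star             : (M,) NumPy array, the instance being explained
    sample_conditional : assumed helper; sample_conditional(S, x_star, n)
                         returns an (n, M) array whose columns in S are fixed
                         to x_star[S] and whose remaining columns are drawn
                         conditionally on them (the paper models this
                         conditional with conditional inference trees)
    """
    M = len(x_star)

    def v(S):
        # Contribution function v(S) = E[f(x) | x_S = x_star_S].
        if len(S) == M:
            return float(f(x_star[None, :])[0])
        return float(f(sample_conditional(S, x_star, n_mc)).mean())

    phi = np.zeros(M)
    for j in range(M):
        others = [k for k in range(M) if k != j]
        for size in range(M):  # |S| = 0, ..., M-1
            w = factorial(size) * factorial(M - size - 1) / factorial(M)
            for S in combinations(others, size):
                phi[j] += w * (v(S + (j,)) - v(S))
    return phi

Note that this brute-force loop visits all 2^(M-1) subsets per feature; practical implementations instead sample coalitions when M is large.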
