Feature importance in the age of explainable AI: Case study of detecting fake news & misinformation via a multi-modal framework

Document record

Date

4 October 2023

Relations

This document is related to:
info:eu-repo/semantics/altIdentifier/doi/10.1016/j.ejor.2023.10.003

Collection

Open archives



Cite this document

Ajay Kumar et al., « Feature importance in the age of explainable AI : Case study of detecting fake news & misinformation via a multi-modal framework », HAL-SHS : économie et finance, ID : 10.1016/j.ejor.2023.10.003



Abstract

In recent years, fake news has become a global phenomenon due to its explosive growth and its ability to leverage multimedia content to manipulate user opinions. Fake news is created by manipulating images, text, audio, and video, particularly on social media, and the proliferation of such disinformation can trigger detrimental societal effects. False forwarded messages can have a devastating impact on society, spreading propaganda, inciting violence, manipulating public opinion, and even influencing elections. A major shortcoming of existing fake news detection methods is their inability to simultaneously learn and extract features from two modalities and to train models on shared representations of multimodal (textual and visual) information. Feature engineering is a critical task in the machine learning (ML) development process of a fake news detection model. For ML models to be explainable and trusted, feature engineering should describe how the features used in the ML models contribute to making more accurate predictions. Because it shapes the features used in ML models, feature engineering plays an important role in developing an explainable AI system and is closely tied to explainability, as it affects a model's interpretability. In this research, we develop a fake news detection model in which we (1) identify several textual and visual features associated with fake or credible news; specifically, we extract features from article titles, contents, and top images; (2) investigate the role of all multimodal features (content-, emotion-, and manipulation-based) and combine their cumulative effects through feature engineering to represent the behavior of fake news propagators; and (3) develop a model to detect disinformation on benchmark multimodal datasets consisting of text and images.
We conduct experiments on several real-world multimodal fake news datasets, and our results show that, on average, our model outperforms by large margins existing single-modality methods that do not use any feature optimization techniques.
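The abstract describes fusing textual and visual features into a shared representation and using feature importance as an explainability signal. The sketch below illustrates that general idea only; it is not the authors' pipeline. It assumes scikit-learn, a random-forest classifier as a stand-in model, and random placeholder vectors in place of real NLP and vision feature extractors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder features for the two modalities: each article gets a
# textual vector (e.g. from its title/content) and a visual vector
# (e.g. from its top image). Real systems would compute these with
# NLP and computer-vision extractors.
n_articles = 500
text_features = rng.normal(size=(n_articles, 32))
image_features = rng.normal(size=(n_articles, 16))
labels = rng.integers(0, 2, size=n_articles)  # 0 = credible, 1 = fake

# Early fusion: concatenate both modalities into one shared representation.
fused = np.concatenate([text_features, image_features], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.2, random_state=0
)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Per-feature importance scores: a basic explainability signal showing
# how much each textual vs. visual feature contributes to predictions.
importances = clf.feature_importances_
```

On real (non-random) features, inspecting `importances` separately over the first 32 (textual) and last 16 (visual) positions would show which modality drives the classifier's decisions.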
