Evaluating the Quality of Answers in Political Q&A Sessions with Large Language Models

Document record

Date

12 April 2024

Identifier
  • 2404.08816

Collection

arXiv

Organization

Cornell University

Cite this document

R. Michael Alvarez et al., « Evaluating the Quality of Answers in Political Q&A Sessions with Large Language Models », arXiv - Economics



Abstract

This paper presents a new approach to evaluating the quality of answers in political question-and-answer sessions. We propose to measure an answer's quality based on the degree to which it allows us to infer the initial question accurately. This conception of answer quality inherently reflects an answer's relevance to the question that prompted it. Drawing parallels with semantic search, we argue that this measurement approach can be operationalized by fine-tuning a large language model on the observed corpus of questions and answers without additional labeled data. We showcase our measurement approach within the context of the Question Period in the Canadian House of Commons. Our approach yields valuable insights into the correlates of the quality of answers in the Question Period. We find that answer quality varies significantly based on the party affiliation of the members of Parliament asking the questions and uncover a meaningful correlation between answer quality and the topics of the questions.
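As a rough illustration of how such a question-recoverability measure could be operationalized in the semantic-search style the abstract describes, the sketch below fine-tunes an off-the-shelf sentence-embedding model on observed (answer, question) pairs and scores each answer by how closely its embedding matches that of the question it responded to. The library (sentence-transformers), base model, toy data, and cosine-similarity scoring are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch: fine-tune an embedding model so answers embed close to the
# questions that prompted them, then score answers by question recoverability.
# Everything here (model choice, toy data, scoring) is an assumption for
# illustration only.
from sentence_transformers import SentenceTransformer, InputExample, losses, util
from torch.utils.data import DataLoader

# Toy stand-ins for the observed corpus of question-answer pairs.
qa_pairs = [
    ("What is the government's plan to reduce housing costs?",
     "We are investing in new affordable housing units across the country."),
    ("When will the budget return to balance?",
     "Our fiscal plan projects a balanced budget within five years."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed base model

# Contrastive fine-tuning: each answer is the anchor, its question the positive.
train_examples = [InputExample(texts=[answer, question]) for question, answer in qa_pairs]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=2)
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_loader, loss)], epochs=1, warmup_steps=0)

# Score each answer by the cosine similarity between its embedding and the
# embedding of the question it was given in response to.
questions = [q for q, _ in qa_pairs]
answers = [a for _, a in qa_pairs]
q_emb = model.encode(questions, convert_to_tensor=True)
a_emb = model.encode(answers, convert_to_tensor=True)
sims = util.cos_sim(a_emb, q_emb)  # rows: answers, columns: questions

for i in range(len(qa_pairs)):
    # Higher similarity = the initial question is easier to recover from the answer.
    print(f"Answer {i}: quality score = {sims[i, i].item():.3f}")
```

In this reading, an evasive or off-topic answer would embed far from its originating question and receive a low score, while a responsive answer would make the question easy to recover.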

