Large Language Models Observatory: Creating New Benchmarks for AI Alignment in Sentiment Analysis of Socially Critical Issues

Document record

License

Open access; licensed under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)




Cite this document

Ljubiša Bojić, "Large Language Models Observatory: Creating New Benchmarks for AI Alignment in Sentiment Analysis of Socially Critical Issues", Repository of the Institute for Philosophy and Social Theory, University of Belgrade, ID: 10670/1.prtir6



Abstract

This lecture inquires into the increasingly vital subject of Large Language Models (LLMs) and their profound influence on society. As artificial intelligence systems are progressively integrated into our societies, the need to critically understand and measure their impacts grows accordingly. Our key focus will be to initiate the development of a benchmark for evaluating the sentiment of various LLMs, using methodologies such as the Likert-scale survey. We will detail the analysis of seven LLMs, including GPT-4 and Bard, and contrast their sentiment data with that of three distinct human sample populations. In addition, we will analyse temporal variations in sentiment over three consecutive days. The lecture concludes with an exploration of potential conflicts of interest, possible biases in LLMs, and a discussion of how these systems might subtly shape societal perceptions. Join us as we examine how AI, mirroring human cognitive processes, could develop distinct sentiments and influence our opinions.
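
The benchmark described above rests on comparing Likert-scale sentiment ratings elicited from LLMs with ratings from human respondents. As a minimal sketch of that idea (not the study's actual protocol), the Python example below compares hypothetical 1-5 ratings from one LLM and one human sample on a single topic, using mean scores and a rank-based test; the topic, the ratings, and the choice of test are illustrative assumptions only.

```python
# Minimal sketch: comparing Likert-scale sentiment ratings from an LLM
# and a human sample on one socially critical topic.
# All topic names and ratings below are hypothetical placeholders.

from statistics import mean
from scipy.stats import mannwhitneyu  # rank-based test, suited to ordinal Likert data

# Hypothetical 1-5 Likert ratings (1 = very negative, 5 = very positive)
# for a single topic, e.g. "climate policy".
llm_ratings = [4, 4, 5, 3, 4, 4, 5, 4]      # repeated prompts to one LLM
human_ratings = [3, 2, 4, 3, 3, 2, 4, 3]    # responses from a human sample

print(f"LLM mean sentiment:   {mean(llm_ratings):.2f}")
print(f"Human mean sentiment: {mean(human_ratings):.2f}")

# Rank-based comparison of the two rating distributions; a small p-value
# would suggest the LLM's sentiment diverges from the human sample here.
stat, p_value = mannwhitneyu(llm_ratings, human_ratings, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
```

In practice, such a comparison would be repeated across many topics, several LLMs, and multiple days to track the temporal variation the lecture mentions.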
