Open access, CC BY 4.0: https://creativecommons.org/licenses/by/4.0/
Ljubiša Bojić, « Large Language Models Observatory: Creating New Benchmarks for AI Alignment in Sentiment Analysis of Socially Critical Issues », Repository of the Institute for Philosophy and Social Theory of the University of Belgrade, ID : 10670/1.prtir6
This lecture inquires into the increasingly vital subject of Large Language Models (LLMs) and their profound influence on society. As artificial intelligence systems are progressively integrated into our societies, the need to critically understand and measure their impact continues to grow. Our key focus will be to initiate the development of a benchmark for evaluating the sentiment of various LLMs, using methodologies such as a Likert-scale survey. We will detail the analysis of seven LLMs, including GPT-4 and Bard, and contrast their sentiment data with that of three distinct human sample populations. In addition, we will analyse temporal variations in sentiment over three consecutive days. The lecture concludes with an exploration of potential conflicts of interest, possible biases in LLMs, and a discussion of how these systems might subtly shape societal perceptions. Join us as we unravel how AI, mirroring human cognitive processes, could develop sentiments of its own and influence our opinions.
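To make the benchmark idea concrete, the sketch below illustrates one way such a Likert-scale comparison could be set up: a survey item is posed repeatedly to several models and the resulting mean scores are compared with a human sample mean. The model names, the `query_llm` stub, and all numbers are hypothetical placeholders for illustration only, not data or code from the lecture.

```python
"""Minimal sketch of a Likert-scale sentiment benchmark for LLMs.
All models, responses, and human data below are illustrative placeholders."""
from statistics import mean

LIKERT_PROMPT = (
    "On a scale from 1 (very negative) to 5 (very positive), "
    "how would you rate the overall societal impact of social media? "
    "Answer with a single number."
)

def query_llm(model: str, prompt: str) -> int:
    # Placeholder: a real benchmark would call each model's API here
    # and parse the numeric answer; we return canned values for the demo.
    canned = {"model-a": 2, "model-b": 3, "model-c": 2}
    return canned[model]

def llm_likert_scores(models, prompt, repeats=3):
    """Collect repeated Likert responses per model (e.g. on consecutive days)."""
    return {m: [query_llm(m, prompt) for _ in range(repeats)] for m in models}

if __name__ == "__main__":
    models = ["model-a", "model-b", "model-c"]
    scores = llm_likert_scores(models, LIKERT_PROMPT)

    # Hypothetical human sample ratings for the same survey item.
    human_sample = [3, 4, 2, 3, 4, 3]
    human_mean = mean(human_sample)

    for m, vals in scores.items():
        gap = mean(vals) - human_mean
        print(f"{m}: mean={mean(vals):.2f}  gap vs. human sample={gap:+.2f}")
```

Repeating the same item over several days, as the lecture describes, would simply mean re-running the collection step and comparing the per-day means.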