20 October 2022
https://creativecommons.org/licenses/by-nc-nd/4.0/ (open access)
Rocco Tripodi, "How Contextualized Word Embeddings Represent Word Senses", Accademia University Press, ID: 10.4000/books.aaccademia.10872
Contextualized embedding models, such as ELMo and BERT, make it possible to construct vector representations of lexical items that adapt to the context in which the words appear. Previous work has demonstrated that the upper layers of these models capture semantic information, and this evidence paved the way for sense representations based on words in context. In this paper, we analyze the vector spaces produced by 11 pre-trained models and evaluate these representations on two tasks. The analysis shows that all of these representations contain redundant information, and the results highlight the drawbacks of this redundancy.
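To illustrate the idea of context-dependent word representations described above, here is a minimal sketch (not the paper's code) that extracts the vector of a word in context from an upper layer of a pre-trained BERT model using the Hugging Face transformers library. The model name, layer index, pooling strategy, and example sentences are illustrative assumptions, not choices taken from the paper.

```python
# Sketch: extracting a contextualized word vector from an upper BERT layer.
# Assumptions: bert-base-uncased, second-to-last layer, mean pooling over
# word pieces. None of these choices are claimed to match the paper's setup.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def word_vector(sentence: str, word: str, layer: int = -2) -> torch.Tensor:
    """Return the embedding of `word` in `sentence` from a chosen hidden layer.

    If the tokenizer splits the word into several word pieces, their
    vectors are averaged (a common, but not the only, pooling choice).
    """
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden_states = model(**enc).hidden_states  # tuple: embeddings + 12 layers
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    pieces = tokenizer.tokenize(word)
    # Positions of the word pieces belonging to the target word.
    idxs = [i for i, t in enumerate(tokens) if t in pieces]
    return hidden_states[layer][0, idxs].mean(dim=0)

# The same surface form receives different vectors in different contexts,
# which is what allows these vectors to serve as sense representations.
v_river = word_vector("He sat on the bank of the river.", "bank")
v_money = word_vector("She deposited cash at the bank.", "bank")
sim = torch.cosine_similarity(v_river, v_money, dim=0)
print(f"cosine similarity across contexts: {sim:.3f}")
```

In a sketch like this, a low cosine similarity between the two occurrences of "bank" would reflect the context sensitivity that the abstract attributes to the upper layers of such models.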