11 August 2016
Tal Linzen et al., « Quantificational features in distributional word representations », HAL-SHS : linguistique, ID : 10.18653/v1/S16-2001
Do distributional word representations encode the linguistic regularities that theories of meaning argue they should encode? We address this question in the case of the logical properties (monotonicity, force) of quantificational words such as everything (in the object domain) and always (in the time domain). Using the vector offset approach to solving word analogies, we find that the skip-gram model of distributional semantics behaves in a way that is remarkably consistent with encoding these features in some domains, with accuracy approaching 100%, especially with medium-sized context windows. Accuracy in other domains was less impressive. We compare the performance of the model to the behavior of human participants, and find that humans performed well even where the models struggled.
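The vector offset approach mentioned in the abstract solves an analogy a : b :: c : ? by finding the vocabulary word whose vector is closest (by cosine similarity) to v_b - v_a + v_c. A minimal sketch of that procedure, using toy hand-picked vectors in place of real skip-gram embeddings (the vocabulary and vector values below are purely illustrative, not from the paper):

```python
import numpy as np

# Toy embedding table standing in for skip-gram vectors.
# Values are hypothetical, chosen so the analogy works out.
vocab = {
    "something":  np.array([1.0, 0.0, 0.0]),
    "everything": np.array([1.0, 1.0, 0.0]),
    "sometimes":  np.array([0.0, 0.0, 1.0]),
    "always":     np.array([0.0, 1.0, 1.0]),
}

def solve_analogy(a, b, c, vocab):
    """Vector offset method: return the word w (excluding the three
    query words) maximizing cos(v_b - v_a + v_c, v_w)."""
    target = vocab[b] - vocab[a] + vocab[c]
    best, best_sim = None, -np.inf
    for w, v in vocab.items():
        if w in (a, b, c):
            continue  # standard practice: exclude the query words
        sim = np.dot(target, v) / (np.linalg.norm(target) * np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# something : everything :: sometimes : ?
print(solve_analogy("something", "everything", "sometimes", vocab))  # → always
```

Here the offset everything - something plays the role of a "universal force" direction, which the method transfers from the object domain to the time domain; the paper tests how reliably real skip-gram spaces support such transfers.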