Concept information
Preferred term
word embedding
Definition
- "Word embeddings are low-dimensional numeric representations of words generated by artificial intelligence (AI) methods that capture word co-occurrence statistics. The assumption in these models is that words located in close proximity to one another in the vector space are semantically similar. The similarity between two word meanings, such as "plate" and "bowl", can be quantified by taking the cosine distance between the corresponding vectors in the model." (Caliskan & Lewis, 2020, p. 3).
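The definition above quantifies word similarity via the cosine of the angle between embedding vectors. A minimal sketch, using made-up 3-dimensional vectors for illustration (real embeddings typically have hundreds of dimensions, and these values are not drawn from any trained model):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings for the definition's example words.
plate = [0.8, 0.1, 0.3]
bowl = [0.7, 0.2, 0.4]

similarity = cosine_similarity(plate, bowl)
cosine_distance = 1 - similarity  # the "cosine distance" used in the definition
```

Semantically similar words yield a cosine similarity close to 1 (cosine distance close to 0); unrelated words yield values near 0.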
Broader concept
Belongs to group
Bibliographic reference(s)
- Caliskan, A., & Lewis, M. (2020). Social biases in word embeddings and their relation to human cognition. PsyArXiv. https://doi.org/10.31234/osf.io/d84kg
  • Document type: empirical study
  • Access: open
- Kumar, A. A. (2021). Semantic memory: A review of methods, models, and current challenges. Psychonomic Bulletin & Review, 28(1), 40–80. https://doi.org/10.3758/s13423-020-01792-x
  • Document type: literature review
  • Access: open
- Lake, B. M., & Murphy, G. L. (2023). Word meaning in minds and machines. Psychological Review, 130(2), 401–431. https://doi.org/10.1037/rev0000297
  • Document type: literature review
  • Access: closed
Creator
- Frank Arnould
Model of
Translations
- French: plongement de mots
URI
http://data.loterre.fr/ark:/67375/P66-M75L9P53-N