Concept information
Preferred term
transformer
Definition
- "sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention." (Vaswani et al., 2017, p. 10).
Broader concept
Synonym(s)
- self-attention model
- transformer network
- transformer neural network
Belongs to group
Bibliographic reference(s)
- Bhatia, S., & Richie, R. (2024). Transformer networks of human conceptual knowledge. Psychological Review, 131(1), 271–306. https://doi.org/10.1037/rev0000319
  [Study type: empirical study / Access: closed]
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. arXiv:1706.03762 [cs]. http://arxiv.org/abs/1706.03762
  [Study type: software description / Access: open]
Creator
- Frank Arnould
Model of
Translations
French
- modèle auto-attentif
- modèle d'auto-attention
- transformer
URI
http://data.loterre.fr/ark:/67375/P66-XCCLZSQ5-8