Concept information
Preferred term
ALUM
Definition
- Training method for neural language models which regularizes the training objective by applying perturbations in the embedding space that maximize the adversarial loss (Liu et al., 2020).
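The core idea in the definition, finding a small perturbation of the input embedding that maximizes the loss, can be sketched as a single normalized gradient-ascent step. This is an illustrative toy (a logistic loss on one vector, with assumed names such as `adversarial_perturbation` and `eps`), not the paper's actual implementation.

```python
import numpy as np

def loss(embedding, w, label):
    # Toy logistic loss on a single "token embedding".
    logit = embedding @ w
    p = 1.0 / (1.0 + np.exp(-logit))
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

def adversarial_perturbation(embedding, w, label, eps=0.1):
    # One gradient-ascent step in the embedding space: move in the
    # direction that increases the loss, with the step norm bounded by eps.
    logit = embedding @ w
    p = 1.0 / (1.0 + np.exp(-logit))
    grad = (p - label) * w              # d(loss) / d(embedding)
    norm = np.linalg.norm(grad) + 1e-12
    return eps * grad / norm

rng = np.random.default_rng(0)
x = rng.normal(size=8)   # stand-in for a token embedding
w = rng.normal(size=8)   # stand-in for frozen model weights
y = 1

delta = adversarial_perturbation(x, w, y)
print(loss(x + delta, w, y) > loss(x, w, y))  # the perturbation raises the loss
```

In the full method, the model is then trained to make consistent predictions on the clean and perturbed embeddings, which acts as a regularizer.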
Broader concept
Synonym(s)
- Adversarial training for large neural LangUage Models
Defining context(s)
- ALUM (Liu et al. 2020) is the state-of-the-art adversarial training method for neural language models which regularizes fine-tuning via perturbations in the embedding space. (Chen, Shen, Chen & Yang, 2021)
Exemple
- Both ALUM and InfoBERT take RoBERTa-large as the backbone model. (Chen, Zhang & Zhao, 2022)
Translations
-
French
URI
http://data.loterre.fr/ark:/67375/8LP-FZWTVBWP-M