Concept information
Preferred term
ALUM
Definition
- Adversarial training method for neural language models that regularizes the training objective by applying perturbations in the embedding space that maximize the adversarial loss (Liu et al., 2020).
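The mechanism in the definition can be illustrated with a minimal sketch: find a small embedding-space perturbation that maximizes the loss, then evaluate the loss at the perturbed point. This toy example uses a linear-softmax model with an analytic gradient and a single normalized gradient-ascent step; ALUM itself uses multi-step PGD-style ascent on a KL-divergence objective inside full fine-tuning, which this simplification does not reproduce. All names here (`adversarial_perturbation`, the toy weights `W`) are illustrative, not from the cited papers.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(W, x, y):
    # Loss of a linear-softmax "model" at embedding x with gold label y.
    p = softmax(W @ x)
    return -np.log(p[y])

def adversarial_perturbation(W, x, y, eps=0.1):
    # For a linear-softmax model, the gradient of the cross-entropy
    # w.r.t. the embedding is W^T (p - onehot(y)).
    p = softmax(W @ x)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    g = W.T @ (p - onehot)
    # One normalized ascent step of size eps (ALUM runs several such steps).
    return eps * g / (np.linalg.norm(g) + 1e-12)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))   # toy 5-dim embedding -> 3 classes
x = rng.normal(size=5)        # a "clean" embedding
y = 1                         # gold label

delta = adversarial_perturbation(W, x, y)
clean = cross_entropy(W, x, y)
adv = cross_entropy(W, x + delta, y)  # perturbed loss; >= clean loss here
```

Because cross-entropy is convex in the embedding for this linear model, the gradient step provably does not decrease the loss; the adversarial regularizer then penalizes the gap between clean and perturbed predictions.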
Broader concept
Synonym(s)
- Adversarial training for large neural LangUage Models
Definitional context(s)
- ALUM (Liu et al., 2020) is the state-of-the-art adversarial training method for neural language models, which regularizes fine-tuning via perturbations in the embedding space. (Chen, Shen, Chen & Yang, 2021)
Example
- Both ALUM and InfoBERT take RoBERTa-large as the backbone model. (Chen, Zhang & Zhao, 2022)
In other languages
- (French)
URI
http://data.loterre.fr/ark:/67375/8LP-FZWTVBWP-M