Concept information
Preferred term
neural language model
Definition
- A language model that uses a neural network architecture to understand and generate human-like language.
Broader concept
Examples
- Fine-tuning is a popular domain adaptation method which trains a neural language model in two phases: first maximizing the likelihood of the generic set D (pre-training), then optimizing the likelihood of the target domain set T (fine-tuning). (Grangier & Iter, 2022)
- For instance, the discovery that some word types are repeated in predictions more than others can help regularize neural language models by making them adaptive to the different word types. (Hajipoor, Amiri, Rahgozar & Oroumchian, 2019)
- However, most research in NLP has focused on the English language (Bender, 2011), and as a consequence many other languages lack sufficient resources, in particular benchmarks for neural language models. (Osório, Leite, Lopes Cardoso, Gomes, Rodrigues, Santos & Branco, 2024)
- We see that the neural language model provides further performance gains compared to the N-gram model without introducing any bad endings. (Guo, Chang, Yu & Bai, 2018)
- We train our neural language model based on the MSCOCO caption corpus with an LSTM unit. (Guo, Chang, Yu & Bai, 2018)
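The two-phase pre-train/fine-tune recipe described in the Grangier & Iter example can be sketched with a toy next-token model. This is a minimal illustration only: the vocabulary, the two corpora, and the single-matrix bigram parameterization are assumptions chosen for brevity, not details from the cited paper.

```python
import numpy as np

def train(corpus_ids, W, lr=0.1, epochs=50):
    """Maximize likelihood of a corpus with a toy bigram model:
    logits for the next token are the row W[prev]."""
    for _ in range(epochs):
        for prev, nxt in zip(corpus_ids, corpus_ids[1:]):
            logits = W[prev]
            p = np.exp(logits - logits.max())
            p /= p.sum()
            grad = p.copy()
            grad[nxt] -= 1.0          # gradient of -log p[nxt] w.r.t. logits
            W[prev] -= lr * grad
    return W

def nll(corpus_ids, W):
    """Average negative log-likelihood of a corpus under the model."""
    total = 0.0
    for prev, nxt in zip(corpus_ids, corpus_ids[1:]):
        logits = W[prev]
        p = np.exp(logits - logits.max())
        p /= p.sum()
        total -= np.log(p[nxt])
    return total / (len(corpus_ids) - 1)

# Hypothetical toy data: D is the generic set, T the target-domain set.
D = [0, 1, 2, 0, 1, 2, 0, 1]   # generic corpus (token ids)
T = [0, 2, 0, 2, 0, 2]         # target-domain corpus

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(3, 3))

W = train(D, W)                # phase 1: pre-training on D
pre_nll = nll(T, W)
W = train(T, W)                # phase 2: fine-tuning on T
post_nll = nll(T, W)
```

After the second phase, the model's negative log-likelihood on the target corpus T drops below its post-pre-training value, which is the behavior the fine-tuning recipe is designed to achieve.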
Translations
- (French)
URI
http://data.loterre.fr/ark:/67375/8LP-JWLRG1XS-N