Vocabulary of natural language processing

Concept information

Preferred term

pre-trained language model  

Definition

  • A large language model or model components that have already been trained. (Based on Google for Developers, Machine Learning Glossary)
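
As an illustration only (not part of the vocabulary entry): a minimal sketch of reusing already-trained model components, assuming the Hugging Face Transformers library and the publicly available bert-base-uncased checkpoint (BERT is cited in the examples below); any comparable checkpoint would do.

    # Minimal sketch: load already-trained weights and reuse them to encode text.
    # Assumes the Hugging Face Transformers library and the public
    # "bert-base-uncased" checkpoint (an assumption for illustration).
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    # Encode a sentence with the pre-trained model; no further training is
    # needed to obtain contextual representations.
    inputs = tokenizer("Pre-trained language models reuse learned weights.",
                       return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (batch size, sequence length, hidden size)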

Broader concept

Narrower concepts

Alternative labels

  • pretrained language model

Example

  • Past work measures the perplexity using a pre-trained language model to gauge the fluency or grammatical correctness of the style transferred outputs. (Narasimhan, Dey & Desarkar, 2022) (see the sketch after this list)
  • Pre-trained language models (PLMs) may fail in giving reliable estimates of their predictive uncertainty. (Chen, Yuan, Cui, Liu & Ji, 2023)
  • Pre-trained language models reach state-of-the-art results in most current natural language processing (NLP) tasks. (Helcl & Libovický, 2023)
  • The system is based on the pre-trained language model BERT (Devlin, Chang, Lee & Toutanova, 2018) with external knowledge. (Zhao, Xiong & Tang, 2020)
  • To our knowledge there is no domain-specific pre-trained language model on the chemistry corpus. (Wang, Hu, Song, Garg, Xiao & Han, 2021)
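
As an illustration of the perplexity measurement mentioned in the first example above: a minimal sketch assuming the Hugging Face Transformers library and the public gpt2 checkpoint; the cited work may use a different model or procedure.

    # Minimal sketch: score sentence fluency by perplexity under a
    # pre-trained causal language model (here the public "gpt2" checkpoint,
    # an assumption, not the setup of the cited work).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # Perplexity = exp(mean negative log-likelihood per token).
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # Supplying the input ids as labels makes the model return the
            # cross-entropy loss over the sequence.
            loss = model(**enc, labels=enc["input_ids"]).loss
        return torch.exp(loss).item()

    # Lower perplexity suggests the sentence is more fluent under the model.
    print(perplexity("The cat sat on the mat."))
    print(perplexity("Mat the on sat cat the."))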

In other languages

URI

http://data.loterre.fr/ark:/67375/8LP-FMRM91HV-N

Download this concept:

RDF/XML TURTLE JSON-LD

Last modified: 13/5/24