Concept information
Preferred term
one-hot encoding
Broader concept
Example
- Here we use one-hot encoding to transform the relations into integer vectors. (Chernyavskiy, Ilvovsky & Nakov, 2024)
- Note that DIET can incorporate pre-trained word and sentence embeddings from language models as dense features with the flexibility to combine these with token level one-hot encodings and multi-hot encodings of character n-grams as sparse features. (Okur, Sahay & Nachman, 2022)
- One-hot encoding is employed in classification tasks assigning values from 1 to 4 as labels to train the classifiers where each classifier is tailored to a different retrieval noise type. (Fang, Bai, Ni, Yang, Chen & Xu, 2024)
- We embed POS tags as vectors using one-hot encoding. (Chernyavskiy & Ilvovsky, 2020)
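The examples above all map a categorical label (a relation, a POS tag, a noise type) to a vector with a single 1 at the label's index and 0 elsewhere. A minimal sketch in plain Python, using a small hypothetical tag set for illustration (not taken from the cited works):

```python
def one_hot(label, vocabulary):
    """Return a 0/1 vector with a single 1 at the label's index."""
    vector = [0] * len(vocabulary)
    vector[vocabulary.index(label)] = 1
    return vector

# Hypothetical POS-tag vocabulary for illustration.
tags = ["NOUN", "VERB", "ADJ", "ADV"]
print(one_hot("VERB", tags))  # → [0, 1, 0, 0]
```

In practice, libraries such as scikit-learn (`OneHotEncoder`) provide the same transformation with handling for unseen categories and sparse output.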
In other languages
- French
URI
http://data.loterre.fr/ark:/67375/8LP-Z0KFTC6G-W