@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ltk: <http://data.loterre.fr/ark:/67375/LTK> .
@prefix dc: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

<http://data.loterre.fr/ark:/67375/8LP-GKGZ083L-Z>
  a skos:Concept ;
  skos:prefLabel "embedding"@en, "plongement"@fr ;
  skos:narrower <http://data.loterre.fr/ark:/67375/8LP-M1NFTGZ7-1> .

<http://data.loterre.fr/ark:/67375/8LP-SGWQ4KZK-Q>
  a skos:Concept ;
  skos:prefLabel "target word embedding vector"@en, "vecteur de plongement des mots cibles"@fr ;
  skos:broader <http://data.loterre.fr/ark:/67375/8LP-M1NFTGZ7-1> .

<http://data.loterre.fr/ark:/67375/8LP-M1NFTGZ7-1>
  a skos:Concept ;
  skos:inScheme <http://data.loterre.fr/ark:/67375/8LP> ;
  skos:prefLabel "word embedding"@en, "plongement lexical"@fr ;
  skos:altLabel "word embedding"@fr, "plongement de mot"@fr ;
  skos:hiddenLabel "Word embedding"@en, "Plongement lexical"@fr ;
  skos:definition "Process by which words are mapped into real-valued vectors. (ARTES)"@en, "Procédé par lequel un mot est représenté par un vecteur de nombres réels. (ARTES)"@fr ;
  skos:note "Dans ces méthodes, un mot est projeté sur une représentation latente par un vecteur appelé word embeddings qui est capable de capturer la sémantique contextuelle des mots. (Nguyen, G. H. et al., Modèle Neuronal de Recherche d'Information Augmenté par une Ressource Sémantique, Conférence francophone en Recherche d'Information et Applications (CORIA 2017))"@fr, "Each word in the vocabulary is represented by a vector w ∈ RD, where D is the dimension fixed in advance. One of the major advantages of representing words as vectors is the fact that standard similarity measures such as cosine similarity or Euclidean distance can be used, enabling semantic distances to be calculated between words. Contrary to what we may be led to think by the recent popularity surge for word embeddings, the use of compact, vectorial word representations is by no means new, and the theoretical underpinnings can be traced back at least to the 1950s and the theory of distributional semantics. The distributional hypothesis, the idea that you can define a word by the company it keeps (Harris, 1954), popularised in the 1950s by philosophers and linguists such as Harris (1954), Firth (1957) and Wittgenstein (1953), has been influential in the way textual input is represented in NLP [...] (Bawden, Rachel, Going beyond the sentence : Contextual Machine Translation of Dialogue, Université Paris-Saclay, 2018)"@en ;
  skos:broader <http://data.loterre.fr/ark:/67375/8LP-GKGZ083L-Z> ;
  skos:narrower <http://data.loterre.fr/ark:/67375/8LP-SGWQ4KZK-Q> ;
  skos:exactMatch <http://www.wikidata.org/entity/Q18395344>, <http://data.loterre.fr/ark:/67375/LTK-SCTX2SHF-V> ;
  dc:modified "2024-10-23T12:48:40"^^xsd:dateTime .

<http://data.loterre.fr/ark:/67375/8LP> a owl:Ontology, skos:ConceptScheme .
