

Principle:LaurentMazare Tch rs Token Embedding



Knowledge Sources
Domains: NLP, Neural_Network_Layers
Last Updated: 2026-02-08 14:00 GMT

Overview

Lookup table layer that maps discrete token indices to dense continuous vector representations.

Description

An embedding layer stores a weight matrix of shape [vocab_size, embedding_dim] in which each row is a learned vector representation for one token. During a forward pass, integer token indices index directly into this matrix (a lookup operation, not a matrix multiplication), producing dense vectors that capture semantic meaning. Embedding layers are the standard first layer of a language model, converting discrete token IDs into the continuous space that neural network computation requires.
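
A minimal sketch of this lookup with tch-rs, assuming a recent version of the tch crate (the sizes, variable names, and the "tok_emb" path are illustrative, not taken from this page):

  use tch::{nn, nn::Module, Device, Tensor};

  fn main() {
      // Hypothetical sizes chosen for illustration.
      let vocab_size: i64 = 1000;
      let embedding_dim: i64 = 64;

      // The VarStore owns the learnable [vocab_size, embedding_dim] weight matrix.
      let vs = nn::VarStore::new(Device::Cpu);
      let emb = nn::embedding(vs.root() / "tok_emb", vocab_size, embedding_dim, Default::default());

      // Token IDs of shape [batch, seq_len] = [2, 3]; values must lie in [0, vocab_size).
      let token_ids = Tensor::from_slice(&[1i64, 5, 42, 7, 0, 999]).reshape(&[2, 3]);

      // Forward pass: a per-index row lookup, not a matrix multiplication.
      let out = emb.forward(&token_ids);
      assert_eq!(out.size(), vec![2, 3, embedding_dim]);
  }

Older tch releases spell the tensor constructor Tensor::of_slice rather than Tensor::from_slice; the lookup itself is unchanged.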

Usage

Use as the first layer of any language model to convert token IDs to dense representations. The embedding dimension should match the model's hidden dimension.
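
As a hypothetical illustration of that dimension matching (names and sizes invented for this sketch), the embedding output below feeds straight into a linear layer whose input width equals the embedding dimension:

  use tch::{nn, nn::Module, Device, Tensor};

  // Toy two-layer model: the embedding output dimension doubles as the
  // hidden dimension expected by the projection layer that follows it.
  fn tiny_lm(vs: &nn::Path, vocab_size: i64, hidden_dim: i64) -> impl Module {
      nn::seq()
          // [batch, seq_len] -> [batch, seq_len, hidden_dim]
          .add(nn::embedding(vs / "tok_emb", vocab_size, hidden_dim, Default::default()))
          // [batch, seq_len, hidden_dim] -> [batch, seq_len, vocab_size]
          .add(nn::linear(vs / "proj", hidden_dim, vocab_size, Default::default()))
  }

  fn main() {
      let vs = nn::VarStore::new(Device::Cpu);
      let model = tiny_lm(&vs.root(), 1000, 64);
      let ids = Tensor::from_slice(&[1i64, 5, 42, 7, 0, 999]).reshape(&[2, 3]);
      assert_eq!(model.forward(&ids).size(), vec![2, 3, 1000]);
  }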

Theoretical Basis

Embedding Operation:
  Given token IDs [batch, seq_len] with values in [0, vocab_size)
  Weight matrix W of shape [vocab_size, embedding_dim]

  output[b, t] = W[token_ids[b, t]]  (simple row lookup)

  Output shape: [batch, seq_len, embedding_dim]
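
The same operation can be written out at the tensor level with index_select, which gathers rows of W by index; a sketch under the same illustrative assumptions as above:

  use tch::{Device, Kind, Tensor};

  fn main() {
      // Hypothetical sizes: W is [vocab_size, embedding_dim] = [10, 4].
      let (vocab_size, embedding_dim) = (10i64, 4i64);
      let w = Tensor::randn(&[vocab_size, embedding_dim], (Kind::Float, Device::Cpu));

      // Token IDs of shape [batch, seq_len] = [2, 3], values in [0, vocab_size).
      let ids = Tensor::from_slice(&[1i64, 5, 9, 0, 3, 7]).reshape(&[2, 3]);

      // output[b, t] = W[ids[b, t]]: flatten the IDs, gather rows of W, reshape back.
      let out = w
          .index_select(0, &ids.reshape(&[-1]))
          .reshape(&[2, 3, embedding_dim]);
      assert_eq!(out.size(), vec![2, 3, embedding_dim]);
  }

In effect, an embedding layer performs this gather while holding W as a trainable parameter.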
