LLM Construction
Transformer Architecture
The mathematical formulation of the transformer block: self-attention, multi-head attention, layer normalization, FFN blocks, positional encoding, and parameter counting.
Prerequisites
Why This Matters
The transformer is the architecture behind every modern large language model: GPT-4, Claude, Gemini, Llama. It replaced recurrent and convolutional architectures for sequence modeling because of one key property: self-attention allows every token to attend to every other token in parallel, enabling the model to learn long-range dependencies without the vanishing gradient problem of RNNs.
Understanding the transformer mathematically (not just as a diagram, but as a sequence of matrix operations with specific dimensions, costs, and properties) is essential for understanding everything built on top of it: RLHF, mechanistic interpretability, scaling laws, and efficiency research.
Mental Model
A transformer processes a sequence of tokens by passing them through a stack of identical blocks. Each block has two sub-layers: a self-attention layer (which lets tokens communicate with each other) and a feed-forward network (which processes each token independently). Residual connections and layer normalization stabilize training.
Self-attention is the key innovation. Each token creates a query ("what am I looking for?"), a key ("what do I contain?"), and a value ("what do I contribute?"). Tokens attend to each other based on query-key similarity, and the output is a weighted sum of values.
Formal Setup and Notation
Let the input sequence have $n$ tokens, each represented as a $d$-dimensional vector. The input is a matrix $X \in \mathbb{R}^{n \times d}$.
Self-Attention
Scaled Dot-Product Attention
Given an input $X \in \mathbb{R}^{n \times d}$, compute queries, keys, and values:
$$Q = XW^Q, \qquad K = XW^K, \qquad V = XW^V$$
where $W^Q, W^K \in \mathbb{R}^{d \times d_k}$ and $W^V \in \mathbb{R}^{d \times d_v}$ are learned weight matrices.
The attention output is:
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V$$
where the softmax is applied row-wise (each row sums to 1).
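A minimal NumPy sketch of this formula (illustrative shapes, not a production implementation):

```python
# Scaled dot-product attention: Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n, n) attention logits
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights          # (n, d_v) output

rng = np.random.default_rng(0)
n, d_k, d_v = 5, 16, 16
Q = rng.normal(size=(n, d_k))
K = rng.normal(size=(n, d_k))
V = rng.normal(size=(n, d_v))
out, w = attention(Q, K, V)              # out: (5, 16), w: (5, 5)
```

Each row of `w` is a probability distribution over the $n$ keys, matching the row-wise softmax above.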
Attention Dimensions
Statement
For input $X \in \mathbb{R}^{n \times d}$:
- $Q, K \in \mathbb{R}^{n \times d_k}$, $V \in \mathbb{R}^{n \times d_v}$
- $QK^\top \in \mathbb{R}^{n \times n}$: the attention matrix
- $\mathrm{softmax}(QK^\top/\sqrt{d_k}) \in \mathbb{R}^{n \times n}$: each row is a probability distribution
- $\mathrm{Attention}(Q, K, V) \in \mathbb{R}^{n \times d_v}$: the output
The output of attention for token $i$ is a weighted average of value vectors:
$$\mathrm{output}_i = \sum_{j=1}^{n} \alpha_{ij}\, v_j, \qquad \alpha_{ij} = \frac{\exp(q_i \cdot k_j / \sqrt{d_k})}{\sum_{l=1}^{n} \exp(q_i \cdot k_l / \sqrt{d_k})}$$
Intuition
Each token computes a query and compares it against all keys via dot products. The softmax converts these similarity scores into attention weights $\alpha_{ij}$. The output is a weighted sum of value vectors, where tokens with similar query-key pairs get higher weight. The $\sqrt{d_k}$ scaling prevents the dot products from becoming too large (which would cause the softmax to saturate).
Why It Matters
Tracking dimensions through the transformer is the single most useful exercise for understanding the architecture. Every research paper assumes you can do this fluently. The $n \times n$ attention matrix is both the source of the transformer's power (global context) and its main computational bottleneck.
Why scale by $\sqrt{d_k}$? If the entries of $q$ and $k$ are independent with zero mean and unit variance, then $q \cdot k = \sum_{i=1}^{d_k} q_i k_i$ has variance $d_k$. Without scaling, large $d_k$ causes the dot products to have large magnitude, pushing the softmax into regions with near-zero gradients. Dividing by $\sqrt{d_k}$ normalizes the variance to 1, keeping the softmax in a useful range.
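The variance argument is easy to verify empirically; a quick Monte Carlo check with unit-variance entries:

```python
# Empirical check of the sqrt(d_k) scaling argument: with unit-variance q and k,
# the raw dot product q·k has variance ≈ d_k, and dividing by sqrt(d_k)
# brings the variance back to ≈ 1.
import numpy as np

rng = np.random.default_rng(0)
d_k = 256
q = rng.normal(size=(100_000, d_k))
k = rng.normal(size=(100_000, d_k))

raw = (q * k).sum(axis=-1)          # 100k samples of q·k
scaled = raw / np.sqrt(d_k)

print(raw.var(), scaled.var())      # ≈ d_k and ≈ 1
```

Without the scaling, logits with standard deviation $\sqrt{d_k} = 16$ would routinely saturate the softmax.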
Multi-Head Attention
Multi-Head Attention
Instead of computing a single attention function, use $h$ heads in parallel:
$$\mathrm{head}_i = \mathrm{Attention}(XW_i^Q,\; XW_i^K,\; XW_i^V), \qquad i = 1, \ldots, h$$
where $W_i^Q, W_i^K \in \mathbb{R}^{d \times d_k}$ and $W_i^V \in \mathbb{R}^{d \times d_v}$ with $d_k = d_v = d/h$.
Concatenate the heads and project:
$$\mathrm{MultiHead}(X) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\, W^O$$
where $W^O \in \mathbb{R}^{h d_v \times d}$.
Why multiple heads? Each head can attend to different aspects of the input: one head might focus on syntactic relationships, another on semantic similarity, another on positional proximity. Multi-head attention allows the model to jointly attend to information from different representation subspaces. Mechanistic interpretability work gives concrete examples of specialization: previous-token heads that copy information from position $i-1$ to position $i$ (Elhage et al., 2021), induction heads that implement in-context pattern completion (Olsson et al., 2022), and name mover heads that move subject tokens to the final position in the indirect object identification circuit (Wang et al., 2022).
Parameter count for MHA: Each head has $W_i^Q, W_i^K, W_i^V$, each of size $d \times (d/h)$. With $h$ heads, total QKV parameters are $3d^2$. The output projection adds $d^2$. Total: $4d^2$ parameters (ignoring biases).
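The head split is just a reshape of the same projection matrices; a sketch that runs all heads at once and checks the $4d^2$ parameter count:

```python
# Multi-head attention via reshaping: project once to d dimensions, split into
# h heads of size d/h, attend per head, concatenate, apply the output projection.
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def mha(X, Wq, Wk, Wv, Wo, h):
    n, d = X.shape
    d_k = d // h
    # Reshape (n, d) -> (h, n, d_k) so every head attends in one batched matmul.
    Q = (X @ Wq).reshape(n, h, d_k).transpose(1, 0, 2)
    K = (X @ Wk).reshape(n, h, d_k).transpose(1, 0, 2)
    V = (X @ Wv).reshape(n, h, d_k).transpose(1, 0, 2)
    w = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d_k))   # (h, n, n)
    heads = w @ V                                          # (h, n, d_k)
    concat = heads.transpose(1, 0, 2).reshape(n, d)        # concatenate heads
    return concat @ Wo

rng = np.random.default_rng(0)
n, d, h = 6, 64, 8
Wq, Wk, Wv, Wo = [rng.normal(size=(d, d)) * 0.1 for _ in range(4)]
X = rng.normal(size=(n, d))
Y = mha(X, Wq, Wk, Wv, Wo, h)
n_params = sum(W.size for W in (Wq, Wk, Wv, Wo))           # 4 * d^2
```

This also makes the later point concrete: using $h$ heads does not multiply the parameter or compute cost, it only partitions the $d$ dimensions.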
MQA and GQA: KV-cache memory optimizations
At inference time, autoregressive decoding caches the keys and values of every past token. For standard multi-head attention, the KV cache grows as $O(n \cdot L \cdot h \cdot d_k)$ per sequence ($L$ layers, $h$ heads), which becomes the dominant memory cost at long contexts and large batch sizes.
Multi-Query Attention
MQA (Shazeer, 2019) uses $h$ query heads but only one shared key head and one shared value head across all queries:
$$\mathrm{head}_i = \mathrm{Attention}(XW_i^Q,\; XW^K,\; XW^V)$$
The KV cache shrinks by a factor of $h$. Decoding throughput improves because each decode step reads far less memory. The quality cost versus full MHA is small but nonzero.
Grouped-Query Attention
GQA (Ainslie et al., 2023) interpolates between MQA and full MHA: the $h$ query heads are partitioned into $g$ groups, with one shared key-value head per group. Each K/V head is shared across $h/g$ query heads. $g = 1$ recovers MQA, $g = h$ recovers MHA. Llama 2 70B, Llama 3, and Mistral use GQA with $g$ chosen to balance quality and KV-cache cost.
See attention variants and efficiency for a broader survey of efficient-attention schemes.
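A back-of-envelope cache calculator makes the MHA/GQA/MQA trade-off concrete. The shapes below are Llama-2-70B-like (80 layers, 64 query heads of size 128, 8 KV groups) and fp16 storage is assumed:

```python
# KV-cache size per sequence: K and V for every layer and every KV head,
# at 2 bytes per value (fp16). Sharing KV heads shrinks the cache linearly.
def kv_cache_bytes(n_tokens, n_layers, kv_heads, d_k, bytes_per_val=2):
    return 2 * n_tokens * n_layers * kv_heads * d_k * bytes_per_val  # K and V

n, L, h, d_k = 4096, 80, 64, 128
mha = kv_cache_bytes(n, L, h, d_k)   # full MHA: 64 KV heads
gqa = kv_cache_bytes(n, L, 8, d_k)   # GQA with g = 8 groups
mqa = kv_cache_bytes(n, L, 1, d_k)   # MQA: one shared KV head

print(mha / 2**30, gqa / 2**30, mqa / 2**30)  # 10.0, 1.25, 0.15625 GiB
```

At a 4K context one sequence already costs 10 GiB of cache under full MHA; GQA with 8 groups cuts that 8x, which is why it dominates in current open-weight models.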
Residual Connections and Layer Normalization
Transformer Sub-Layer with Residual Connection
Each sub-layer (attention or FFN) is wrapped with a residual connection:
$$\mathrm{output} = x + \mathrm{Sublayer}(x)$$
This allows gradients to flow directly through the network and enables training of deep transformers.
Layer Normalization
For a vector $x \in \mathbb{R}^d$, layer normalization computes:
$$\mathrm{LayerNorm}(x) = \gamma \odot \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta$$
where $\mu = \frac{1}{d}\sum_{i=1}^{d} x_i$, $\sigma^2 = \frac{1}{d}\sum_{i=1}^{d} (x_i - \mu)^2$, and $\gamma, \beta \in \mathbb{R}^d$ are learned scale and shift parameters.
Root Mean Square Layer Normalization
RMSNorm (Zhang and Sennrich, 2019) drops the mean-subtraction step of LayerNorm and re-scales only by the root mean square of the activations:
$$\mathrm{RMSNorm}(x) = \frac{x}{\mathrm{RMS}(x)} \odot \gamma, \qquad \mathrm{RMS}(x) = \sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2 + \epsilon}$$
where $\gamma \in \mathbb{R}^d$ is a learned scale. No shift and no centering. RMSNorm is about 15 percent faster than LayerNorm in practice and matches or exceeds its quality, which is why it is the normalization used in Llama, Mistral, DeepSeek, and Qwen.
Pre-norm vs. post-norm. The original transformer (Vaswani et al., 2017) uses post-norm: $x \leftarrow \mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$. Most modern LLMs use pre-norm: $x \leftarrow x + \mathrm{Sublayer}(\mathrm{LayerNorm}(x))$. Pre-norm is more stable for training deep networks: the residual path stays unobstructed end to end.
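The two normalizations sit side by side in a few lines of NumPy (unit gain and zero shift here, just to show the operations):

```python
# LayerNorm vs. RMSNorm: RMSNorm drops the mean subtraction and the shift,
# keeping only the RMS re-scaling and a learned gain.
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def rms_norm(x, gamma, eps=1e-5):
    rms = np.sqrt((x ** 2).mean(axis=-1, keepdims=True) + eps)
    return gamma * x / rms

rng = np.random.default_rng(0)
d = 64
x = rng.normal(size=(4, d))
g, b = np.ones(d), np.zeros(d)
ln = layer_norm(x, g, b)   # zero mean, unit variance per row
rn = rms_norm(x, g)        # unit RMS per row, mean left alone
```

On inputs that already have near-zero mean the two nearly coincide, which is part of why dropping the centering costs so little.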
Feed-Forward Network
Position-wise Feed-Forward Network
The original 2017 transformer FFN applies two linear transformations with a nonlinearity:
$$\mathrm{FFN}(x) = \sigma(xW_1 + b_1)\,W_2 + b_2$$
where $W_1 \in \mathbb{R}^{d \times d_{ff}}$, $W_2 \in \mathbb{R}^{d_{ff} \times d}$, and $\sigma$ is ReLU (Vaswani et al., 2017) or GeLU in later variants. The standard choice is $d_{ff} = 4d$.
Parameter count for the 2017 FFN. $W_1$ has $d \cdot d_{ff}$ parameters, $W_2$ has $d_{ff} \cdot d$ parameters. With $d_{ff} = 4d$: total is $8d^2$ parameters (ignoring biases).
Gated FFN (SwiGLU / GeGLU)
Since 2023, every major frontier LLM (Llama 2, Llama 3, Mistral, Mixtral, DeepSeek, Qwen, Gemma) replaces the two-matrix FFN with a gated variant that has three projection matrices $W_1, W_2, W_3$:
$$\mathrm{FFN}_{\mathrm{gated}}(x) = \left(\sigma(xW_1) \odot xW_3\right) W_2$$
where $\odot$ is elementwise product, $W_1, W_3 \in \mathbb{R}^{d \times d_{ff}}$, $W_2 \in \mathbb{R}^{d_{ff} \times d}$, and $\sigma$ is SiLU (SwiGLU) or GeLU (GeGLU). The gate $\sigma(xW_1)$ modulates the activation $xW_3$ elementwise.
Parameter count for gated FFN. Three projections give $3 \cdot d \cdot d_{ff}$ parameters. To keep the parameter budget comparable to the 2017 FFN, Shazeer (2020) recommends $d_{ff} = \frac{2}{3} \cdot 4d = \frac{8}{3}d$, which yields $3 \cdot \frac{8}{3} d^2 = 8d^2$ parameters. This matches the legacy $8d^2$ per-layer formula used below. Llama 2 7B ($d = 4096$) picks $d_{ff} = 11008$, rounded up to a multiple of 256 for kernel alignment.
The role of the FFN. Geva et al. (2021) proposed that FFN layers act as key-value memories: $W_1$ maps inputs to a high-dimensional space where patterns are detected, and $W_2$ maps back to the residual stream with the associated information. Under this view, the FFN is where factual knowledge is primarily stored. This is an empirical interpretation from mechanistic interpretability, not a settled fact. Hase et al. (2023) show that the location where Causal Tracing localizes a fact is not a reliable predictor of which layer is best to edit, which complicates the simple "localization equals storage" reading.
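A sketch of the gated FFN and its parameter budget (small random weights for illustration; real models round $d_{ff}$ up to a hardware-friendly multiple):

```python
# SwiGLU-gated FFN: gate projection W1, up projection W3, down projection W2,
# with d_ff = (2/3) * 4d so the count matches the classic FFN's 8 * d^2.
import numpy as np

def silu(x):
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, W1, W2, W3):
    return (silu(x @ W1) * (x @ W3)) @ W2

rng = np.random.default_rng(0)
d = 512
d_ff = int(2 / 3 * 4 * d)            # 1365 here; Llama rounds to multiples of 256
W1 = rng.normal(size=(d, d_ff)) * 0.02
W3 = rng.normal(size=(d, d_ff)) * 0.02
W2 = rng.normal(size=(d_ff, d)) * 0.02
x = rng.normal(size=(3, d))

y = swiglu_ffn(x, W1, W2, W3)
n_params = W1.size + W2.size + W3.size    # ≈ 8 * d^2
```

The third matrix is paid for by shrinking $d_{ff}$, so the gated variant stays parameter-neutral relative to the 2017 design.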
Positional Encoding
Self-attention is permutation-equivariant: shuffling the input tokens shuffles the output tokens identically. Without positional information, the model cannot distinguish "the dog bit the man" from "the man bit the dog."
Sinusoidal Positional Encoding
The original transformer uses fixed sinusoidal encodings added to the input embeddings:
$$PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d}}\right)$$
for position $pos$ and dimension index $i$. This allows the model to attend to relative positions because $PE_{pos+k}$ is a linear function of $PE_{pos}$.
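The encoding table can be built in a few lines; even dimensions get the sine, odd dimensions the cosine:

```python
# Sinusoidal positional encodings with geometrically spaced frequencies
# 10000^(-2i/d), as in the original transformer.
import numpy as np

def sinusoidal_pe(n_pos, d):
    pos = np.arange(n_pos)[:, None]          # (n_pos, 1)
    i = np.arange(d // 2)[None, :]           # (1, d/2)
    angles = pos / (10000 ** (2 * i / d))    # (n_pos, d/2)
    pe = np.zeros((n_pos, d))
    pe[:, 0::2] = np.sin(angles)             # even dims: sin
    pe[:, 1::2] = np.cos(angles)             # odd dims: cos
    return pe

pe = sinusoidal_pe(128, 64)   # position 0 encodes as [0, 1, 0, 1, ...]
```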
Rotary Position Embedding (RoPE)
RoPE (Su et al., 2021) encodes position by rotating the query and key vectors in independent 2D subspaces:
$$q_m = R_m q, \qquad k_n = R_n k$$
where $R_m \in \mathbb{R}^{d_k \times d_k}$ is a block-diagonal matrix of planar rotations, with the $i$-th block a 2x2 rotation by angle $m\theta_i$ using the base frequencies
$$\theta_i = 10000^{-2i/d_k}.$$
Rotations that act on the same 2D subspace commute and satisfy $R_m^\top R_n = R_{n-m}$ block-by-block. Since $R_m$ is block-diagonal, the full matrix product also satisfies $R_m^\top R_n = R_{n-m}$, so the attention score becomes:
$$(R_m q)^\top (R_n k) = q^\top R_{n-m}\, k$$
The QK dot product depends only on the relative position $n - m$. This makes the attention pattern (the matrix of softmax weights) translation-equivariant in the relative-position sense; the full attention output also picks up content from $V$, which RoPE leaves unrotated, so the layer is not literally translation-invariant in input space; only its query-key similarity structure is. That is enough to give the model good length generalization without absolute positional embeddings.
Why RoPE dominates. RoPE naturally encodes relative positions (not absolute), extrapolates better to longer sequences than seen during training, and does not add parameters. It is used in Llama, Mistral, and most modern open-source LLMs.
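The relative-position property is easy to verify numerically; a minimal RoPE sketch that rotates each 2D pair and checks that the score depends only on the offset $n - m$:

```python
# RoPE: rotate q and k in 2D subspaces by position-dependent angles, then
# check that the rotated dot product depends only on the relative offset.
import numpy as np

def rope(x, pos, base=10000.0):
    d = x.shape[-1]
    theta = base ** (-np.arange(d // 2) * 2.0 / d)   # per-pair frequencies
    ang = pos * theta
    x1, x2 = x[0::2], x[1::2]                        # the 2D pairs
    r1 = x1 * np.cos(ang) - x2 * np.sin(ang)         # planar rotation
    r2 = x1 * np.sin(ang) + x2 * np.cos(ang)
    out = np.empty_like(x)
    out[0::2], out[1::2] = r1, r2
    return out

rng = np.random.default_rng(0)
q, k = rng.normal(size=64), rng.normal(size=64)

# Same offset (n - m = 7) at two different absolute positions -> same score.
s1 = rope(q, 3) @ rope(k, 10)
s2 = rope(q, 50) @ rope(k, 57)
```

Shifting both positions by the same amount leaves every query-key score unchanged, which is exactly the translation-equivariance claimed above.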
Attention with Linear Biases
ALiBi (Press, Smith, Lewis, 2022) is the canonical length-extrapolation alternative to RoPE. Instead of rotating queries and keys, ALiBi adds a fixed linear bias to the attention logits based on the relative position:
$$\mathrm{score}_{ij} = \frac{q_i \cdot k_j}{\sqrt{d_k}} - m \cdot (i - j)$$
where $m$ is a per-head slope fixed at initialization (heads get a geometric sequence of slopes). No positional embeddings are added to the inputs. The linear penalty gives tokens at increasing distance a monotonically decreasing prior, and the model extrapolates to sequence lengths well beyond the training window. ALiBi is used in BLOOM and MPT.
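A sketch of the bias construction. The slope schedule below (ratio $2^{-8/h}$, the paper's choice when $h$ is a power of two) is the only assumption beyond the formula above:

```python
# ALiBi bias: per-head slopes in a geometric sequence, with -slope * (i - j)
# added to the causal attention logits. No learned positional parameters.
import numpy as np

def alibi_slopes(h):
    # For h a power of two: 2^(-8/h), 2^(-16/h), ..., 2^(-8).
    start = 2.0 ** (-8.0 / h)
    return start ** np.arange(1, h + 1)

def alibi_bias(n, h):
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    dist = (i - j).clip(min=0)         # causal: only look back
    return -alibi_slopes(h)[:, None, None] * dist   # (h, n, n), all <= 0

bias = alibi_bias(8, 4)   # add to the (h, n, n) logits before the softmax
```

Heads with steep slopes focus on a narrow recent window; heads with shallow slopes see nearly the full context.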
Computational Complexity
Attention is Quadratic in Sequence Length
Statement
The computational cost of self-attention is:
$$O(n^2 d)$$
The $n^2$ factor comes from computing the attention matrix $QK^\top \in \mathbb{R}^{n \times n}$. The memory cost for storing attention weights is $O(n^2)$ per head.
For a full transformer with $L$ layers and $h$ heads:
- Attention cost per layer: $O(n^2 d + n d^2)$
- FFN cost per layer: $O(n\, d\, d_{ff}) = O(n d^2)$ with $d_{ff} = 4d$
- Total cost: $O\!\left(L\,(n^2 d + n d^2)\right)$
Intuition
Every token must attend to every other token, producing an $n \times n$ matrix. For short sequences ($n \ll d$), the FFN dominates. For long sequences ($n \gg d$), attention dominates. This is why extending context length is hard: doubling $n$ quadruples the attention cost.
Why It Matters
The quadratic cost is the fundamental bottleneck for long-context models. A model processing 100K tokens needs attention matrices with $10^{10}$ entries per layer. This has motivated extensive research into efficient attention: sparse attention, linear attention, FlashAttention (which reduces memory but not FLOPs), and sub-quadratic architectures like Mamba.
Failure Mode
The $O(n^2)$ scaling is for standard dense attention. Methods like FlashAttention reduce the memory cost from $O(n^2)$ to $O(n)$ by computing attention in tiles, but the compute cost remains $O(n^2 d)$. True sub-quadratic compute requires architectural changes (sparse or linear attention), which can reduce model quality.
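The attention/FFN crossover can be checked with a rough FLOP model (counting 2 FLOPs per multiply-add; the constants are approximate, not a profiler):

```python
# Per-layer FLOP accounting: attention pays n*d^2 projections plus n^2*d for
# the score and mixing matmuls; the FFN pays 8*n*d^2 at d_ff = 4d.
def attn_flops(n, d):
    proj = 4 * (2 * n * d * d)      # Q, K, V, and output projections
    scores = 2 * n * n * d          # Q @ K^T
    mix = 2 * n * n * d             # weights @ V
    return proj + scores + mix

def ffn_flops(n, d, d_ff=None):
    d_ff = d_ff or 4 * d
    return 2 * (2 * n * d * d_ff)   # two matmuls, 2 FLOPs per MAC

d = 4096
# Short context: FFN dominates. Long context: the n^2 term takes over.
print(attn_flops(1024, d) < ffn_flops(1024, d))      # True
print(attn_flops(131072, d) > ffn_flops(131072, d))  # True
```

With these constants the crossover sits at $n$ equal to a few times $d$, which matches the intuition stated above.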
Parameter Counting
Transformer Parameter Count
Statement
A decoder-only transformer with $L$ layers has approximately:
$$P \approx 12\,L\,d^2 + 2\,V\,d$$
Breaking this down:
- Token embedding: $V d$ parameters
- Per layer:
  - Multi-head attention (QKV + output): $4d^2$
  - FFN (two linear layers): $8d^2$
  - Layer norm (2 per layer): $4d$ (negligible)
  - Subtotal: $\approx 12d^2$ per layer
- Output projection (often tied with embedding): $V d$
For GPT-3 scale ($L = 96$, $d = 12288$, $V \approx 50\mathrm{K}$): approximately 175B parameters.
This is an order-of-magnitude approximation. It ignores biases, layer norm scale and shift, the final output projection head, and absolute positional embeddings, and it double-counts in the presence of weight tying. Many modern implementations tie the input embedding with the output unembedding matrix, so only one $V d$ block is counted. The headline 175B figure works out because the omitted and double-counted terms approximately cancel against the neglected layer norm and bias parameters, not because $12 L d^2$ plus two embedding copies is exact.
Intuition
The vast majority of parameters are in the transformer layers, not the embeddings (unless the vocabulary is very large). Within each layer, the FFN contains $2/3$ of the parameters ($8d^2$ vs. $4d^2$ for attention). This is why the FFN layers are where most of the model's knowledge capacity resides.
Why It Matters
Parameter counting is essential for: (1) estimating compute costs for training and inference, (2) understanding scaling laws, (3) comparing architectures, and (4) estimating memory requirements. A model with $P$ parameters in fp16 requires $2P$ bytes of memory just for weights, plus additional memory for activations and optimizer states. Techniques like speculative decoding and quantization reduce these costs at serving time.
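The approximation above fits in one function; plugging in the GPT-3 shapes recovers the headline figure:

```python
# Order-of-magnitude parameter count: 12 * d^2 per layer (4 d^2 attention,
# 8 d^2 FFN) plus embedding and unembedding at V * d each.
def param_count(L, d, V, tied_embeddings=False):
    per_layer = 12 * d * d
    embed = V * d if tied_embeddings else 2 * V * d
    return L * per_layer + embed

gpt3 = param_count(L=96, d=12288, V=50257)
print(gpt3 / 1e9)   # ≈ 175 billion parameters
```

At 2 bytes per weight in fp16, that is roughly 350 GB for the weights alone, which is why GPT-3-scale inference is sharded across many accelerators.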
A Complete Transformer Block
Putting it all together, one transformer block computes (using pre-norm):
$$h = x + \mathrm{MHA}(\mathrm{Norm}(x))$$
$$y = h + \mathrm{FFN}(\mathrm{Norm}(h))$$
The full model stacks $L$ such blocks, preceded by token embedding + positional encoding and followed by a final layer norm and linear output projection to vocabulary logits.
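A sketch of the whole block in NumPy, under simplifying assumptions: single-head attention, a plain ReLU FFN, and RMSNorm without a learned gain. Real blocks use multi-head attention and usually a gated FFN:

```python
# One pre-norm transformer block: x + Attn(Norm(x)), then h + FFN(Norm(h)).
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def rms_norm(x, eps=1e-5):
    return x / np.sqrt((x ** 2).mean(axis=-1, keepdims=True) + eps)

def block(x, p):
    h = rms_norm(x)                                     # pre-norm 1
    q, k, v = h @ p["Wq"], h @ p["Wk"], h @ p["Wv"]
    attn = softmax(q @ k.T / np.sqrt(x.shape[-1])) @ v @ p["Wo"]
    x = x + attn                                        # residual 1
    h = rms_norm(x)                                     # pre-norm 2
    ffn = np.maximum(h @ p["W1"], 0.0) @ p["W2"]        # ReLU FFN
    return x + ffn                                      # residual 2

rng = np.random.default_rng(0)
d = 32
p = {name: rng.normal(size=shape) * 0.1
     for name, shape in [("Wq", (d, d)), ("Wk", (d, d)), ("Wv", (d, d)),
                         ("Wo", (d, d)), ("W1", (d, 4 * d)), ("W2", (4 * d, d))]}
y = block(rng.normal(size=(5, d)), p)   # (5, 32): shape preserved by residuals
```

The residual stream keeps its $(n, d)$ shape through every block, so $L$ such blocks compose by simple iteration.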
Common Confusions
Attention is not a learned weight matrix
The attention weights $\mathrm{softmax}(QK^\top/\sqrt{d_k})$ are computed dynamically from the input. They change for every input sequence. The learned parameters are $W^Q, W^K, W^V, W^O$, which determine how attention is computed. This input-dependence is what gives transformers their flexibility compared to fixed-weight architectures.
Multi-head attention does not multiply the cost by h
Each head operates on $d/h$ dimensions, so the total computation across all heads is the same as a single head with the full $d$ dimensions. Multi-head attention is a reorganization of computation, not a multiplication.
FlashAttention reduces memory, not FLOPs
FlashAttention computes the same mathematical operation as standard attention. It reduces memory from $O(n^2)$ to $O(n)$ by computing attention in blocks and never materializing the full $n \times n$ matrix. But the number of floating-point operations is unchanged. True compute savings require architectural changes.
Summary
- Self-attention: $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(QK^\top/\sqrt{d_k})\,V$
- Multi-head attention: $h$ parallel heads with $d_k = d/h$, concatenated and projected
- Each transformer block: attention + residual + LayerNorm + FFN + residual + LayerNorm
- Attention cost is $O(n^2 d)$: quadratic in sequence length
- FFN cost is $O(n d^2)$: dominates for short sequences
- Per-layer parameters: $\approx 12d^2$ (attention $4d^2$ + FFN $8d^2$). Modern LLMs replace the two-matrix FFN with a SwiGLU or GeGLU gated FFN that has three projections and $d_{ff} \approx \frac{8}{3}d$, which preserves the $8d^2$ budget. Mixture-of-experts variants sparsely activate a subset of FFN parameters.
- RoPE gives relative position encoding via rotation of Q and K. ALiBi is the canonical alternative: adds a linear bias to logits, extrapolates well
- Pre-norm (LayerNorm before sub-layer) is standard in modern LLMs. RMSNorm (mean-centering dropped, RMS re-scaling kept) is about 15 percent faster and is used in Llama, Mistral, DeepSeek, Qwen
- MQA and GQA shrink the inference-time KV cache by sharing K/V heads across query heads
Exercises
Problem
For a transformer with model dimension $d$, $h$ heads, and the standard $d_{ff} = 4d$, using the 2017-original two-matrix FFN with ReLU or GeLU (not the modern SwiGLU-gated FFN used in Llama, Mistral, DeepSeek, Qwen; see Shazeer 2020, arXiv:2002.05202), compute the number of parameters in one transformer block as a function of $d$ (ignoring biases and layer norm parameters).
Problem
If the sequence length doubles from $n$ to $2n$, by what factor does the attention computation cost increase? By what factor does the FFN computation cost increase?
Problem
Show that without positional encoding, self-attention is permutation-equivariant: if you permute the input tokens by a permutation $\pi$, the output tokens are permuted by the same $\pi$.
Problem
A transformer model has $L = 32$ layers, $d = 4096$, $h = 32$ heads, $d_{ff} = 11008$ (as in Llama 2 7B, which uses a SwiGLU-gated FFN with $d_{ff} = \frac{2}{3} \cdot 4d$, rounded to a multiple of 256 for kernel alignment), and vocabulary $V = 32000$. Estimate the total parameter count and the memory required to store weights in fp16.
Related Comparisons
- Autoregressive Models vs. Diffusion Models
- Autoregressive Models vs. JEPA
- Dense Transformers vs. Mixture-of-Experts
- Transformer vs. Mamba vs. TTT
Frequently Asked Questions
- Why does attention use a softmax?
- The softmax produces a probability distribution over keys, giving attention weights that are non-negative and sum to 1. This lets attention be interpreted as a soft dictionary lookup: a convex combination of values weighted by query-key similarity. The softmax is differentiable, so the whole operation can be trained end-to-end.
- Why divide by sqrt(d_k) in attention?
- Without scaling, the variance of QK^T grows linearly with d_k (assuming Q, K have unit variance). For large d_k the softmax saturates: a few entries dominate and gradients to all other positions vanish. Dividing by sqrt(d_k) keeps the logit variance ~1 regardless of dimension, keeping softmax in its sensitive regime.
- What is the residual stream?
- The shared d-dimensional vector that flows through the transformer unchanged except for additive contributions from each block. Attention and FFN sub-blocks read from and write back to the residual stream. Mechanistic interpretability uses this as the canonical model: the network is a sequence of read-process-write operations on one shared memory.
- Pre-norm vs post-norm — which is better?
- Pre-norm applies LayerNorm before each sub-block; post-norm applies it after. Pre-norm trains stably at large depth because the gradient flows through the residual without being squashed by an early LayerNorm. Modern LLMs (GPT-2 onward, LLaMA, Mistral, all dense decoder-only models) use pre-norm; original Transformer paper used post-norm.
- Why use Mixture of Experts in modern transformers?
- MoE replaces the dense FFN with N expert FFNs and a router that selects k of them per token (typically k=2). Total parameters scale with N; per-token compute scales with k. This decouples capacity from per-token cost, making 671B-parameter models like DeepSeek-V3 affordable to serve. The router is the new failure mode: it can collapse to a few experts (load balancing loss is the standard fix).
References
Canonical:
- Vaswani et al., "Attention Is All You Need" (2017). The original transformer paper, Sections 3.1-3.3 and 3.5 for architecture and positional encoding.
Current:
- Su et al., "RoFormer: Enhanced Transformer with Rotary Position Embedding" (2021), arXiv:2104.09864. RoPE, Sections 3.1-3.4 for the construction.
- Shazeer, "GLU Variants Improve Transformer" (2020), arXiv:2002.05202. Motivates SwiGLU and GeGLU, and recommends shrinking by to match the 2017 parameter budget.
- Touvron et al., "Llama 2: Open Foundation and Fine-Tuned Chat Models" (2023), arXiv:2307.09288. SwiGLU-gated FFN with $d_{ff} = 11008$ at $d = 4096$, Section 2.
- Dao et al., "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness" (NeurIPS 2022), arXiv:2205.14135.
- Zhang and Sennrich, "Root Mean Square Layer Normalization" (NeurIPS 2019), arXiv:1910.07467. RMSNorm: drops mean-centering, keeps RMS re-scaling; used in Llama, Mistral, DeepSeek.
- Shazeer, "Fast Transformer Decoding: One Write-Head is All You Need" (2019), arXiv:1911.02150. Introduces Multi-Query Attention (MQA) to shrink the KV cache.
- Ainslie, Lee-Thorp, de Jong, Zemlyanskiy, Lebrón, Sanghai, "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints" (EMNLP 2023), arXiv:2305.13245. Grouped-Query Attention, used in Llama 2 70B and Llama 3.
- Press, Smith, Lewis, "Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation" (ICLR 2022), arXiv:2108.12409. ALiBi, used in BLOOM and MPT.
Mechanistic interpretability:
- Elhage et al., "A Mathematical Framework for Transformer Circuits" (Anthropic, 2021). Previous-token heads and the residual-stream view.
- Olsson et al., "In-context Learning and Induction Heads" (Anthropic, 2022). Induction heads as a mechanism for in-context learning.
- Wang et al., "Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 Small" (2022). Name mover heads.
- Geva et al., "Transformer Feed-Forward Layers Are Key-Value Memories" (EMNLP 2021). FFN-as-memory hypothesis.
- Hase et al., "Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models" (NeurIPS 2023). Evidence that Causal Tracing localizations do not predict editable layers.
Textbooks:
- Jurafsky & Martin, Speech and Language Processing (3rd ed., draft), Chapters 7-12.
- Goodfellow, Bengio, Courville, Deep Learning (2016), Chapters 10-12.
Next Topics
The natural next steps from transformer architecture:
- Mechanistic interpretability: what do the attention heads and FFN layers actually compute?
- Hallucination theory: why the next-token prediction objective leads to confabulation
- RLHF and alignment: fine-tuning the transformer for human preferences
- Vision transformer lineage: how the transformer was adapted for computer vision (ViT, Swin, DINO, CLIP)
Last reviewed: April 26, 2026