Induction Heads
Induction heads are attention head circuits that implement a specific kind of pattern completion: given a sequence like [A][B]...[A], they predict [B]. Olsson et al. (2022) give strong causal (ablation) evidence that these heads exist in small attention-only models, and correlational co-occurrence evidence linking their formation to a sudden jump in in-context-learning ability in larger transformers. They are one mechanism among several that have been proposed for in-context learning, not the whole story.
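The [A][B]...[A] → [B] completion can be sketched as a simple lookup: find the most recent earlier occurrence of the current token and predict the token that followed it. This toy function (all names are illustrative, not from Olsson et al.) captures the behavior only; real induction heads implement it inside a transformer via a prefix-matching attention pattern combined with a copying circuit.

```python
def induction_predict(tokens):
    """Toy sketch of induction-head behavior: given [A][B]...[A],
    predict [B] by copying the token that followed the previous
    occurrence of the current token. Returns None if no match."""
    current = tokens[-1]
    # Scan the prefix right-to-left for the most recent earlier
    # occurrence of the current token.
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == current:
            return tokens[i + 1]  # copy the successor token
    return None

# The pattern [A][B]...[A] completes to [B].
print(induction_predict(["A", "B", "C", "A"]))  # -> B
```

In a real model this lookup is soft: the attention head assigns high weight to positions whose *preceding* token matches the current one, and the head's output moves the attended token toward the logits, rather than performing an exact string match.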