Prerequisites for Distributed Training Theory

Topics you need before working through Distributed Training Theory. Direct prerequisites are listed first; transitive prerequisites (everything reachable by following the chain through them) follow.

Direct prerequisites (10)

  1. Optimizer Theory: SGD, Adam, and Muon (layer 3, tier 1)
  2. Parallel Processing Fundamentals (layer 5, tier 2)
  3. Batch Size and Learning Dynamics (layer 2, tier 2)
  4. Broadcast Joins in Distributed Compute (layer 4, tier 3)
  5. Dask Parallel Python (layer 4, tier 3)
  6. Federated Learning (layer 3, tier 2)
  7. Kafka Streaming Platform (layer 4, tier 3)
  8. Ray Distributed Python (layer 4, tier 3)
  9. Running ML Workloads on GPUs (layer 4, tier 3)
  10. Tokenization and Information Theory (layer 4, tier 3)

Reachable through the chain (197)

These topics are not directly cited as prerequisites but are reached transitively by following the chain upward. Working through the direct prerequisites pulls these in.
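The "reachable through the chain" set is just the transitive closure of the prerequisite graph, minus the direct prerequisites themselves. A minimal sketch of that computation, using a hypothetical toy graph (the topic names and edges below are illustrative, not the site's actual data):

```python
# Hypothetical prerequisite graph: topic -> list of its direct prerequisites.
# Edges point "upward" from a topic to what it requires.
prereqs = {
    "Distributed Training Theory": [
        "Ray Distributed Python",
        "Parallel Processing Fundamentals",
    ],
    "Ray Distributed Python": ["Dask Parallel Python"],
    "Parallel Processing Fundamentals": ["Batch Size and Learning Dynamics"],
    "Dask Parallel Python": [],
    "Batch Size and Learning Dynamics": [],
}

def transitive_prereqs(topic, graph):
    """Return every topic reachable by following prerequisite edges upward."""
    seen, stack = set(), list(graph.get(topic, []))
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(graph.get(t, []))
    return seen

direct = set(prereqs["Distributed Training Theory"])
reachable = transitive_prereqs("Distributed Training Theory", prereqs)
# Topics reached only transitively -- the analogue of the list below:
indirect = reachable - direct
```

With the toy graph above, `indirect` contains the two topics that are not cited directly but are pulled in through the chain; on the real graph the same traversal yields the 197 topics listed here.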