diff --git a/site/checkpoint-system.md b/site/checkpoint-system.md
index 5c2946ea..4f6c1469 100644
--- a/site/checkpoint-system.md
+++ b/site/checkpoint-system.md
@@ -1,5 +1,12 @@
# 🎯 TinyTorch Checkpoint System
+
Technical Implementation Guide
Capability validation system architecture and implementation details
@@ -7,7 +14,7 @@
**Purpose**: Technical documentation for the checkpoint validation system. Understand the architecture and implementation details of capability-based learning assessment.
-The TinyTorch checkpoint system provides technical infrastructure for capability validation and progress tracking. This system transforms traditional module completion into measurable skill assessment through automated testing and validation.
+The TinyTorch checkpoint system provides optional infrastructure for capability validation and progress tracking. This system transforms traditional module completion into measurable skill assessment through automated testing and validation.
diff --git a/site/learning-progress.md b/site/learning-progress.md
index d76ec47c..4a619693 100644
--- a/site/learning-progress.md
+++ b/site/learning-progress.md
@@ -2,21 +2,44 @@
Monitor Your Learning Journey
-
Track your capability development through 16 essential ML systems skills
+
Track your capability development through 18 modules and 6 historical milestones
-**Purpose**: Monitor your capability development through the 21-checkpoint system. Track progress from foundation skills to production ML systems mastery.
+**Purpose**: Monitor your progress as you build a complete ML framework from scratch. Track module completion and milestone achievements.
-Track your progression through 21 essential ML systems capabilities. Each checkpoint represents fundamental competencies you'll master through hands-on implementation, from tensor operations to production-ready systems.
+## The Core Workflow
-## How to Track Your Progress
+TinyTorch follows a simple three-step cycle:
+
+```
+1. Edit modules → 2. Export to package → 3. Validate with milestones
+```
+
+**👉 See [Student Workflow](student-workflow.html)** for the complete development cycle.
+
+## Understanding Modules vs Checkpoints vs Milestones
-
🎯 Capability-Based Learning
-Use TinyTorch's 21-checkpoint system to monitor your capability development. Track progress from foundation skills to production ML systems mastery.
+**Modules (18 total)**: What you're building - the actual code implementations
-**👉 See [Essential Commands](tito-essentials.html)** for complete progress tracking commands and workflow.
+- Located in `modules/source/`
+- You implement each component from scratch
+- Export with `tito module complete N`
+
+**Milestones (6 total)**: How you validate - historical proof scripts
+
+- Located in `milestones/`
+- Run scripts that use YOUR implementations
+- Recreate ML history (1957 Perceptron → 2018 MLPerf)
+
+**Checkpoints (21 total)**: Optional progress tracking
+
+- Use `tito checkpoint status` to view
+- Tracks capability mastery
+- Not required for the core workflow
+
+**👉 See [Journey Through ML History](chapters/milestones.html)** for milestone details.
@@ -40,40 +63,66 @@ TinyTorch organizes learning through **three pedagogically-motivated tiers**, ea
**👉 See [Quick Start Guide](quickstart-guide.html)** for immediate hands-on experience with your first module.
-## 21 Core Capabilities
+## Module Progression
-Track progress through essential ML systems competencies:
+Your journey through 18 modules organized in three tiers:
-```{admonition} Capability Tracking
-:class: note
-Each checkpoint validates mastery of fundamental ML systems skills.
+### 🏗️ Foundation Tier (Modules 01-07)
+
+Build the mathematical infrastructure:
+
+| Module | Component | What You Build |
+|--------|-----------|----------------|
+| 01 | Tensor | N-dimensional arrays with operations |
+| 02 | Activations | ReLU, Softmax, nonlinear functions |
+| 03 | Layers | Linear layers, forward/backward |
+| 04 | Losses | CrossEntropyLoss, MSELoss |
+| 05 | Autograd | Automatic differentiation engine |
+| 06 | Optimizers | SGD, Adam, parameter updates |
+| 07 | Training | Complete training loops |
+
+**Milestone unlocked**: M01 Perceptron (1957), M02 XOR (1969)
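Together, these seven modules contain everything needed for a training step. As a rough illustration of the math involved (plain NumPy here, not TinyTorch's actual API), one SGD step on a linear model with MSE loss looks like this:

```python
import numpy as np

# Toy data: targets come from a known linear map, so a linear model can fit them.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
true_w = rng.normal(size=(4, 1))
y = X @ true_w

W = np.zeros((4, 1))  # the parameter a linear layer (Module 03) would own
lr = 0.1

def mse(pred, target):
    # Module 04's idea of MSELoss: mean squared error
    return float(((pred - target) ** 2).mean())

loss_before = mse(X @ W, y)

# What Modules 05-06 automate: gradient of MSE w.r.t. W, then an SGD update.
grad_W = (2.0 / len(X)) * X.T @ (X @ W - y)
W -= lr * grad_W

loss_after = mse(X @ W, y)  # smaller than loss_before
```

In TinyTorch you build each of these pieces as a separate module; the autograd engine replaces the hand-derived `grad_W` line.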
+
+### 🏛️ Architecture Tier (Modules 08-13)
+
+Implement modern architectures:
+
+| Module | Component | What You Build |
+|--------|-----------|----------------|
+| 08 | DataLoader | Batching and data pipelines |
+| 09 | Spatial | Conv2d, MaxPool2d for vision |
+| 10 | Tokenization | Character-level tokenizers |
+| 11 | Embeddings | Token and positional embeddings |
+| 12 | Attention | Multi-head self-attention |
+| 13 | Transformers | LayerNorm, TransformerBlock, GPT |
+
+**Milestones unlocked**: M03 MLP (1986), M04 CNN (1998), M05 Transformers (2017)
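The centerpiece of this tier is Module 12's attention mechanism. As a conceptual sketch (illustrative NumPy, not the interface you'll build in TinyTorch), scaled dot-product attention computes a weighted mix of value rows:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax (the Module 02 trick: subtract the row max)
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each output row is a weighted mix of V's rows
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (seq, seq) similarity logits
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out, w = attention(Q, K, V)  # out: (4, 8), w rows sum to 1
```

Module 12 wraps this core in multiple heads; Module 13 stacks it with LayerNorm into transformer blocks.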
+
+### ⚡ Optimization Tier (Modules 14-18)
+
+Optimize for production:
+
+| Module | Component | What You Build |
+|--------|-----------|----------------|
+| 14 | Profiling | Performance measurement tools |
+| 15 | Quantization | INT8/FP16 implementations |
+| 16 | Compression | Pruning techniques |
+| 17 | Memoization | KV-cache for generation |
+| 18 | Acceleration | Batching strategies |
+
+**Milestone unlocked**: M06 MLPerf (2018)
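To make the ideas concrete, here is the kind of arithmetic Module 15 covers: symmetric INT8 quantization shrinks weights 4x while keeping the round-trip error bounded by half a quantization step. (Illustrative NumPy sketch; the module's actual API may differ.)

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization: map [-max|x|, +max|x|] onto [-127, 127]
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)       # int8 weights: 4x smaller than float32
w_hat = dequantize(q, scale)
max_err = float(np.abs(w - w_hat).max())  # bounded by scale / 2
```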
+
+## Optional: Checkpoint System
+
+Track capability mastery with the optional checkpoint system:
+
+```bash
+tito checkpoint status # View your progress
```
-| Checkpoint | Capability Question | Modules Required | Status |
-|------------|-------------------|------------------|--------|
-| 00 | Can I set up my environment? | 01 | ⬜ Setup |
-| 01 | Can I manipulate tensors? | 02 | ⬜ Foundation |
-| 02 | Can I add nonlinearity? | 03 | ⬜ Intelligence |
-| 03 | Can I build network layers? | 04 | ⬜ Components |
-| 04 | Can I measure loss? | 05 | ⬜ Networks |
-| 05 | Can I compute gradients? | 06 | ⬜ Learning |
-| 06 | Can I optimize parameters? | 07 | ⬜ Optimization |
-| 07 | Can I train models? | 08 | ⬜ Training |
-| 08 | Can I process images? | 09 | ⬜ Vision |
-| 09 | Can I load data efficiently? | 10 | ⬜ Data |
-| 10 | Can I process text? | 11 | ⬜ Language |
-| 11 | Can I create embeddings? | 12 | ⬜ Representation |
-| 12 | Can I implement attention? | 13 | ⬜ Attention |
-| 13 | Can I build transformers? | 14 | ⬜ Architecture |
-| 14 | Can I profile performance? | 14 | ⬜ Deployment |
-| 15 | Can I quantize models? | 15 | ⬜ Quantization |
-| 16 | Can I compress networks? | 16 | ⬜ Compression |
-| 17 | Can I cache computations? | 17 | ⬜ Memoization |
-| 18 | Can I accelerate algorithms? | 18 | ⬜ Acceleration |
-| 19 | Can I benchmark competitively? | 19 | ⬜ Competition |
-| 20 | Can I build complete language models? | 20 | ⬜ TinyGPT Capstone |
+These 21 capability checkpoints track your mastery as you progress through the modules. They are helpful for self-assessment but **not required** for the core workflow.
-**👉 See [Essential Commands](tito-essentials.html)** for progress monitoring commands.
+**👉 See [Essential Commands](tito-essentials.html)** for checkpoint commands.
---
@@ -121,10 +170,25 @@ Begin developing ML systems competencies immediately:
Begin Setup →
-## Track Your Progress
+## How to Track Your Progress
-To monitor your capability development and learning progression, use the TITO checkpoint commands.
+The essential workflow:
-**👉 See [Essential Commands](tito-essentials.html)** for complete command reference and usage examples.
+```bash
+# 1. Work on a module
+cd modules/source/03_layers
+jupyter lab 03_layers_dev.py
-**Approach**: You're building ML systems engineering capabilities through hands-on implementation. Each capability checkpoint validates practical competency, not just theoretical understanding.
\ No newline at end of file
+# 2. Export when ready
+tito module complete 03
+
+# 3. Validate with milestones
+cd ../../milestones/01_1957_perceptron
+python 01_rosenblatt_forward.py # Uses YOUR implementation!
+```
+
+**Optional**: Use `tito checkpoint status` to see capability tracking.
+
+**👉 See [Student Workflow](student-workflow.html)** for the complete development cycle.
+
+**Approach**: You're building ML systems engineering capabilities through hands-on implementation. Each module adds new functionality to your framework, and milestones prove it works.
\ No newline at end of file
diff --git a/site/usage-paths/classroom-use.md b/site/usage-paths/classroom-use.md
index b16c7fe7..b57d16ba 100644
--- a/site/usage-paths/classroom-use.md
+++ b/site/usage-paths/classroom-use.md
@@ -1,36 +1,43 @@
# TinyTorch for Instructors: Complete ML Systems Course
+
+
🚧 Classroom Integration: Coming Soon
+
NBGrader integration and instructor tooling are under active development. Full documentation and automated grading workflows will be available in future releases.
+
Currently available: Students can use TinyTorch with the standard workflow (edit modules → export → validate with milestones).
+
👉 See Student Workflow for the current development cycle.
+
+
-
👉 Course Overview & Benefits: This page explains WHAT TinyTorch offers for ML education and WHY it's effective.
-
👉 For Setup & Daily Workflow: See
Technical Instructor Guide for step-by-step NBGrader setup and semester management.
+
👉 Course Vision: This page describes the planned TinyTorch classroom experience.
+
👉 For Current Usage: Students should follow the
Student Workflow guide.
-
🏫 Turn-Key ML Systems Education
+
🏫 Planned: Turn-Key ML Systems Education
Transform students from framework users to systems engineers
-**Transform Your ML Teaching:** Replace black-box API courses with deep systems understanding. Your students will build neural networks from scratch, understand every operation, and graduate job-ready for ML engineering roles.
+**Vision:** Replace black-box API courses with deep systems understanding. Students will build neural networks from scratch, understand every operation, and graduate job-ready for ML engineering roles.
---
-## 🎯 Complete Course Infrastructure
+## 🎯 Planned Course Infrastructure
-
What You Get: Production-Ready Course Materials
+
Planned Features: Production-Ready Course Materials
-- Three-tier progression (20 modules) with NBGrader integration
-- 200+ automated tests for immediate feedback
+- Three-tier progression (18 modules) with NBGrader integration
+- Automated grading for immediate feedback
- Professional CLI tools for development workflow
- Real datasets (CIFAR-10, text generation)
-- Complete instructor guide with setup & grading
-- Flexible pacing (8-20 weeks depending on depth)
+- Complete instructor guide with setup & grading (coming soon)
+- Flexible pacing (14-18 weeks depending on depth)
- Industry practices (Git, testing, documentation)
- Academic foundation from university research
@@ -38,19 +45,10 @@
-**Course Duration:** 14-16 weeks (flexible pacing)
+**Planned Course Duration:** 14-16 weeks (flexible pacing)
**Student Outcome:** Complete ML framework supporting vision AND language models
-```{admonition} Complete Instructor Documentation
-:class: tip
-**See our comprehensive [Instructor Guide](../instructor-guide.md)** for:
-- Complete setup walkthrough (30 minutes)
-- Weekly assignment workflow with NBGrader
-- Grading automation and feedback generation
-- Student support and troubleshooting
-- End-to-end course management
-- Quick reference commands
-```
+**Current Status:** Students can work through modules individually using the standard workflow. Full classroom integration (NBGrader automation, instructor dashboards) coming soon.
---