From c7bc68fa372429b3101ef14943ca280515b7ece2 Mon Sep 17 00:00:00 2001
From: Vijay Janapa Reddi
Date: Tue, 11 Nov 2025 21:49:37 -0500
Subject: [PATCH] Complete Phase 2 and 3 workflow documentation updates
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Updated remaining documentation to clarify the actual TinyTorch workflow
and mark optional/future features appropriately.

**Phase 2 (Important files):**

- **learning-progress.md**: Added workflow context at top, clear modules
  vs checkpoints vs milestones explanation, module progression tables by
  tier, marked checkpoints as optional
- **checkpoint-system.md**: Added prominent "Optional Progress Tracking"
  banner at top, clarified this is not required for core workflow

**Phase 3 (Supporting files):**

- **classroom-use.md**: Added "Coming Soon" banner for NBGrader
  integration, clarified current status vs planned features, updated to
  reflect 18 modules (not 20)

Key clarifications across all files:

- Core workflow: Edit modules → `tito module complete N` → Run milestone scripts
- Checkpoints are optional capability tracking (helpful for self-assessment)
- Instructor features marked as "coming soon" / "under development"
- All pages reference canonical student-workflow.md

Completes the workflow documentation audit identified by website-manager.
---
 site/checkpoint-system.md         |   9 +-
 site/learning-progress.md         | 144 +++++++++++++++++++++---------
 site/usage-paths/classroom-use.md |  40 ++++-----
 3 files changed, 131 insertions(+), 62 deletions(-)

diff --git a/site/checkpoint-system.md b/site/checkpoint-system.md
index 5c2946ea..4f6c1469 100644
--- a/site/checkpoint-system.md
+++ b/site/checkpoint-system.md
@@ -1,5 +1,12 @@
 # 🎯 TinyTorch Checkpoint System
+
+**📋 Optional Progress Tracking**
+
+This checkpoint system is optional for tracking your learning progress. It's not required for the core TinyTorch workflow.
+
+**Core workflow**: Edit modules → Export with `tito module complete N` → Validate with milestone scripts
+
+📖 See Student Workflow for the essential development cycle.
+
 Technical Implementation Guide
 
 Capability validation system architecture and implementation details
 
@@ -7,7 +14,7 @@
 **Purpose**: Technical documentation for the checkpoint validation system. Understand the architecture and implementation details of capability-based learning assessment.
 
-The TinyTorch checkpoint system provides technical infrastructure for capability validation and progress tracking. This system transforms traditional module completion into measurable skill assessment through automated testing and validation.
+The TinyTorch checkpoint system provides optional infrastructure for capability validation and progress tracking. This system transforms traditional module completion into measurable skill assessment through automated testing and validation.
diff --git a/site/learning-progress.md b/site/learning-progress.md
index d76ec47c..4a619693 100644
--- a/site/learning-progress.md
+++ b/site/learning-progress.md
@@ -2,21 +2,44 @@
 Monitor Your Learning Journey
 
-Track your capability development through 16 essential ML systems skills
+Track your capability development through 18 modules and 6 historical milestones
 
-**Purpose**: Monitor your capability development through the 21-checkpoint system. Track progress from foundation skills to production ML systems mastery.
-
-Track your progression through 21 essential ML systems capabilities. Each checkpoint represents fundamental competencies you'll master through hands-on implementation—from tensor operations to production-ready systems.
+**Purpose**: Monitor your progress as you build a complete ML framework from scratch. Track module completion and milestone achievements.
 
-## How to Track Your Progress
+## The Core Workflow
+
+TinyTorch follows a simple three-step cycle:
+
+```
+1. Edit modules → 2. Export to package → 3. Validate with milestones
+```
+
+**📖 See [Student Workflow](student-workflow.html)** for the complete development cycle.
+
+## Understanding Modules vs Checkpoints vs Milestones
 
-**🎯 Capability-Based Learning**
-
-Use TinyTorch's 21-checkpoint system to monitor your capability development. Track progress from foundation skills to production ML systems mastery.
+**Modules (18 total)**: What you're building - the actual code implementations
 
-**📖 See [Essential Commands](tito-essentials.html)** for complete progress tracking commands and workflow.
+- Located in `modules/source/`
+- You implement each component from scratch
+- Export with `tito module complete N`
+
+**Milestones (6 total)**: How you validate - historical proof scripts
+
+- Located in `milestones/`
+- Run scripts that use YOUR implementations
+- Recreate ML history (1957 Perceptron → 2018 MLPerf)
+
+**Checkpoints (21 total)**: Optional progress tracking
+
+- Use `tito checkpoint status` to view
+- Tracks capability mastery
+- Not required for the core workflow
+
+**📖 See [Journey Through ML History](chapters/milestones.html)** for milestone details.
@@ -40,40 +63,66 @@ TinyTorch organizes learning through **three pedagogically-motivated tiers**, ea
 
 **📖 See [Quick Start Guide](quickstart-guide.html)** for immediate hands-on experience with your first module.
 
-## 21 Core Capabilities
+## Module Progression
 
-Track progress through essential ML systems competencies:
+Your journey through 18 modules organized in three tiers:
 
-```{admonition} Capability Tracking
-:class: note
-Each checkpoint validates mastery of fundamental ML systems skills.
+### 🏗️ Foundation Tier (Modules 01-07)
+
+Build the mathematical infrastructure:
+
+| Module | Component | What You Build |
+|--------|-----------|----------------|
+| 01 | Tensor | N-dimensional arrays with operations |
+| 02 | Activations | ReLU, Softmax, nonlinear functions |
+| 03 | Layers | Linear layers, forward/backward |
+| 04 | Losses | CrossEntropyLoss, MSELoss |
+| 05 | Autograd | Automatic differentiation engine |
+| 06 | Optimizers | SGD, Adam, parameter updates |
+| 07 | Training | Complete training loops |
+
+**Milestones unlocked**: M01 Perceptron (1957), M02 XOR (1969)
+
+### 🏛️ Architecture Tier (Modules 08-13)
+
+Implement modern architectures:
+
+| Module | Component | What You Build |
+|--------|-----------|----------------|
+| 08 | DataLoader | Batching and data pipelines |
+| 09 | Spatial | Conv2d, MaxPool2d for vision |
+| 10 | Tokenization | Character-level tokenizers |
+| 11 | Embeddings | Token and positional embeddings |
+| 12 | Attention | Multi-head self-attention |
+| 13 | Transformers | LayerNorm, TransformerBlock, GPT |
+
+**Milestones unlocked**: M03 MLP (1986), M04 CNN (1998), M05 Transformers (2017)
+
+### ⚡ Optimization Tier (Modules 14-18)
+
+Optimize for production:
+
+| Module | Component | What You Build |
+|--------|-----------|----------------|
+| 14 | Profiling | Performance measurement tools |
+| 15 | Quantization | INT8/FP16 implementations |
+| 16 | Compression | Pruning techniques |
+| 17 | Memoization | KV-cache for generation |
+| 18 | Acceleration | Batching strategies |
+
+**Milestone unlocked**: M06 MLPerf (2018)
+
+## Optional: Checkpoint System
+
+Track capability mastery with the optional checkpoint system:
+
+```bash
+tito checkpoint status # View your progress
 ```
 
-| Checkpoint | Capability Question | Modules Required | Status |
-|------------|-------------------|------------------|--------|
-| 00 | Can I set up my environment? | 01 | ⬜ Setup |
-| 01 | Can I manipulate tensors? | 02 | ⬜ Foundation |
-| 02 | Can I add nonlinearity? | 03 | ⬜ Intelligence |
-| 03 | Can I build network layers? | 04 | ⬜ Components |
-| 04 | Can I measure loss? | 05 | ⬜ Networks |
-| 05 | Can I compute gradients? | 06 | ⬜ Learning |
-| 06 | Can I optimize parameters? | 07 | ⬜ Optimization |
-| 07 | Can I train models? | 08 | ⬜ Training |
-| 08 | Can I process images? | 09 | ⬜ Vision |
-| 09 | Can I load data efficiently? | 10 | ⬜ Data |
-| 10 | Can I process text? | 11 | ⬜ Language |
-| 11 | Can I create embeddings? | 12 | ⬜ Representation |
-| 12 | Can I implement attention? | 13 | ⬜ Attention |
-| 13 | Can I build transformers? | 14 | ⬜ Architecture |
-| 14 | Can I profile performance? | 14 | ⬜ Deployment |
-| 15 | Can I quantize models? | 15 | ⬜ Quantization |
-| 16 | Can I compress networks? | 16 | ⬜ Compression |
-| 17 | Can I cache computations? | 17 | ⬜ Memoization |
-| 18 | Can I accelerate algorithms? | 18 | ⬜ Acceleration |
-| 19 | Can I benchmark competitively? | 19 | ⬜ Competition |
-| 20 | Can I build complete language models? | 20 | ⬜ TinyGPT Capstone |
+This provides 21 capability checkpoints that correspond to the modules and validate your understanding. Helpful for self-assessment but **not required** for the core workflow.
 
-**📖 See [Essential Commands](tito-essentials.html)** for progress monitoring commands.
+**📖 See [Essential Commands](tito-essentials.html)** for checkpoint commands.
 ---
 
@@ -121,10 +170,25 @@ Begin developing ML systems competencies immediately:
 
 Begin Setup →
 
-## Track Your Progress
+## How to Track Your Progress
 
-To monitor your capability development and learning progression, use the TITO checkpoint commands.
+The essential workflow:
 
-**📖 See [Essential Commands](tito-essentials.html)** for complete command reference and usage examples.
+```bash
+# 1. Work on a module
+cd modules/source/03_layers
+jupyter lab 03_layers_dev.py
 
-**Approach**: You're building ML systems engineering capabilities through hands-on implementation. Each capability checkpoint validates practical competency, not just theoretical understanding.
\ No newline at end of file
+# 2. Export when ready
+tito module complete 03
+
+# 3. Validate with milestones
+cd ../../milestones/01_1957_perceptron
+python 01_rosenblatt_forward.py # Uses YOUR implementation!
+```
+
+**Optional**: Use `tito checkpoint status` to see capability tracking.
+
+**📖 See [Student Workflow](student-workflow.html)** for the complete development cycle.
+
+**Approach**: You're building ML systems engineering capabilities through hands-on implementation. Each module adds new functionality to your framework, and milestones prove it works.
\ No newline at end of file
diff --git a/site/usage-paths/classroom-use.md b/site/usage-paths/classroom-use.md
index b16c7fe7..b57d16ba 100644
--- a/site/usage-paths/classroom-use.md
+++ b/site/usage-paths/classroom-use.md
@@ -1,36 +1,43 @@
 # TinyTorch for Instructors: Complete ML Systems Course
+
+**🚧 Classroom Integration: Coming Soon**
+
+NBGrader integration and instructor tooling are under active development. Full documentation and automated grading workflows will be available in future releases.
+
+**Currently available**: Students can use TinyTorch with the standard workflow (edit modules → export → validate with milestones).
+
+📖 See Student Workflow for the current development cycle.
+
-📖 Course Overview & Benefits: This page explains WHAT TinyTorch offers for ML education and WHY it's effective.
-📖 For Setup & Daily Workflow: See Technical Instructor Guide for step-by-step NBGrader setup and semester management.
+📖 Course Vision: This page describes the planned TinyTorch classroom experience.
+📖 For Current Usage: Students should follow the Student Workflow guide.
-**🏫 Turn-Key ML Systems Education**
+**🏫 Planned: Turn-Key ML Systems Education**
 
 Transform students from framework users to systems engineers
 
-**Transform Your ML Teaching:** Replace black-box API courses with deep systems understanding. Your students will build neural networks from scratch, understand every operation, and graduate job-ready for ML engineering roles.
+**Vision:** Replace black-box API courses with deep systems understanding. Students will build neural networks from scratch, understand every operation, and graduate job-ready for ML engineering roles.
 
 ---
 
-## 🎯 Complete Course Infrastructure
+## 🎯 Planned Course Infrastructure
-**What You Get: Production-Ready Course Materials**
+**Planned Features: Production-Ready Course Materials**
 
-- Three-tier progression (20 modules) with NBGrader integration
-- 200+ automated tests for immediate feedback
+- Three-tier progression (18 modules) with NBGrader integration
+- Automated grading for immediate feedback
 - Professional CLI tools for development workflow
 - Real datasets (CIFAR-10, text generation)
-- Complete instructor guide with setup & grading
-- Flexible pacing (8-20 weeks depending on depth)
+- Complete instructor guide with setup & grading (coming soon)
+- Flexible pacing (14-18 weeks depending on depth)
 - Industry practices (Git, testing, documentation)
 - Academic foundation from university research
@@ -38,19 +45,10 @@
 
-**Course Duration:** 14-16 weeks (flexible pacing)
+**Planned Course Duration:** 14-16 weeks (flexible pacing)
 
 **Student Outcome:** Complete ML framework supporting vision AND language models
 
-```{admonition} Complete Instructor Documentation
-:class: tip
-**See our comprehensive [Instructor Guide](../instructor-guide.md)** for:
-- Complete setup walkthrough (30 minutes)
-- Weekly assignment workflow with NBGrader
-- Grading automation and feedback generation
-- Student support and troubleshooting
-- End-to-end course management
-- Quick reference commands
-```
+**Current Status:** Students can work through modules individually using the standard workflow. Full classroom integration (NBGrader automation, instructor dashboards) coming soon.
 
 ---
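
The milestone scripts this patch references (e.g. `milestones/01_1957_perceptron/01_rosenblatt_forward.py`) recreate historical results using the student's own module code. As a rough, self-contained sketch (not TinyTorch's actual implementation; the function name and logic here are hypothetical stand-ins), the computation a Rosenblatt-style perceptron milestone exercises looks like:

```python
# Illustrative sketch only: a Rosenblatt-style (1957) perceptron forward
# pass, the kind of computation milestone M01 validates. TinyTorch's real
# scripts run against the student's own Tensor/Linear implementations.

def perceptron_forward(x, w, b):
    """Weighted sum of inputs followed by a hard threshold (step activation)."""
    z = sum(xi * wi for xi, wi in zip(x, w)) + b
    return 1 if z > 0 else 0

# With weights (1, 1) and bias -1.5, a single unit computes logical AND:
for inputs, expected in [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]:
    assert perceptron_forward(inputs, [1, 1], -1.5) == expected
```

No single unit of this form can compute XOR, which is exactly the limitation milestone M02 (1969) demonstrates before the MLP milestone resolves it.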
-**Course Duration:** 14-16 weeks (flexible pacing) +**Planned Course Duration:** 14-16 weeks (flexible pacing) **Student Outcome:** Complete ML framework supporting vision AND language models -```{admonition} Complete Instructor Documentation -:class: tip -**See our comprehensive [Instructor Guide](../instructor-guide.md)** for: -- Complete setup walkthrough (30 minutes) -- Weekly assignment workflow with NBGrader -- Grading automation and feedback generation -- Student support and troubleshooting -- End-to-end course management -- Quick reference commands -``` +**Current Status:** Students can work through modules individually using the standard workflow. Full classroom integration (NBGrader automation, instructor dashboards) coming soon. ---