# Getting Started with TinyTorch
:::{admonition} Early Access
:class: warning
You're ahead of the curve. TinyTorch is functional but still being refined. Expect rough edges, incomplete documentation, and things that might change. If you proceed, you're helping us shape this by finding what works and what doesn't.

**Best approach right now:** Browse the code and concepts. For hands-on building, check back when we announce classroom readiness (Summer/Fall 2026).

Questions or feedback? [Join the discussion →](https://github.com/harvard-edge/cs249r_book/discussions/1076)
:::
Welcome to TinyTorch! This comprehensive guide will get you started whether you're a student building ML systems, an instructor setting up a course, or a TA supporting learners.
## Choose Your Path

Jump directly to your role-specific guide.
## For Students: Build Your ML Framework
### Quick Setup (2 Minutes)
Get your development environment ready to build ML systems from scratch:
```bash
# One-line install (run from a project folder like ~/projects)
curl -sSL tinytorch.ai/install | bash

# Activate and verify
cd tinytorch
source .venv/bin/activate
tito setup
```
**What this does:**
- Checks your system (Python 3.8+, git)
- Downloads TinyTorch to a `tinytorch/` folder
- Creates an isolated virtual environment
- Installs all dependencies
- Verifies the installation
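The system checks above can be mirrored in a few lines of Python (a sketch of the idea, not tito's actual implementation):

```python
import shutil
import sys

# Mirror the installer's preflight checks: Python 3.8+ and git on PATH
python_ok = sys.version_info >= (3, 8)
git_ok = shutil.which("git") is not None

print(f"Python 3.8+ : {python_ok}")
print(f"git on PATH : {git_ok}")
```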
**Keeping up to date:**

```bash
tito update  # Check for and install updates (your work is preserved)
```
### Join the Community (Optional)
After setup, join the global TinyTorch community and validate your installation:
```bash
# Log in to join the community
tito community login

# Run baseline benchmark to validate setup
tito benchmark baseline
```
All community data is stored locally in the `.tinytorch/` directory. See the Community Guide for complete features.
### The TinyTorch Build Cycle
TinyTorch follows a simple three-step workflow that you'll repeat for each module:
```{mermaid}
graph LR
    A[1. Edit Module<br/>modules/NN_name.ipynb] --> B[2. Export to Package<br/>tito module complete N]
    B --> C[3. Validate with Milestones<br/>Run milestone scripts]
    C --> A
    style A fill:#fffbeb
    style B fill:#f0fdf4
    style C fill:#fef3c7
```
#### Step 1: Edit Modules
Work on module notebooks interactively:
```bash
# Example: Working on Module 01 (Tensor)
cd modules/01_tensor
jupyter lab 01_tensor.ipynb
```
Each module is a Jupyter notebook where you'll:
- Implement the required functionality from scratch
- Add docstrings and comments
- Run and test your code inline
- See immediate feedback
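For a flavor of that loop, a hypothetical notebook cell might implement a small function and check it on the spot (illustrative only; `add_lists` is not part of any module):

```python
# Implement a small piece of functionality...
def add_lists(a, b):
    """Elementwise addition of two equal-length lists."""
    return [x + y for x, y in zip(a, b)]

# ...then test it inline for immediate feedback, as the notebooks encourage
assert add_lists([1, 2, 3], [4, 5, 6]) == [5, 7, 9]
print("add_lists passes its inline check")
```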
#### Step 2: Export to Package
Once your implementation is complete, export it to the main TinyTorch package:
```bash
tito module complete MODULE_NUMBER

# Example:
tito module complete 01  # Export Module 01 (Tensor)
```
After export, your code becomes importable:
```python
from tinytorch.core.tensor import Tensor  # YOUR implementation!
```
#### Step 3: Validate with Milestones
Run milestone scripts to prove your implementation works:
```bash
cd milestones/01_1957_perceptron
python 01_rosenblatt_forward.py  # Uses YOUR Tensor (M01)
python 02_rosenblatt_trained.py  # Uses YOUR implementation (M01-M07)
```
Each milestone has a README explaining:
- Required modules
- Historical context
- Expected results
- What you're learning
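For a sense of what the first milestone exercises, here is a minimal perceptron forward pass in plain Python (an illustrative sketch; the real scripts run on your own Tensor implementation):

```python
# Illustrative 1957-style perceptron: weighted sum followed by a step activation
def perceptron_forward(x, w, b):
    z = sum(xi * wi for xi, wi in zip(x, w)) + b
    return 1 if z > 0 else 0

# Weights and bias chosen so the unit computes logical AND
weights, bias = [1.0, 1.0], -1.5
print(perceptron_forward([1, 1], weights, bias))  # fires only when both inputs are 1
print(perceptron_forward([0, 1], weights, bias))  # stays off otherwise
```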
See Historical Milestones for the complete progression through ML history.
### Your First Module (15 Minutes)

Start with Module 01 to build tensor operations, the foundation of all neural networks:
```bash
# Step 1: Edit the module
cd modules/01_tensor
jupyter lab 01_tensor.ipynb

# Step 2: Export when ready
tito module complete 01
```

```python
# Step 3: Validate
from tinytorch.core.tensor import Tensor
x = Tensor([1, 2, 3])  # YOUR implementation!
```
**What you'll implement:**
- N-dimensional array creation
- Mathematical operations (add, multiply, matmul)
- Shape manipulation (reshape, transpose)
- Memory layout understanding
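To make the scope concrete, a bare-bones version of such a class might look like the sketch below (a simplified illustration, not the module's actual API):

```python
class MiniTensor:
    """Toy 2-D stand-in for the Tensor built in Module 01 (illustrative only)."""

    def __init__(self, data):
        # Store as a list of rows; a flat input becomes a single row
        self.data = data if isinstance(data[0], list) else [data]

    @property
    def shape(self):
        return (len(self.data), len(self.data[0]))

    def __add__(self, other):
        # Elementwise addition, row by row
        return MiniTensor([[a + b for a, b in zip(r1, r2)]
                           for r1, r2 in zip(self.data, other.data)])

    def matmul(self, other):
        cols = list(zip(*other.data))  # transpose for column access
        return MiniTensor([[sum(a * b for a, b in zip(row, col)) for col in cols]
                           for row in self.data])

x = MiniTensor([[1, 2], [3, 4]])
y = MiniTensor([[5, 6], [7, 8]])
print((x + y).data)      # [[6, 8], [10, 12]]
print(x.matmul(y).data)  # [[19, 22], [43, 50]]
```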
### Module Progression
TinyTorch has 20 modules organized in progressive tiers:
- Foundation (01-07): Core ML infrastructure - tensors, autograd, training
- Architecture (08-13): Neural architectures - data loading, CNNs, transformers
- Optimization (14-19): Production optimization - profiling, quantization, benchmarking
- Capstone (20): Torch Olympics Competition
See Complete Course Structure for detailed module descriptions.
### Essential Commands Reference
The most important commands you'll use daily:
```bash
# Export module to package
tito module complete MODULE_NUMBER

# Check module status
tito module status

# System information
tito system info

# Community features
tito community login
tito benchmark baseline
```
See TITO CLI Reference for complete command documentation.
### Notebook Platform Options
**For Viewing & Exploration (Online):**
- Jupyter/MyBinder: Click "Launch Binder" on any notebook page
- Google Colab: Click "Launch Colab" for GPU access
- Marimo: Click "Open in Marimo" for reactive notebooks
**For Full Development (Local - Required):**
To actually build the framework, you need local installation:
- Full `tinytorch.*` package available
- Run milestone validation scripts
- Use `tito` CLI commands
- Execute complete experiments
- Export modules to package
**Note for NBGrader assignments:** Submit `.ipynb` files to preserve grading metadata.
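The reason `.ipynb` matters: notebooks are JSON, and NBGrader stores its grading information in per-cell metadata that plain-script exports discard. The snippet below inspects a constructed example of such a cell (the `grade_id` and point value are made up for illustration):

```python
import json

# A constructed example of how an NBGrader cell is stored inside an .ipynb file
cell_json = """
{
  "cell_type": "code",
  "source": ["def forward(x):\\n", "    ..."],
  "metadata": {
    "nbgrader": {"grade": true, "grade_id": "tensor-add", "points": 5}
  }
}
"""
cell = json.loads(cell_json)
meta = cell["metadata"]["nbgrader"]
print(f"graded cell '{meta['grade_id']}' worth {meta['points']} points")
```

Exporting a notebook to a plain `.py` file keeps only the source lines, so this metadata (and with it, auto-grading) is lost.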
### What's Next?
- Continue Building: Follow the module progression (01 → 02 → 03...)
- Run Milestones: Prove your implementations work with real ML history
- Build Intuition: Understand ML systems from first principles
The goal isn't just to write code; it's to understand how modern ML frameworks work by building one yourself.
## For Instructors & TAs: Classroom Support Coming Soon
**📢 Stay Tuned: NBGrader Integration In Development**
We're building comprehensive classroom support with NBGrader integration that will enable:
- Automated Assignment Generation - Create student assignments from TinyTorch modules with solutions removed
- Auto-Grading - Automatically grade student implementations against test suites
- Manual Review Interface - Grade ML Systems Thinking questions through a browser-based interface
- Progress Tracking - Monitor student progress across all 20 modules
- Grade Export - Export grades to CSV for LMS integration
### What's Planned

**Course Structure:**
- 14-16 week curriculum covering all 20 modules
- Progressive difficulty from tensors to transformers to optimization
- Historical milestones that validate student implementations
- Capstone competition (Torch Olympics)
**Grading Components:**
- 70% Auto-Graded: Code implementation correctness via NBGrader test cells
- 30% Manual Review: ML Systems Thinking questions (3 per module)
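Under that split, a module's final grade would be a simple weighted sum; the scores below are illustrative:

```python
# 70% auto-graded code, 30% manually reviewed systems-thinking questions
AUTO_WEIGHT, MANUAL_WEIGHT = 0.70, 0.30

auto_score = 90.0    # e.g. NBGrader test-cell score, out of 100
manual_score = 80.0  # e.g. average over the 3 review questions, out of 100

final = AUTO_WEIGHT * auto_score + MANUAL_WEIGHT * manual_score
print(f"final grade: {final:.1f}")  # 63.0 + 24.0 = 87.0
```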
**Assessment Tools:**

- `tito grade generate` - Create instructor versions with solutions
- `tito grade release` - Generate student versions (solutions removed)
- `tito grade collect` - Collect student submissions
- `tito grade autograde` - Run automatic grading
- `tito grade feedback` - Generate student feedback
- `tito grade export` - Export grades to CSV
### Current Status
TinyTorch is fully functional for self-paced learning today. Students can:
- Work through all 20 modules independently
- Run milestone validation scripts
- Use the complete `tito` CLI for module management
- Join the community and run benchmarks
For classroom deployment, we recommend waiting for the official NBGrader integration announcement (target: Summer/Fall 2026).
### Interested in Early Adoption?
If you're considering using TinyTorch in your course before full classroom support is ready:
- Review the curriculum - Browse modules and milestones to assess fit
- Test the workflow - Complete a few modules yourself to understand the student experience
- Contact us - Join the discussion to share your use case
We're actively seeking instructor feedback to shape the classroom experience.
### Stay Updated
- GitHub Discussions - Join the conversation
- Course Structure Overview - Full curriculum details
- Module Documentation - Technical module specifications
## Additional Resources
- 📚 Course Documentation
- 🛠 CLI & Tools
Ready to start building? Choose your path above and dive into the most comprehensive ML systems course available!