
Getting Started with TinyTorch


You're ahead of the curve. TinyTorch is functional but still being refined. Expect rough edges, incomplete documentation, and things that might change. If you proceed, you're helping us shape this by finding what works and what doesn't.

**Best approach right now:** Browse the code and concepts. For hands-on building, check back when we announce classroom readiness (Summer/Fall 2026).

Questions or feedback? [Join the discussion →](https://github.com/harvard-edge/cs249r_book/discussions/1076)

This guide assumes familiarity with **Python programming** (classes, functions, NumPy basics) and **basic linear algebra** (matrix multiplication).

Welcome to TinyTorch! This comprehensive guide will get you started whether you're a student building ML systems, an instructor setting up a course, or a TA supporting learners.

Choose Your Path

Jump directly to your role-specific guide

For Students: Build Your ML Framework

Quick Setup (2 Minutes)

Get your development environment ready to build ML systems from scratch:

# One-line install (run from a project folder like ~/projects)
curl -sSL tinytorch.ai/install | bash

# Activate and verify
cd tinytorch
source .venv/bin/activate
tito setup

What this does:

  • Checks your system (Python 3.8+, git)
  • Downloads TinyTorch to a tinytorch/ folder
  • Creates an isolated virtual environment
  • Installs all dependencies
  • Verifies installation
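The system check can be sketched in a few lines of Python. This is an illustration only; the real installer is a shell script and may check more than this:

```python
import shutil
import sys

def check_prerequisites() -> bool:
    """Rough sketch of the installer's system check (hypothetical helper)."""
    python_ok = sys.version_info >= (3, 8)    # Python 3.8+ required
    git_ok = shutil.which("git") is not None  # git must be on your PATH
    return python_ok and git_ok
```

If a check like this fails on your machine, install a newer Python or git before running the installer.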

Keeping up to date:

tito update # Check for and install updates (your work is preserved)

Join the Community (Optional)

After setup, join the global TinyTorch community and validate your installation:

# Log in to join the community
tito community login

# Run baseline benchmark to validate setup
tito benchmark baseline

All community data is stored locally in the .tinytorch/ directory. See the Community Guide for the complete feature set.

The TinyTorch Build Cycle

TinyTorch follows a simple three-step workflow that you'll repeat for each module:

:align: center
:caption: "**TinyTorch Build Cycle.** The three-step workflow you repeat for each module: edit in Jupyter, export to the package, and validate with milestone scripts."
graph LR
 A[1. Edit Module<br/>modules/NN_name.ipynb] --> B[2. Export to Package<br/>tito module complete N]
 B --> C[3. Validate with Milestones<br/>Run milestone scripts]
 C --> A

 style A fill:#fffbeb
 style B fill:#f0fdf4
 style C fill:#fef3c7

Step 1: Edit Modules

Work on module notebooks interactively:

# Example: Working on Module 01 (Tensor)
cd modules/01_tensor
jupyter lab 01_tensor.ipynb

Each module is a Jupyter notebook where you'll:

  • Implement the required functionality from scratch
  • Add docstrings and comments
  • Run and test your code inline
  • See immediate feedback
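As a concrete example, an inline check you might run while implementing tensor addition could look like this (illustrative only; the notebook defines the exact API you must match):

```python
import numpy as np

def tensor_add(a, b):
    """Toy element-wise addition, the kind of helper you build in Module 01."""
    return np.asarray(a) + np.asarray(b)

# Run the check inline and get immediate feedback in the notebook cell.
result = tensor_add([1, 2, 3], [4, 5, 6])
assert (result == np.array([5, 7, 9])).all()
print("tensor_add check passed")
```

Small assertions like this give you the tight edit-run-verify loop that makes notebook development productive.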

Step 2: Export to Package

Once your implementation is complete, export it to the main TinyTorch package:

tito module complete MODULE_NUMBER

# Example:
tito module complete 01 # Export Module 01 (Tensor)

After export, your code becomes importable:

from tinytorch.core.tensor import Tensor # YOUR implementation!

Step 3: Validate with Milestones

Run milestone scripts to prove your implementation works:

tito milestone run perceptron  # Uses YOUR Tensor, Activations, Layers

Each milestone validates that your modules work together correctly. Use tito milestone list to see all available milestones and their required modules.

What if validation fails? If a milestone script produces errors:

  1. Read the error message carefully; it usually points to the problem
  2. Run module tests: tito module test 01 to check your implementation
  3. Return to your Jupyter notebook to debug and fix
  4. Re-export with tito module complete 01 and try again

See Milestone System for the complete progression through ML history.

Your First Module (15 Minutes)

Start with Module 01 to build tensor operations, the foundation of all neural networks:

# Step 1: Edit the module
cd modules/01_tensor
jupyter lab 01_tensor.ipynb

# Step 2: Export when ready
tito module complete 01

# Step 3: Validate (run in a Python session)
from tinytorch.core.tensor import Tensor
x = Tensor([1, 2, 3]) # YOUR implementation!

What you'll implement:

  • N-dimensional array creation
  • Mathematical operations (add, multiply, matmul)
  • Shape manipulation (reshape, transpose)
  • Memory layout understanding
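To make the list above concrete, here is a minimal NumPy-backed sketch of such a Tensor. This is an illustration only; Module 01 specifies the actual API, and you implement the operations yourself:

```python
import numpy as np

class Tensor:
    """Minimal illustrative Tensor; Module 01's real API may differ."""

    def __init__(self, data):
        # N-dimensional array creation
        self.data = np.asarray(data, dtype=np.float32)

    @property
    def shape(self):
        return self.data.shape

    def add(self, other):
        # Element-wise addition
        return Tensor(self.data + other.data)

    def matmul(self, other):
        # Matrix multiplication
        return Tensor(self.data @ other.data)

    def reshape(self, *shape):
        # Shape manipulation
        return Tensor(self.data.reshape(shape))

    def transpose(self):
        return Tensor(self.data.T)

x = Tensor([[1, 2], [3, 4]])
print(x.matmul(x.transpose()).shape)  # → (2, 2)
```

Wrapping NumPy keeps the focus on API design and memory layout; later modules replace pieces of this with your own machinery.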

Module Progression

TinyTorch has 20 modules organized in progressive tiers:

| Tier | Modules | Focus | Time Estimate |
|------|---------|-------|---------------|
| Foundation | 01-07 | Core ML infrastructure (tensors, autograd, training) | ~15-20 hours |
| Architecture | 08-13 | Neural architectures (data loading, CNNs, transformers) | ~18-24 hours |
| Optimization | 14-19 | Production optimization (profiling, quantization) | ~18-24 hours |
| Capstone | 20 | Torch Olympics Competition | ~8-10 hours |

Total: ~60-80 hours over 14-18 weeks (4-6 hours/week pace).

See Foundation Tier Overview for detailed module descriptions.

Essential Commands Reference

The most important commands you'll use daily:

# Export module to package
tito module complete MODULE_NUMBER

# Check module status
tito module status

# System information
tito system info

# Community features
tito community login
tito benchmark baseline

See TITO CLI Reference for complete command documentation.

Notebook Platform Options

For Viewing & Exploration (Online):

  • Jupyter/MyBinder: Click "Launch Binder" on any notebook page
  • Google Colab: Click "Launch Colab" for GPU access
  • Marimo: Click "Open in Marimo" for reactive notebooks

For Full Development (Local - Required):

To actually build the framework, you need local installation:

  • Full tinytorch.* package available
  • Run milestone validation scripts
  • Use tito CLI commands
  • Execute complete experiments
  • Export modules to package

Note for NBGrader assignments: Submit .ipynb files to preserve grading metadata.

What's Next?

  1. Continue Building: Follow the module progression (01 → 02 → 03...)
  2. Run Milestones: Prove your implementations work with real ML history
  3. Build Intuition: Understand ML systems from first principles

The goal isn't just to write code; it's to understand how modern ML frameworks work by building one yourself.

For Instructors & TAs: Classroom Support Coming Soon

We're building comprehensive classroom support with NBGrader integration; in the meantime, TinyTorch is fully functional for self-paced learning.

What's Planned:

  • Automated assignment generation with solutions removed
  • Auto-grading against test suites
  • Manual review interface for ML Systems Thinking questions
  • Progress tracking across all 20 modules
  • Grade export to CSV for LMS integration

Current Status: TinyTorch works for self-paced learning today. For classroom deployment, we recommend waiting for the official NBGrader integration (target: Summer/Fall 2026).

Interested in early adoption? Join the discussion to share your use case.

Check back for detailed setup instructions and grading rubrics when classroom support is available.

Ready to start building? Head to the Foundation Tier and begin with Module 01!