Update demo tapes and fix reset command

Demo improvements:
- Add hidden setup phase to demo tapes for clean state
- New benchmark and logo demo tapes
- Improved build-test-ship, milestone, and share-journey demos
- All demos now use Hide/Show for cleaner presentation

CLI fix:
- Add default=None to module reset command argument
- Prevents argparse error when no module specified
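The behavior being fixed can be sketched with a minimal argparse setup (argument name and help text mirror the diff below; note that argparse already yields `None` for an omitted `nargs="?"` positional, so the explicit `default=None` mainly documents intent or satisfies code that inspects the parsed namespace):

```python
import argparse

# Minimal mirror of the reset command's argument setup (names from the diff)
parser = argparse.ArgumentParser(prog="tito module reset")
parser.add_argument(
    "module_number", nargs="?", default=None,
    help="Module number to reset (01, 02, etc.)",
)

# With no module specified, module_number is None rather than an error
args = parser.parse_args([])
print(args.module_number)  # None

# With a module specified, the value comes through as a string
args = parser.parse_args(["02"])
print(args.module_number)  # 02
```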

Cleanup:
- Remove outdated tinytorch/core/activations.py binary

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Vijay Janapa Reddi
Date: 2025-11-29 13:28:19 -05:00
parent b5bd08763c
commit c776519284
8 changed files with 290 additions and 326 deletions


@@ -330,11 +330,11 @@ validate() {
get_demo_name() {
    case $1 in
        00) echo "00-test" ;;
        01) echo "01-zero-to-ready" ;;
        00) echo "00-welcome" ;;
        02) echo "02-build-test-ship" ;;
        03) echo "03-milestone-unlocked" ;;
        04) echo "04-share-journey" ;;
        05) echo "05-logo" ;;
        *) echo "" ;;
    esac
}
@@ -457,13 +457,13 @@ interactive() {
2)
echo -e "${BOLD}Which demo to generate?${NC}"
echo ""
echo " 00) Quick test (5 seconds)"
echo " 01) Zero to Ready"
echo " 00) Welcome (Quick test)"
echo " 02) Build, Test, Ship"
echo " 03) Milestone Unlocked"
echo " 04) Share Journey"
echo " 05) TinyTorch Logo & Story"
echo ""
read -p "Choose demo [00-04]: " demo_num
read -p "Choose demo [00,02-05]: " demo_num
echo ""
cd "$(dirname "$0")/../../.."
@@ -480,13 +480,13 @@ interactive() {
# Step 2: Generate
echo -e "${BOLD}Which demo to generate?${NC}"
echo ""
echo " 00) Quick test (5 seconds)"
echo " 01) Zero to Ready"
echo " 00) Welcome (Quick test)"
echo " 02) Build, Test, Ship"
echo " 03) Milestone Unlocked"
echo " 04) Share Journey"
echo " 05) TinyTorch Logo & Story"
echo ""
read -p "Choose demo [00-04]: " demo_num
read -p "Choose demo [00,02-05]: " demo_num
echo ""
cd "$(dirname "$0")/../../.."


@@ -25,23 +25,45 @@ Set Shell bash
Env PS1 "@profvjreddi 🔥 "
Set TypingSpeed 100ms
# Opening: Show what this demo is about
# ==============================================================================
# SETUP PHASE (HIDDEN): Like a unit test, set up the environment cleanly first
# ==============================================================================
Hide
# Navigate to project and activate environment
Type "cd /Users/VJ/GitHub/TinyTorch"
Enter
Sleep 1s
Type "source activate.sh"
Enter
Sleep 3s
# Reset all modules to pristine state using git
Type "git checkout -- modules/"
Enter
Sleep 2s
# Clean any cached/built files
Type "rm -rf tinytorch/__pycache__ tinytorch/*.pyc"
Enter
Sleep 1s
# Complete module 01 as prerequisite (hidden from user)
Type "tito module complete 01"
Enter
Sleep 5s
Show
# ==============================================================================
# DEMO BEGINS (VISIBLE): Now show the actual workflow
# ==============================================================================
# Opening comment explaining what we'll demonstrate
Type "# Build → Test → Ship 🔨"
Sleep 2s
Enter
Sleep 500ms
# Show everything - users see the full setup
Type "cd /Users/VJ/GitHub/TinyTorch"
Sleep 400ms
Enter
Sleep 1s
Type "source activate.sh"
Sleep 400ms
Enter
Sleep 3s
# Start module 02
Type "tito module start 02"
Sleep 400ms


@@ -25,25 +25,30 @@ Set Shell bash
Env PS1 "@profvjreddi 🔥 "
Set TypingSpeed 100ms
# Opening: Show what this demo is about
Type "# Milestone: Recreate ML History 🏆"
Sleep 2s
Enter
Sleep 500ms
# ==============================================================================
# SETUP PHASE (HIDDEN): Like a unit test, set up the environment cleanly first
# ==============================================================================
Hide
# Show cd and activate, then fast-forward module completions (hidden)
# Navigate to project and activate environment
Type "cd /Users/VJ/GitHub/TinyTorch"
Sleep 400ms
Enter
Sleep 1s
Type "source activate.sh"
Sleep 400ms
Enter
Sleep 3s
# Fast-forward: Complete modules 01-06 to unlock module 07 (hidden for speed)
Hide
# Reset all modules to pristine state using git
Type "git checkout -- modules/"
Enter
Sleep 2s
# Clean any cached/built files
Type "rm -rf tinytorch/__pycache__ tinytorch/*.pyc"
Enter
Sleep 1s
# Complete modules 01-06 as prerequisites to unlock module 07 (hidden for speed)
Type "tito module complete 01"
Enter
Sleep 5s
@@ -62,7 +67,17 @@ Sleep 5s
Type "tito module complete 06"
Enter
Sleep 5s
Show
# ==============================================================================
# DEMO BEGINS (VISIBLE): Now show the actual milestone workflow
# ==============================================================================
# Opening comment explaining what we'll demonstrate
Type "# Milestone: Recreate ML History 🏆"
Sleep 2s
Enter
Sleep 500ms
# Show where we are in the journey (modules 01-06 completed)
Type "tito module status"


@@ -25,23 +25,51 @@ Set Shell bash
Env PS1 "@profvjreddi 🔥 "
Set TypingSpeed 100ms
# Opening: Show what this demo is about
# ==============================================================================
# SETUP PHASE (HIDDEN): Like a unit test, set up the environment cleanly first
# ==============================================================================
Hide
# Navigate to project and activate environment
Type "cd /Users/VJ/GitHub/TinyTorch"
Enter
Sleep 1s
Type "source activate.sh"
Enter
Sleep 3s
# Reset all modules to pristine state using git
Type "git checkout -- modules/"
Enter
Sleep 2s
# Clean any cached/built files
Type "rm -rf tinytorch/__pycache__ tinytorch/*.pyc"
Enter
Sleep 1s
# Complete a few modules so we have progress to show
Type "tito module complete 01"
Enter
Sleep 5s
Type "tito module complete 02"
Enter
Sleep 5s
Type "tito module complete 03"
Enter
Sleep 5s
Show
# ==============================================================================
# DEMO BEGINS (VISIBLE): Now show the community/progress features
# ==============================================================================
# Opening comment explaining what we'll demonstrate
Type "# Share Your Journey 🌍"
Sleep 2s
Enter
Sleep 500ms
# Show everything - users see the full setup
Type "cd /Users/VJ/GitHub/TinyTorch"
Sleep 400ms
Enter
Sleep 1s
Type "source activate.sh"
Sleep 400ms
Enter
Sleep 3s
# Check progress
Type "tito module status"
Sleep 400ms


@@ -0,0 +1,81 @@
# VHS Tape: ⚡ Benchmark Your Build - Performance Validation
# Purpose: Show how to benchmark TinyTorch implementations
# Duration: 25-30 seconds
Output "gifs/05-benchmark.gif"
# Window bar for realistic terminal look
Set WindowBar Colorful
# Carousel-optimized dimensions (16:9 aspect ratio)
Set Width 1280
Set Height 720
Set FontSize 18
Set FontFamily "JetBrains Mono, Monaco, Menlo, monospace"
Set Theme { "name": "TinyTorch", "black": "#1E1E2E", "red": "#F38BA8", "green": "#A6E3A1", "yellow": "#F9E2AF", "blue": "#89B4FA", "magenta": "#CBA6F7", "cyan": "#94E2D5", "white": "#CDD6F4", "brightBlack": "#585B70", "brightRed": "#F38BA8", "brightGreen": "#A6E3A1", "brightYellow": "#F9E2AF", "brightBlue": "#89B4FA", "brightMagenta": "#CBA6F7", "brightCyan": "#94E2D5", "brightWhite": "#CDD6F4", "background": "#1E1E2E", "foreground": "#CDD6F4", "selection": "#585B70", "cursor": "#F5E0DC" }
Set Padding 30
Set Framerate 30
Set Margin 20
Set MarginFill "#1E1E2E"
Set BorderRadius 10
Set LoopOffset 10%
# Set shell with custom prompt for reliable waiting
Set Shell bash
Env PS1 "@profvjreddi 🔥 "
Set TypingSpeed 100ms
# ==============================================================================
# SETUP PHASE (HIDDEN): Like a unit test, set up the environment cleanly first
# ==============================================================================
Hide
# Navigate to project and activate environment
Type "cd /Users/VJ/GitHub/TinyTorch"
Enter
Sleep 1s
Type "source activate.sh"
Enter
Sleep 3s
# Reset all modules to pristine state using git
Type "git checkout -- modules/"
Enter
Sleep 2s
# Clean any cached/built files
Type "rm -rf tinytorch/__pycache__ tinytorch/*.pyc"
Enter
Sleep 1s
# Complete modules 01-03 to have some implementations to benchmark
Type "tito module complete 01"
Enter
Sleep 5s
Type "tito module complete 02"
Enter
Sleep 5s
Type "tito module complete 03"
Enter
Sleep 5s
Show
# ==============================================================================
# DEMO BEGINS (VISIBLE): Now show the benchmarking workflow
# ==============================================================================
# Opening comment explaining what we'll demonstrate
Type "# Benchmark Your Build ⚡"
Sleep 2s
Enter
Sleep 500ms
# Run the baseline benchmark
Type "tito benchmark baseline"
Sleep 400ms
Enter
Sleep 15s # Wait for benchmark to complete
# Final message
Type "# Fast code. Built by you. 🚀"
Sleep 3s

docs/_static/demos/tapes/05-logo.tape (new file)

@@ -0,0 +1,100 @@
# VHS Tape: 🔥 The TinyTorch Story - Philosophy & Vision
# Purpose: Show the beautiful story behind TinyTorch with tito logo
# Duration: 45-50 seconds
Output "gifs/05-logo.gif"
# Window bar for realistic terminal look
Set WindowBar Colorful
# Carousel-optimized dimensions (16:9 aspect ratio)
Set Width 1280
Set Height 720
Set FontSize 18
Set FontFamily "JetBrains Mono, Monaco, Menlo, monospace"
Set Theme { "name": "TinyTorch", "black": "#1E1E2E", "red": "#F38BA8", "green": "#A6E3A1", "yellow": "#F9E2AF", "blue": "#89B4FA", "magenta": "#CBA6F7", "cyan": "#94E2D5", "white": "#CDD6F4", "brightBlack": "#585B70", "brightRed": "#F38BA8", "brightGreen": "#A6E3A1", "brightYellow": "#F9E2AF", "brightBlue": "#89B4FA", "brightMagenta": "#CBA6F7", "brightCyan": "#94E2D5", "brightWhite": "#CDD6F4", "background": "#1E1E2E", "foreground": "#CDD6F4", "selection": "#585B70", "cursor": "#F5E0DC" }
Set Padding 30
Set Framerate 30
Set Margin 20
Set MarginFill "#1E1E2E"
Set BorderRadius 10
Set LoopOffset 10%
# Set shell with custom prompt for reliable waiting
Set Shell bash
Env PS1 "@profvjreddi 🔥 "
Set TypingSpeed 100ms
# ==============================================================================
# SETUP PHASE (HIDDEN): Like a unit test, set up the environment cleanly first
# ==============================================================================
Hide
# Navigate to project and activate environment
Type "cd /Users/VJ/GitHub/TinyTorch"
Enter
Sleep 1s
Type "source activate.sh"
Enter
Sleep 3s
Show
# ==============================================================================
# DEMO BEGINS (VISIBLE): Now show the philosophy and story
# ==============================================================================
# Opening comment explaining what we'll demonstrate
Type "# The Story Behind TinyTorch 🔥"
Sleep 2s
Enter
Sleep 500ms
# Show the beautiful logo and story
Type "tito logo"
Sleep 400ms
Enter
Sleep 5s # Let the logo appear
# Scroll down to show the flame symbolism
Type ""
Enter
Sleep 3s
# Continue scrolling through the philosophy
Type ""
Enter
Sleep 3s
# Scroll to show "Why Tiny?"
Type ""
Enter
Sleep 3s
# Scroll to show "Why Torch?"
Type ""
Enter
Sleep 3s
# Scroll to show the hidden network
Type ""
Enter
Sleep 3s
# Scroll to show the philosophy section
Type ""
Enter
Sleep 3s
# Scroll to show Professor Reddi's message
Type ""
Enter
Sleep 4s
# Scroll to the final message
Type ""
Enter
Sleep 3s
# Final comment
Type "# Don't just import it. Build it. 🔥"
Sleep 3s


@@ -1,282 +0,0 @@
# ╔═══════════════════════════════════════════════════════════════════════════════╗
# ║ 🚨 CRITICAL WARNING 🚨 ║
# ║ AUTOGENERATED! DO NOT EDIT! ║
# ║ ║
# ║ This file is AUTOMATICALLY GENERATED from source modules. ║
# ║ ANY CHANGES MADE HERE WILL BE LOST when modules are re-exported! ║
# ║ ║
# ║ ✅ TO EDIT: src/02_activations/02_activations.py ║
# ║ ✅ TO EXPORT: Run 'tito module complete <module_name>' ║
# ║ ║
# ║ 🛡️ STUDENT PROTECTION: This file contains optimized implementations. ║
# ║ Editing it directly may break module functionality and training. ║
# ║ ║
# ║ 🎓 LEARNING TIP: Work in src/ (developers) or modules/ (learners) ║
# ║ The tinytorch/ directory is generated code - edit source files instead! ║
# ╚═══════════════════════════════════════════════════════════════════════════════╝
# %% auto 0
__all__ = ['TOLERANCE', 'Sigmoid', 'ReLU', 'Tanh', 'GELU', 'Softmax']

# %% ../../modules/02_activations/02_activations.ipynb 3
import numpy as np
from typing import Optional

# Import from TinyTorch package (previous modules must be completed and exported)
from .tensor import Tensor

# Constants for numerical comparisons
TOLERANCE = 1e-10  # Small tolerance for floating-point comparisons in tests
# %% ../../modules/02_activations/02_activations.ipynb 8
from .tensor import Tensor

class Sigmoid:
    """
    Sigmoid activation: σ(x) = 1/(1 + e^(-x))

    Maps any real number to (0, 1) range.
    Perfect for probabilities and binary classification.
    """

    def forward(self, x: Tensor) -> Tensor:
        """
        Apply sigmoid activation element-wise.

        TODO: Implement sigmoid function

        APPROACH:
        1. Apply sigmoid formula: 1 / (1 + exp(-x))
        2. Use np.exp for exponential
        3. Return result wrapped in new Tensor

        EXAMPLE:
        >>> sigmoid = Sigmoid()
        >>> x = Tensor([-2, 0, 2])
        >>> result = sigmoid(x)
        >>> print(result.data)
        [0.119, 0.5, 0.881]  # All values between 0 and 1

        HINT: Use np.exp(-x.data) for numerical stability
        """
        ### BEGIN SOLUTION
        # Apply sigmoid: 1 / (1 + exp(-x))
        # Clip extreme values to prevent overflow (sigmoid(-500) ≈ 0, sigmoid(500) ≈ 1)
        # Clipping at ±500 ensures exp() stays within float64 range
        z = np.clip(x.data, -500, 500)
        # Use numerically stable sigmoid
        # For positive values: 1 / (1 + exp(-x))
        # For negative values: exp(x) / (1 + exp(x)) = 1 / (1 + exp(-x)) after clipping
        result_data = np.zeros_like(z)
        # Positive values (including zero)
        pos_mask = z >= 0
        result_data[pos_mask] = 1.0 / (1.0 + np.exp(-z[pos_mask]))
        # Negative values
        neg_mask = z < 0
        exp_z = np.exp(z[neg_mask])
        result_data[neg_mask] = exp_z / (1.0 + exp_z)
        return Tensor(result_data)
        ### END SOLUTION

    def __call__(self, x: Tensor) -> Tensor:
        """Allows the activation to be called like a function."""
        return self.forward(x)

    def backward(self, grad: Tensor) -> Tensor:
        """Compute gradient (implemented in Module 05)."""
        pass  # Will implement backward pass in Module 05
# %% ../../modules/02_activations/02_activations.ipynb 12
class ReLU:
    """
    ReLU activation: f(x) = max(0, x)

    Sets negative values to zero, keeps positive values unchanged.
    Most popular activation for hidden layers.
    """

    def forward(self, x: Tensor) -> Tensor:
        """
        Apply ReLU activation element-wise.

        TODO: Implement ReLU function

        APPROACH:
        1. Use np.maximum(0, x.data) for element-wise max with zero
        2. Return result wrapped in new Tensor

        EXAMPLE:
        >>> relu = ReLU()
        >>> x = Tensor([-2, -1, 0, 1, 2])
        >>> result = relu(x)
        >>> print(result.data)
        [0, 0, 0, 1, 2]  # Negative values become 0, positive unchanged

        HINT: np.maximum handles element-wise maximum automatically
        """
        ### BEGIN SOLUTION
        # Apply ReLU: max(0, x)
        result = np.maximum(0, x.data)
        return Tensor(result)
        ### END SOLUTION

    def __call__(self, x: Tensor) -> Tensor:
        """Allows the activation to be called like a function."""
        return self.forward(x)

    def backward(self, grad: Tensor) -> Tensor:
        """Compute gradient (implemented in Module 05)."""
        pass  # Will implement backward pass in Module 05
# %% ../../modules/02_activations/02_activations.ipynb 16
class Tanh:
    """
    Tanh activation: f(x) = (e^x - e^(-x))/(e^x + e^(-x))

    Maps any real number to (-1, 1) range.
    Zero-centered alternative to sigmoid.
    """

    def forward(self, x: Tensor) -> Tensor:
        """
        Apply tanh activation element-wise.

        TODO: Implement tanh function

        APPROACH:
        1. Use np.tanh(x.data) for hyperbolic tangent
        2. Return result wrapped in new Tensor

        EXAMPLE:
        >>> tanh = Tanh()
        >>> x = Tensor([-2, 0, 2])
        >>> result = tanh(x)
        >>> print(result.data)
        [-0.964, 0.0, 0.964]  # Range (-1, 1), symmetric around 0

        HINT: NumPy provides np.tanh function
        """
        ### BEGIN SOLUTION
        # Apply tanh using NumPy
        result = np.tanh(x.data)
        return Tensor(result)
        ### END SOLUTION

    def __call__(self, x: Tensor) -> Tensor:
        """Allows the activation to be called like a function."""
        return self.forward(x)

    def backward(self, grad: Tensor) -> Tensor:
        """Compute gradient (implemented in Module 05)."""
        pass  # Will implement backward pass in Module 05
# %% ../../modules/02_activations/02_activations.ipynb 20
class GELU:
    """
    GELU activation: f(x) = x * Φ(x) ≈ x * Sigmoid(1.702 * x)

    Smooth approximation to ReLU, used in modern transformers.
    Where Φ(x) is the cumulative distribution function of standard normal.
    """

    def forward(self, x: Tensor) -> Tensor:
        """
        Apply GELU activation element-wise.

        TODO: Implement GELU approximation

        APPROACH:
        1. Use approximation: x * sigmoid(1.702 * x)
        2. Compute sigmoid part: 1 / (1 + exp(-1.702 * x))
        3. Multiply by x element-wise
        4. Return result wrapped in new Tensor

        EXAMPLE:
        >>> gelu = GELU()
        >>> x = Tensor([-1, 0, 1])
        >>> result = gelu(x)
        >>> print(result.data)
        [-0.159, 0.0, 0.841]  # Smooth, like ReLU but differentiable everywhere

        HINT: 1.702 is the constant that makes sigmoid(1.702 * x) closely approximate Φ(x)
        """
        ### BEGIN SOLUTION
        # GELU approximation: x * sigmoid(1.702 * x)
        # First compute sigmoid part
        sigmoid_part = 1.0 / (1.0 + np.exp(-1.702 * x.data))
        # Then multiply by x
        result = x.data * sigmoid_part
        return Tensor(result)
        ### END SOLUTION

    def __call__(self, x: Tensor) -> Tensor:
        """Allows the activation to be called like a function."""
        return self.forward(x)

    def backward(self, grad: Tensor) -> Tensor:
        """Compute gradient (implemented in Module 05)."""
        pass  # Will implement backward pass in Module 05
# %% ../../modules/02_activations/02_activations.ipynb 24
class Softmax:
    """
    Softmax activation: f(x_i) = e^(x_i) / Σ(e^(x_j))

    Converts any vector to a probability distribution.
    Sum of all outputs equals 1.0.
    """

    def forward(self, x: Tensor, dim: int = -1) -> Tensor:
        """
        Apply softmax activation along specified dimension.

        TODO: Implement numerically stable softmax

        APPROACH:
        1. Subtract max for numerical stability: x - max(x)
        2. Compute exponentials: exp(x - max(x))
        3. Sum along dimension: sum(exp_values)
        4. Divide: exp_values / sum
        5. Return result wrapped in new Tensor

        EXAMPLE:
        >>> softmax = Softmax()
        >>> x = Tensor([1, 2, 3])
        >>> result = softmax(x)
        >>> print(result.data)
        [0.090, 0.245, 0.665]  # Sums to 1.0, larger inputs get higher probability

        HINTS:
        - Use np.max(x.data, axis=dim, keepdims=True) for max
        - Use np.sum(exp_values, axis=dim, keepdims=True) for sum
        - The max subtraction prevents overflow in exponentials
        """
        ### BEGIN SOLUTION
        # Numerical stability: subtract max to prevent overflow
        # Use Tensor operations to preserve gradient flow!
        x_max_data = np.max(x.data, axis=dim, keepdims=True)
        x_max = Tensor(x_max_data, requires_grad=False)  # max is not differentiable in this context
        x_shifted = x - x_max  # Tensor subtraction!
        # Compute exponentials (NumPy operation, but wrapped in Tensor)
        exp_values = Tensor(np.exp(x_shifted.data), requires_grad=x_shifted.requires_grad)
        # Sum along dimension (Tensor operation)
        exp_sum_data = np.sum(exp_values.data, axis=dim, keepdims=True)
        exp_sum = Tensor(exp_sum_data, requires_grad=exp_values.requires_grad)
        # Normalize to get probabilities (Tensor division!)
        result = exp_values / exp_sum
        return result
        ### END SOLUTION

    def __call__(self, x: Tensor, dim: int = -1) -> Tensor:
        """Allows the activation to be called like a function."""
        return self.forward(x, dim)

    def backward(self, grad: Tensor) -> Tensor:
        """Compute gradient (implemented in Module 05)."""
        pass  # Will implement backward pass in Module 05
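Although this generated copy is being deleted, the numerical-stability tricks it documents (clipping plus the split positive/negative form for sigmoid, max subtraction for softmax) can be sanity-checked with plain NumPy, independent of the `Tensor` class. The function names below are illustrative, not part of the TinyTorch API:

```python
import numpy as np

def stable_sigmoid(x):
    """Sigmoid via the split form used above: no overflow for large |x|."""
    z = np.clip(x, -500, 500)
    out = np.empty_like(z, dtype=float)
    pos = z >= 0
    # Positive (and zero) inputs: 1 / (1 + exp(-z)) keeps exp's argument <= 0
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
    # Negative inputs: exp(z) / (1 + exp(z)) keeps exp's argument < 0
    ez = np.exp(z[~pos])
    out[~pos] = ez / (1.0 + ez)
    return out

def stable_softmax(x, axis=-1):
    """Softmax with max subtraction: exp never sees a large positive argument."""
    shifted = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(shifted)
    return e / np.sum(e, axis=axis, keepdims=True)

x = np.array([-1000.0, 0.0, 1000.0])
print(stable_sigmoid(x))        # finite values, no overflow warnings
print(stable_softmax(x).sum())  # 1.0
```

A naive `1 / (1 + np.exp(-x))` emits overflow warnings for the same inputs, which is exactly what the split form and the max subtraction avoid.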


@@ -40,7 +40,7 @@ class ModuleResetCommand(BaseCommand):
def add_arguments(self, parser: ArgumentParser) -> None:
"""Add reset command arguments."""
parser.add_argument(
"module_number", nargs="?", help="Module number to reset (01, 02, etc.)"
"module_number", nargs="?", default=None, help="Module number to reset (01, 02, etc.)"
)
parser.add_argument(
"--all",