Commit Graph

8 Commits

Author SHA1 Message Date
Vijay Janapa Reddi
3265eabe79 Add Profiler demo to Module 17 Quantization
- Added Section 5.5: Measuring Quantization Savings with Profiler
- Demonstrates FP32 to INT8 memory reduction (4x savings)
- Shows actual memory measurements before/after quantization
- Uses Profiler from Module 15 for measurements
- Educates students on production workflow: measure → compress → validate → deploy
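
A minimal sketch of the kind of before/after measurement this demo performs. It uses plain NumPy byte counts rather than the actual Module 15 Profiler API, and the layer shapes are illustrative assumptions:

```python
import numpy as np

# Hypothetical layer shapes standing in for the demo model's FP32 weights.
weights_fp32 = [
    np.random.randn(784, 256).astype(np.float32),
    np.random.randn(256, 10).astype(np.float32),
]

fp32_bytes = sum(w.nbytes for w in weights_fp32)

# After INT8 quantization each element takes 1 byte instead of 4
# (plus a negligible per-tensor scale factor).
int8_bytes = sum(np.empty(w.shape, dtype=np.int8).nbytes for w in weights_fp32)

print(f"FP32: {fp32_bytes / 1024:.1f} KiB")
print(f"INT8: {int8_bytes / 1024:.1f} KiB")
print(f"Savings: {fp32_bytes / int8_bytes:.1f}x")  # ~4x
```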
2025-11-06 20:38:44 -05:00
Vijay Janapa Reddi
43a293c23d Streamline Module 17 Quantization by removing analysis functions
- Removed Section: Quantization Quality + analyze_quantization_error (84 lines)
- Removed Section 5: Systems Analysis + analyze_quantization_performance (226 lines)
- Removed Section: Quantization Error Visualization (122 lines)
- Removed analyze_quantization_strategies function (108 lines)
- Total reduction: 540 lines (24%)
- Renumbered remaining sections
- Fixed markdown cell formatting

Result: 2295 → 1703 lines
Focus: Core quantization (quantize/dequantize/QuantizedLinear/quantize_model)
2025-11-06 17:48:47 -05:00
Vijay Janapa Reddi
6259f91be9 Module 17: Export QuantizationComplete for INT8 quantization
- Added QuantizationComplete class with quantize/dequantize methods
- Exported quantization functions to tinytorch/optimization/quantization.py
- Provides 4x memory reduction with minimal accuracy loss
- Removed pedagogical QuantizedLinear export to avoid conflicts
- Added proper imports to export block
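
A hedged sketch of symmetric INT8 quantize/dequantize in the spirit of this export. The function names follow the commit message, but the actual TinyTorch signatures and scheme (e.g. zero points, per-channel scales) may differ:

```python
import numpy as np

def quantize(x: np.ndarray, num_bits: int = 8):
    """Symmetric per-tensor quantization of FP32 values to signed integers."""
    qmax = 2 ** (num_bits - 1) - 1                 # 127 for INT8
    scale = float(np.max(np.abs(x))) / qmax
    if scale == 0.0:                               # all-zero tensor edge case
        scale = 1.0
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map INT8 codes back to approximate FP32 values."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize(w)
w_hat = dequantize(q, scale)
print(w.nbytes // q.nbytes)                        # 4 -- the 4x memory reduction
print(float(np.max(np.abs(w - w_hat))))            # small reconstruction error
```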
2025-11-06 15:50:48 -05:00
Vijay Janapa Reddi
dfc8577cad feat: Add PyTorch-style __call__ methods and update milestone syntax
This commit implements comprehensive PyTorch compatibility improvements:

**Core Changes:**
- Add __call__ methods to all neural network components in modules 11-18
- Enable PyTorch-standard calling syntax: model(input) vs model.forward(input)
- Maintain backward compatibility - forward() methods still work

**Modules Updated:**
- Module 11 (Embeddings): Embedding, PositionalEncoding, EmbeddingLayer
- Module 12 (Attention): MultiHeadAttention
- Module 13 (Transformers): LayerNorm, MLP, TransformerBlock, GPT
- Module 17 (Quantization): QuantizedLinear
- Module 18 (Compression): Linear, Sequential classes

**Milestone Updates:**
- Replace all .forward() calls with direct () calls in milestone examples
- Update transformer milestones (vaswani_shakespeare, tinystories_gpt, tinytalks_gpt)
- Update CNN and MLP milestone examples
- Update MILESTONE_TEMPLATE.py for consistency

**Educational Benefits:**
- Students now write identical syntax to production PyTorch code
- Seamless transition from TinyTorch to PyTorch development
- Industry-standard calling conventions from day one

**Implementation Pattern:**
```python
def __call__(self, *args, **kwargs):
    """Allows the component to be called like a function."""
    return self.forward(*args, **kwargs)
```
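
A self-contained usage sketch of what this pattern enables; the `Linear` here is a toy stand-in, not the actual TinyTorch class:

```python
import numpy as np

class Linear:
    """Toy stand-in for any component that gained __call__."""
    def __init__(self, in_features, out_features):
        self.weight = np.zeros((in_features, out_features))

    def forward(self, x):
        return x @ self.weight

    def __call__(self, *args, **kwargs):
        """Allows the component to be called like a function."""
        return self.forward(*args, **kwargs)

layer = Linear(4, 2)
x = np.ones((1, 4))
assert np.allclose(layer(x), layer.forward(x))  # PyTorch-style call and explicit forward() agree
```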

All changes maintain full backward compatibility while enabling PyTorch-style usage.
2025-10-28 13:46:05 -04:00
Vijay Janapa Reddi
0e997e4a10 refactor: Standardize imports across modules 10-17 to match 01-09
Enforce consistent import pattern across all modules:
- Direct imports from tinytorch.core.* (no fallbacks)
- Remove all sys.path.append manipulations
- Remove try/except import fallbacks
- Remove mock/dummy class fallbacks

Fixed modules:
- Module 10 (tokenization): Removed try/except fallback
- Module 12 (attention): Removed sys.path.append for tensor/layers
- Module 15 (profiling): Removed sys.path + mock Tensor/Linear/Conv2d
- Module 16 (acceleration): Removed hardcoded path + importlib + mock Tensor
- Module 17 (quantization): Removed sys.path + disabled fallback block

All modules now follow the same pattern as modules 01-09:
  from tinytorch.core.tensor import Tensor
  from tinytorch.core.layers import Linear
  # etc.

No development fallbacks - assume tinytorch package is installed.
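
Illustrative before/after of the standardized pattern. The fallback shown in the "before" comments is a hypothetical example of the removed style, not the exact code from any one module:

```python
# Before (development-time fallback, now removed):
#   import sys, os
#   sys.path.append(os.path.join(os.path.dirname(__file__), "..", "01_tensor"))
#   try:
#       from tensor_dev import Tensor      # hypothetical dev module name
#   except ImportError:
#       class Tensor:                      # mock fallback
#           pass

# After (direct imports, assuming the tinytorch package is installed):
from tinytorch.core.tensor import Tensor
from tinytorch.core.layers import Linear
```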
2025-10-24 17:51:10 -04:00
Vijay Janapa Reddi
8be87d0add Fix nbdev export system across all 20 modules
PROBLEM:
- nbdev requires a #| export directive on EACH cell that should be exported when using # %% markers
- Cell markers inside class definitions split classes across multiple cells
- Only partial classes were being exported to tinytorch package
- Missing matmul, arithmetic operations, and activation classes in exports

SOLUTION:
1. Removed # %% cell markers INSIDE class definitions (kept classes as single units)
2. Added #| export to imports cell at top of each module
3. Added #| export before each exportable class definition in all 20 modules
4. Added __call__ method to Sigmoid for functional usage
5. Fixed numpy import (moved to module level from __init__)
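
A sketch of the resulting cell layout. The export target and class body are illustrative assumptions, but the #| export placement and the __main__ test guard follow the pattern this commit describes:

```python
# %%
#| default_exp core.layers

# %%
#| export
import numpy as np

# %%
#| export
class Linear:
    """Whole class kept in one cell so nbdev exports it as a single unit."""
    def __init__(self, in_features, out_features):
        self.weight = np.random.randn(in_features, out_features) * 0.01
        self.bias = np.zeros(out_features)

    def forward(self, x):
        return x @ self.weight + self.bias

    def __call__(self, *args, **kwargs):
        return self.forward(*args, **kwargs)

# %%
# Test cell: no #| export, so it runs in the notebook but is not exported on import.
if __name__ == "__main__":
    layer = Linear(4, 2)
    print(layer(np.ones((1, 4))).shape)  # (1, 2)
```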

MODULES FIXED:
- 01_tensor: Tensor class with all operations (matmul, arithmetic, shape ops)
- 02_activations: Sigmoid, ReLU, Tanh, GELU, Softmax classes
- 03_layers: Linear, Dropout classes
- 04_losses: MSELoss, CrossEntropyLoss, BinaryCrossEntropyLoss classes
- 05_autograd: Function, AddBackward, MulBackward, MatmulBackward, SumBackward
- 06_optimizers: Optimizer, SGD, Adam, AdamW classes
- 07_training: CosineSchedule, Trainer classes
- 08_dataloader: Dataset, TensorDataset, DataLoader classes
- 09_spatial: Conv2d, MaxPool2d, AvgPool2d, SimpleCNN classes
- 10-20: All exportable classes in remaining modules

TESTING:
- Test functions use 'if __name__ == "__main__"' guards
- Tests run in notebooks but NOT on import
- Rosenblatt Perceptron milestone working perfectly

RESULT:
- All 20 modules export correctly
- Perceptron (1957) milestone functional
- Clean separation: development (modules/source) vs package (tinytorch)
2025-09-30 11:21:04 -04:00
Vijay Janapa Reddi
9cfc7673cf feat: update advanced modules (09-20) with latest improvements
- Update spatial, tokenization, embeddings, attention modules
- Update transformers, kv-caching, profiling modules
- Update acceleration, quantization, compression modules
- Update benchmarking and capstone modules
- Align with current TinyTorch standards and patterns
2025-09-30 09:45:00 -04:00
Vijay Janapa Reddi
e1a9541c4b Clean up module imports: convert tinytorch.core to sys.path style
- Remove circular imports where modules imported from themselves
- Convert tinytorch.core imports to sys.path relative imports
- Only import dependencies that are actually used in each module
- Preserve documentation imports in markdown cells
- Use consistent relative path pattern across all modules
- Remove hardcoded absolute paths in favor of relative imports

Affected modules: 02_activations, 03_layers, 04_losses, 06_optimizers,
07_training, 09_spatial, 12_attention, 17_quantization
2025-09-30 08:58:58 -04:00