feat: update advanced modules (09-20) with latest improvements

- Update spatial, tokenization, embeddings, attention modules
- Update transformers, kv-caching, profiling modules
- Update acceleration, quantization, compression modules
- Update benchmarking and capstone modules
- Align with current TinyTorch standards and patterns
Author: Vijay Janapa Reddi
Date: 2025-09-30 09:45:00 -04:00
Parent: 56285026ff
Commit: ea2d0809d6
12 changed files with 46 additions and 82 deletions


@@ -47,14 +47,12 @@ Let's make models 4× smaller!
 """
 ## 📦 Where This Code Lives in the Final Package
-**Learning Side:** You work in modules/17_quantization/quantization_dev.py
-**Building Side:** Code exports to tinytorch.optimization.quantization
+**Learning Side:** You work in `modules/17_quantization/quantization_dev.py`
+**Building Side:** Code exports to `tinytorch.optimization.quantization`
 ```python
-# Final package structure:
+# How to use this module:
 from tinytorch.optimization.quantization import quantize_int8, QuantizedLinear, quantize_model
 from tinytorch.core.tensor import Tensor
 from tinytorch.core.layers import Linear
 ```
 **Why this matters:**
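The hunk's "4× smaller" claim comes from storing weights as int8 (1 byte) instead of float32 (4 bytes). The sketch below illustrates the idea with plain NumPy symmetric quantization; the function names and details here are illustrative assumptions, not the actual `tinytorch.optimization.quantization` implementation.

```python
import numpy as np

def quantize_int8_sketch(x: np.ndarray):
    """Map float32 values to int8 with one shared scale (symmetric scheme).

    Hypothetical helper for illustration only, not the TinyTorch API.
    """
    scale = float(np.abs(x).max()) / 127.0 if x.size else 1.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize_sketch(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from int8 codes."""
    return q.astype(np.float32) * scale

weights = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8_sketch(weights)

# int8 uses 1 byte per value vs 4 bytes for float32: a 4x size reduction
print(weights.nbytes // q.nbytes)  # → 4
```

With a shared scale and rounding, the per-value reconstruction error is bounded by half the quantization step (`scale / 2`), which is why int8 typically preserves accuracy well for weight storage.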