Vijay Janapa Reddi
d30257577c
refactor(tinytorch): migrate from legacy np.random to default_rng(7)
Replace np.random.randn/rand/seed with np.random.default_rng(7) across
all 93 source modules, tests, and milestones for reproducible, isolated
random state.
2026-04-03 17:57:51 -04:00
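The migration in this commit swaps NumPy's global random state for a local `Generator`. A minimal before/after sketch (illustrative; only the seed 7 and the replaced function names come from the commit):

```python
import numpy as np

# Legacy pattern: global, shared state that any other module can perturb.
# np.random.seed(7)
# x = np.random.randn(3)

# Generator pattern: a local, isolated random state per module/test.
rng = np.random.default_rng(7)
x = rng.standard_normal(3)  # replaces np.random.randn(3)
u = rng.random(3)           # replaces np.random.rand(3)
```

Because each module and test constructs its own generator, results no longer depend on how many draws other code made first, which is what makes the suite reproducible.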
Vijay Janapa Reddi
dc3267027a
fix(tinytorch): fix remaining test failures (847/847 passing)
- Fix x.T -> x.transpose() in profiling progressive test
- Fix KVCache constructor calls in memoization progressive test
(3-arg -> 5-arg: batch_size, max_seq_len, num_layers, num_heads, head_dim)
2026-03-24 08:49:47 -04:00
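The `x.T -> x.transpose()` change suggests the project's Tensor exposes transpose as a method rather than a `.T` property. The 5-arg `KVCache` constructor might look like the hypothetical sketch below; the real TinyTorch class's internal layout and methods are my assumptions, only the five parameter names come from the commit:

```python
import numpy as np

# Hypothetical KVCache matching the 5-arg constructor named in the commit.
class KVCache:
    def __init__(self, batch_size, max_seq_len, num_layers, num_heads, head_dim):
        # One K slot and one V slot per layer (axis 1 below).
        shape = (num_layers, 2, batch_size, num_heads, max_seq_len, head_dim)
        self.buf = np.zeros(shape, dtype=np.float32)
        self.seq_len = 0

    def append(self, layer, k, v):
        # k, v: [batch_size, num_heads, head_dim] for the newest token.
        self.buf[layer, 0, :, :, self.seq_len] = k
        self.buf[layer, 1, :, :, self.seq_len] = v

cache = KVCache(batch_size=1, max_seq_len=8, num_layers=2, num_heads=4, head_dim=16)
cache.append(0, np.ones((1, 4, 16)), np.ones((1, 4, 16)))
```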
Vijay Janapa Reddi
d3919434ab
fix(tinytorch): batch 7 test infrastructure cleanup
Module numbering fixes (4 progressive test files):
- M01: TestModule02TensorCore -> TestModule01TensorCore, remove
nonexistent 01_setup references
- M02: TestModule03ActivationsCore -> TestModule02ActivationsCore
- M03: TestModule04LayersCore -> TestModule03LayersCore,
test_module_04_complete -> test_module_03_complete
- M07: TestModule06OptimizersCore -> TestModule07OptimizersCore
DataLoader core tests:
- Replace raw Python list with TensorDataset in 3 tests
- Add meaningful assertions to test_shuffling (was vacuous)
Other test fixes:
- M18: test_tinygpt_integration module_name 16_tinygpt -> 18_memoization
- M06: Add NOTE to test_autograd_core.py explaining Variable
tests are placeholders (class not implemented)
2026-03-24 08:49:47 -04:00
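A shuffle test is vacuous if it only checks length. A meaningful version asserts both that the order changed and that nothing was lost or duplicated; this is an illustrative pattern, not the project's exact `test_shuffling`:

```python
import numpy as np

rng = np.random.default_rng(7)
data = list(range(100))
shuffled = list(data)
rng.shuffle(shuffled)

# Meaningful assertion 1: the permutation actually permuted something.
assert shuffled != data
# Meaningful assertion 2: it is still a permutation (same multiset).
assert sorted(shuffled) == data
```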
Vijay Janapa Reddi
6fdfe203cb
fix(tinytorch): batch 2 critical fixes from module audit
M06 Autograd:
- SumBackward now stores axis/keepdims and expands grad_output
correctly for axis-reduced sums (was producing wrong gradients
for bias accumulation and axis-specific reductions)
M13 Transformers:
- Convert 8 TransformerBlock calls from positional to keyword args
(ff_dim=) across test files for M13, M14, M18 — prevents silent
mlp_ratio interpretation creating 64x larger hidden layers
- Fix parameter count: 4*embed_dim^2 -> 12*embed_dim^2, 11.4M -> 29.7M
M18 Memoization:
- Add missing 2x factor in KV cache formula (K and V are two tensors)
- Fix GPT-2 37MB -> 72MB, GPT-3 4.7GB -> 18GB
- Fix speedup formula: sum(i^2) -> sum(i), 170x -> ~50x
M01 Tensor:
- Fix analyze_memory_layout transpose text (was still saying
"changes memory layout" contradicting the earlier fix)
2026-03-24 08:49:46 -04:00
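The SumBackward fix concerns restoring gradient shape for axis-reduced sums: since every summed element contributes with weight 1, each must receive the same upstream gradient. A minimal sketch of the corrected behavior (the class here is an illustrative stand-in, not TinyTorch's implementation):

```python
import numpy as np

class SumBackward:
    """Sketch: backward for y = x.sum(axis, keepdims)."""
    def __init__(self, input_shape, axis=None, keepdims=False):
        # Store axis/keepdims (the commit's fix) so backward can undo the reduction.
        self.input_shape = input_shape
        self.axis = axis
        self.keepdims = keepdims

    def backward(self, grad_output):
        grad = np.asarray(grad_output)
        if self.axis is not None and not self.keepdims:
            # Re-insert the reduced axis so broadcasting lines up.
            grad = np.expand_dims(grad, self.axis)
        # Expand grad_output across the summed axis: each input element
        # contributed with weight 1, so each gets the same gradient.
        return np.broadcast_to(grad, self.input_shape)

# Bias-accumulation case from the commit: db = dY.sum(axis=0).
bw = SumBackward(input_shape=(4, 3), axis=0)
g = bw.backward(np.array([1.0, 2.0, 3.0]))
```

Without the stored axis, the old code produced gradients of the wrong shape (or wrongly scaled ones) for exactly this bias case.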
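The corrected cache sizes can be reproduced from the formula once the missing 2x is restored (K and V are two separate tensors per layer). The model shapes below are the standard GPT-2 small and GPT-3 175B attention configurations (my assumption, fp32, batch size 1):

```python
def kv_cache_bytes(num_layers, num_heads, head_dim, seq_len,
                   batch_size=1, bytes_per_el=4):
    # 2x: one K tensor and one V tensor per layer, each of shape
    # [batch_size, num_heads, seq_len, head_dim].
    return (2 * batch_size * num_layers * num_heads
            * head_dim * seq_len * bytes_per_el)

gpt2_mib = kv_cache_bytes(12, 12, 64, 1024) / 2**20   # ~72 MiB
gpt3_gib = kv_cache_bytes(96, 96, 128, 2048) / 2**30  # ~18 GiB
```

Dropping the factor of 2 gives exactly the old 37 MB / ~9 GB-scale underestimates the commit corrects.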
Vijay Janapa Reddi
03f58ad021
fix(tinytorch): first batch of critical fixes from module audit
Test infrastructure:
- Fix wrong import paths in test files for losses, profiling,
memoization, KV cache integration, and capstone modules
- Remove broken backward() test from losses core (stub in M04)
- Fix module labels in test docstrings
Correctness bugs:
- M01: Transpose is a view not a copy; dtype float32 not int64
- M04: MSELoss docstring 0.1467 -> 0.1800
- M08: Move scheduler before batch loop (was one-epoch late)
- M10: encode("abc") returns [1,2,3] not [0,1,2]
- M19: Remove *1000 from demo (values already in ms)
- M20: Import BenchmarkSuite from perf.benchmarking
2026-03-24 08:49:46 -04:00
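The M01 fix ("transpose is a view, not a copy") matches the NumPy semantics the Tensor presumably wraps: transposing only permutes strides, so writes through the transpose mutate the original buffer, and the dtype stays `float32`:

```python
import numpy as np

x = np.arange(6, dtype=np.float32).reshape(2, 3)
t = x.T                        # a view: same buffer, permuted strides

t[0, 1] = 99.0                 # write through the view...
assert x[1, 0] == 99.0         # ...and the original array changes too
assert np.shares_memory(x, t)  # no copy was made
assert x.dtype == np.float32   # dtype preserved, per the M01 fix
```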
Vijay Janapa Reddi
796bedbec1
fix: update memoization test assertions for new error message format
Updated test assertions to use case-insensitive matching for the
new 3-part educational error messages.
2026-01-25 11:44:20 -05:00
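A sketch of the case-insensitive matching described here; the helper name and the error text are illustrative, not the project's:

```python
def assert_error_mentions(exc, phrase):
    # Case-insensitive matching survives wording/casing tweaks in the
    # 3-part educational messages (what happened / why / how to fix).
    assert phrase.lower() in str(exc).lower(), exc

try:
    raise ValueError("KV Cache Full: seq_len exceeds max_seq_len.\n"
                     "Why: the cache was sized too small.\n"
                     "Fix: raise max_seq_len or reset the cache.")
except ValueError as e:
    assert_error_mentions(e, "cache full")
```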
Vijay Janapa Reddi
999fd13447
refactor(tests): reorganize test folders and fix misplaced files
Folder consolidation:
- Merge system/ into integration/ (removed duplicate folder)
- Remove performance/ (only had framework, no tests)
File relocations:
- Move test_dense_layer.py, test_dense_integration.py from 04_losses/ to 03_layers/
- Move test_network_capability.py from 04_losses/ to integration/
- Move test_kv_cache_integration.py from 14_profiling/ to 18_memoization/
- Move system/ tests (forward_passes, gradients, shapes, etc.) to integration/
Removed duplicates:
- system/test_gradient_flow_overall.py (duplicate of integration version)
- system/test_integration.py (redundant with integration/ folder)
- system/test_milestones.py (duplicate of milestones/ tests)
Final structure: 26 folders, 100 test files
2026-01-24 12:44:40 -05:00
Vijay Janapa Reddi
389989ece7
refactor(tests): clean up test folder and fix gradient flow issues
Test Cleanup (113 files, -22,000 lines):
- Remove 21 redundant run_all_tests.py files
- Remove checkpoints/ folder (22 obsolete checkpoint files)
- Remove progressive/, debugging/, diagnostic/ folders
- Remove duplicate integration tests and examples
- Remove orphaned dev artifacts and generated outputs
- Consolidate test_gradient_flow_overall.py into system/
Documentation Cleanup (4 files removed):
- Remove duplicate HOW_TO_USE.md, WORKFLOW.md, SYSTEM_DESIGN.md
- Trim environment/README.md from 334 to 86 lines
- Update capstone/README.md removing outdated bug references
Test Fixes:
- Add requires_grad=True to layer parameters in gradient tests
- Fix PositionalEncoding argument order in test_shapes.py
- Adjust performance thresholds for realistic expectations
- Fix gradient clipping to handle memoryview correctly
- Update zero_grad assertions to accept None or zeros
2026-01-24 12:22:37 -05:00
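The loosened zero_grad assertion can be written as a small helper; the helper itself is mine, but it captures why the test accepts both outcomes: implementations legitimately either drop gradients to `None` or zero them in place.

```python
import numpy as np

def grad_is_cleared(grad):
    # Accept both conventions: gradient dropped entirely, or zeroed in place.
    # np.asarray also copes with memoryview-backed gradients (cf. the
    # gradient-clipping fix in this commit).
    return grad is None or not np.any(np.asarray(grad))

assert grad_is_cleared(None)
assert grad_is_cleared(np.zeros(3))
assert grad_is_cleared(memoryview(np.zeros(3)))
assert not grad_is_cleared(np.array([0.0, 1.0]))
```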
Vijay Janapa Reddi
42face28fb
refactor(tests): remove all pytest.skip patterns for honest test results
- Move imports to module level in all *_core.py test files (16 files)
- Remove try/except/skip patterns from integration tests
- Remove @pytest.mark.skip decorators from gradient flow tests
- Convert environment validation skips to warnings for optional checks
- Change milestone tests from skip to fail when scripts missing
Tests now either pass or fail - no silent skipping that hides issues.
This ensures the test suite provides accurate feedback about what works.
2026-01-23 23:06:23 -05:00
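The "fail when scripts missing" policy for milestones might look like the sketch below; the helper name is assumed, and a plain `AssertionError` stands in for whatever the suite actually raises:

```python
import os

def require_milestone_script(path):
    # A missing script is a hard failure, not a skip: skipping would
    # silently hide exactly the breakage this commit warns about.
    if not os.path.exists(path):
        raise AssertionError(f"milestone script missing: {path}")
    return path
```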
Vijay Janapa Reddi
acb5142fd7
fix(tests): resolve import issues and test naming collisions
- Fix incorrect imports (tinytorch.text/nn/data → tinytorch.core.*)
- Fix MeanSquaredError → MSELoss imports
- Fix learning_rate= → lr= for optimizer arguments
- Rename test_progressive_integration.py files to unique names
- Add missing PerformanceTestSuite classes to performance framework
- Add pytest config to tinytorch/pyproject.toml to override coverage
This resolves the pytest collection errors caused by module name conflicts.
2026-01-23 17:59:43 -05:00
Vijay Janapa Reddi
44e5822fbc
fix(tests): correct MODULE_NUMBER and MODULE_NAME in all run_all_tests.py
Fixed copy-paste errors where MODULE metadata was incorrect:
- 01_tensor: 02 → 01
- 02_activations: 03 → 02
- 03_layers: 04 → 03
- 04_losses: Dense/Networks → Losses
- 05_dataloader: 09/Autograd → 05/DataLoader
- 06_autograd: XX → 06/Autograd
- 07_optimizers: 06/Spatial/CNN → 07/Optimizers
- 08_training: XX → 08/Training
- 09_convolutions: XX → 09/Convolutions
- 10_tokenization: XX → 10/Tokenization
- 11_embeddings: XX → 11/Embeddings
- 12_attention: XX → 12/Attention
- 13_transformers: XX → 13/Transformers
- 14_profiling: KV Caching → Profiling
- 15_quantization: Module 16 → Module 15
- 18_memoization: XX → 18/Memoization
2026-01-23 13:17:15 -05:00
Vijay Janapa Reddi
a1863e80a7
fix(tests): complete progressive disclosure audit and fix all modules
Comprehensive audit and fix of all module integration tests:
MOVED (wrong location):
- test_attention_pipeline_integration.py: 09_convolutions → 12_attention
- test_tensor_attention_integration.py: 09_convolutions → 12_attention
REWRITTEN (violated progressive disclosure):
- Module 11: Was testing compression (16) and attention (12) from embeddings
- Module 12: Was testing kernels (17) instead of attention
- Module 13: Was testing benchmarking (19) instead of transformers
- Module 14: Was testing mlops and benchmarking from profiling
- Module 18: Was importing modules 19+
All 20 modules now follow progressive disclosure:
- Each module only imports from modules 01 to itself
- No future module dependencies
- Proper regression tests for prior modules
Validation: 20/20 modules pass
2026-01-15 14:45:14 -05:00
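The audit rule above ("each module only imports from modules 01 to itself") can be checked mechanically. A sketch, assuming test files reference modules via `NN_name`-style paths (the `modules.16_compression` strings below are illustrative, not real import lines):

```python
import re

def progressive_disclosure_violations(module_number, source):
    # Flag any NN_name reference whose number exceeds the module's own:
    # those are imports from the future, which the audit forbids.
    return sorted({int(m.group(1))
                   for m in re.finditer(r"\b(\d{2})_[a-z_]+", source)
                   if int(m.group(1)) > module_number})

# Module 11 pulling in compression (16) and attention (12): the violation
# pattern the commit describes as REWRITTEN.
src = ("from modules.16_compression import prune\n"
       "from modules.12_attention import MultiHeadAttention")
viol = progressive_disclosure_violations(11, src)
```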
Vijay Janapa Reddi
2dbd652832
refactor: swap Acceleration (17) and Memoization (18) directories
Reorder optimization tier modules:
- Module 17: Acceleration (general runtime - vectorization, fusion)
- Module 18: Memoization (domain-specific - KV-cache for transformers)
Rationale: General techniques before specialized applications
2025-12-19 19:30:36 -05:00