Vijay Janapa Reddi
796bedbec1
fix: update memoization test assertions for new error message format
Updated test assertions to use case-insensitive matching for the
new 3-part educational error messages.
2026-01-25 11:44:20 -05:00
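The case-insensitive assertion pattern described above can be sketched as follows. The helper, the cache function, and the message wording are all illustrative, not taken from the repository:

```python
import re

# Hypothetical 3-part educational error message (wording is an assumption).
def cache_lookup(key, cache):
    if key not in cache:
        raise KeyError(
            "What happened: key not found in cache. "
            "Why it matters: the memoized result cannot be reused. "
            "How to fix: store the value before looking it up."
        )
    return cache[key]

def assert_raises_matching(fn, pattern):
    """Assert fn() raises and its message matches pattern, ignoring case."""
    try:
        fn()
    except Exception as exc:
        assert re.search(pattern, str(exc), re.IGNORECASE), str(exc)
        return
    raise AssertionError("expected an exception")

# Case-insensitive matching survives capitalization tweaks to the message.
assert_raises_matching(lambda: cache_lookup("q0", {}), r"not found")
```

Matching on a stable phrase with `re.IGNORECASE`, rather than the full string, keeps the tests from breaking on every copy edit to the message.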
Vijay Janapa Reddi
999fd13447
refactor(tests): reorganize test folders and fix misplaced files
Folder consolidation:
- Merge system/ into integration/ (removed duplicate folder)
- Remove performance/ (only had framework, no tests)
File relocations:
- Move test_dense_layer.py, test_dense_integration.py from 04_losses/ to 03_layers/
- Move test_network_capability.py from 04_losses/ to integration/
- Move test_kv_cache_integration.py from 14_profiling/ to 18_memoization/
- Move system/ tests (forward_passes, gradients, shapes, etc.) to integration/
Removed duplicates:
- system/test_gradient_flow_overall.py (duplicate of integration version)
- system/test_integration.py (redundant with integration/ folder)
- system/test_milestones.py (duplicate of milestones/ tests)
Final structure: 26 folders, 100 test files
2026-01-24 12:44:40 -05:00
Vijay Janapa Reddi
389989ece7
refactor(tests): clean up test folder and fix gradient flow issues
Test Cleanup (113 files, -22,000 lines):
- Remove 21 redundant run_all_tests.py files
- Remove checkpoints/ folder (22 obsolete checkpoint files)
- Remove progressive/, debugging/, diagnostic/ folders
- Remove duplicate integration tests and examples
- Remove orphaned dev artifacts and generated outputs
- Consolidate test_gradient_flow_overall.py into system/
Documentation Cleanup (4 files removed):
- Remove duplicate HOW_TO_USE.md, WORKFLOW.md, SYSTEM_DESIGN.md
- Trim environment/README.md from 334 to 86 lines
- Update capstone/README.md removing outdated bug references
Test Fixes:
- Add requires_grad=True to layer parameters in gradient tests
- Fix PositionalEncoding argument order in test_shapes.py
- Adjust performance thresholds to realistic expectations
- Fix gradient clipping to handle memoryview correctly
- Update zero_grad assertions to accept None or zeros
2026-01-24 12:22:37 -05:00
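Two of the test fixes above (explicit `requires_grad=True` and `zero_grad` assertions that accept either None or zeros) can be illustrated with toy stand-ins. These classes are a sketch for illustration; tinytorch's real `Parameter` and `zero_grad` differ:

```python
import numpy as np

# Toy stand-in; tinytorch's real Parameter class differs.
class Parameter:
    def __init__(self, data, requires_grad=True):
        self.data = np.asarray(data, dtype=float)
        self.requires_grad = requires_grad  # fix: set explicitly in gradient tests
        self.grad = None

def zero_grad(params, set_to_none=True):
    # Both conventions exist in the wild: reset to None or to all-zeros.
    for p in params:
        p.grad = None if set_to_none else np.zeros_like(p.data)

def grad_is_cleared(p):
    """Assertion helper that accepts either convention: None or all-zeros."""
    return p.grad is None or not np.any(p.grad)

w = Parameter([1.0, 2.0])
w.grad = np.array([0.5, -0.5])
zero_grad([w], set_to_none=True)
assert grad_is_cleared(w)

w.grad = np.array([0.5, -0.5])
zero_grad([w], set_to_none=False)
assert grad_is_cleared(w)
```

Asserting on "cleared" rather than on one specific representation is what lets the same test pass against both `None`-based and zero-fill implementations.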
Vijay Janapa Reddi
42face28fb
refactor(tests): remove all pytest.skip patterns for honest test results
- Move imports to module level in all *_core.py test files (16 files)
- Remove try/except/skip patterns from integration tests
- Remove @pytest.mark.skip decorators from gradient flow tests
- Convert environment validation skips to warnings for optional checks
- Change milestone tests from skip to fail when scripts missing
Tests now either pass or fail - no silent skipping that hides issues.
This ensures the test suite provides accurate feedback about what works.
2026-01-23 23:06:23 -05:00
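The skip-removal policy above splits into two cases, sketched below with hypothetical helper names: required artifacts (milestone scripts) now fail hard when missing, while optional environment checks downgrade to warnings instead of silently skipping:

```python
import importlib.util
import os
import warnings

def require_script(path):
    """Milestone scripts are required: a missing file now fails, not skips."""
    assert os.path.exists(path), f"milestone script missing: {path}"

def optional_dep(module_name):
    """Optional environment checks warn instead of silently skipping."""
    if importlib.util.find_spec(module_name) is None:
        warnings.warn(f"{module_name} not installed; optional check not run")
        return False
    return True

# A stdlib module is always importable, so this returns True without warning.
assert optional_dep("json")
```

The effect is exactly the stated goal: every required check surfaces as pass or fail in the test report, and nothing disappears into a skip count.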
Vijay Janapa Reddi
acb5142fd7
fix(tests): resolve import issues and test naming collisions
- Fix incorrect imports (tinytorch.text/nn/data → tinytorch.core.*)
- Fix MeanSquaredError → MSELoss imports
- Fix learning_rate= → lr= for optimizer arguments
- Rename test_progressive_integration.py files to unique names
- Add missing PerformanceTestSuite classes to performance framework
- Add pytest config to tinytorch/pyproject.toml to override coverage
This resolves the pytest collection errors caused by module name conflicts.
2026-01-23 17:59:43 -05:00
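The coverage-override fix above uses pytest's standard `[tool.pytest.ini_options]` table in `pyproject.toml`. A hypothetical fragment (the exact values are assumptions, not the repository's actual config):

```toml
# Hypothetical pyproject.toml fragment. A local addopts value replaces any
# inherited --cov flags, so plain `pytest` runs skip coverage collection.
[tool.pytest.ini_options]
addopts = ""
testpaths = ["tests"]
```

Because pytest reads configuration from the rootdir's `pyproject.toml`, placing this table in `tinytorch/pyproject.toml` shadows flags that would otherwise be injected from a parent config.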
Vijay Janapa Reddi
44e5822fbc
fix(tests): correct MODULE_NUMBER and MODULE_NAME in all run_all_tests.py
Fixed copy-paste errors where MODULE metadata was incorrect:
- 01_tensor: 02 → 01
- 02_activations: 03 → 02
- 03_layers: 04 → 03
- 04_losses: Dense/Networks → Losses
- 05_dataloader: 09/Autograd → 05/DataLoader
- 06_autograd: XX → 06/Autograd
- 07_optimizers: 06/Spatial/CNN → 07/Optimizers
- 08_training: XX → 08/Training
- 09_convolutions: XX → 09/Convolutions
- 10_tokenization: XX → 10/Tokenization
- 11_embeddings: XX → 11/Embeddings
- 12_attention: XX → 12/Attention
- 13_transformers: XX → 13/Transformers
- 14_profiling: KV Caching → Profiling
- 15_quantization: Module 16 → Module 15
- 18_memoization: XX → 18/Memoization
2026-01-23 13:17:15 -05:00
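Copy-paste errors like the ones fixed above can be prevented by deriving the metadata from the folder name instead of hard-coding it. A minimal sketch, assuming folders named like `07_optimizers` (the helper name is hypothetical):

```python
from pathlib import Path

def module_metadata_from_folder(folder):
    """Derive (MODULE_NUMBER, MODULE_NAME) from a folder like '07_optimizers',
    so run_all_tests.py metadata cannot drift from its location."""
    stem = Path(folder).name
    number, _, name = stem.partition("_")
    return number, name.replace("_", " ").title()

assert module_metadata_from_folder("tests/07_optimizers") == ("07", "Optimizers")
```

Simple title-casing would still need overrides for camel-case names like DataLoader, but it removes the whole class of `09/Autograd`-style mismatches.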
Vijay Janapa Reddi
a1863e80a7
fix(tests): complete progressive disclosure audit and fix all modules
Comprehensive audit and fix of all module integration tests:
MOVED (wrong location):
- test_attention_pipeline_integration.py: 09_convolutions → 12_attention
- test_tensor_attention_integration.py: 09_convolutions → 12_attention
REWRITTEN (violated progressive disclosure):
- Module 11: Was testing compression (16) and attention (12) from embeddings
- Module 12: Was testing kernels (17) instead of attention
- Module 13: Was testing benchmarking (19) instead of transformers
- Module 14: Was testing mlops and benchmarking from profiling
- Module 18: Was importing modules 19+
All 20 modules now follow progressive disclosure:
- Each module only imports from modules 01 to itself
- No future module dependencies
- Proper regression tests for prior modules
Validation: 20/20 modules pass
2026-01-15 14:45:14 -05:00
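The progressive-disclosure rule ("each module only imports from modules 01 to itself") can be validated mechanically. A simplified sketch: it scans test source for references to numbered module folders (the `NN_name` naming follows the repo layout, but treating any textual reference as a dependency is an assumption of this sketch):

```python
import re

MODULE_REF = re.compile(r"\b(\d{2})_[a-z_]+\b")

def future_module_refs(module_number, source):
    """Return references to numbered folders above `module_number` found in
    `source` (e.g. module 11 referencing '12_attention' is a violation)."""
    return sorted(
        {m.group(0) for m in MODULE_REF.finditer(source)
         if int(m.group(1)) > module_number}
    )

src = "sys.path.append('../12_attention')  # pull in attention tests"
assert future_module_refs(11, src) == ["12_attention"]
assert future_module_refs(12, src) == []
```

Running such a check per module is one way to get the "20/20 modules pass" style of validation reported above.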
Vijay Janapa Reddi
2dbd652832
refactor: swap Acceleration (17) and Memoization (18) directories
Reorder optimization tier modules:
- Module 17: Acceleration (general runtime - vectorization, fusion)
- Module 18: Memoization (domain-specific - KV-cache for transformers)
Rationale: General techniques before specialized applications
2025-12-19 19:30:36 -05:00