- Fixed FLOPs calculation to handle models with a .layers attribute (not just Sequential)
- Fixed quantization compression ratio to calculate the theoretical INT8 size (1 byte per element)
- Fixed pruning accuracy delta sign to correctly show the +/- direction
- Added missing export directives for Tensor and numpy imports in the acceleration module

Results now correctly show:

- FLOPs: 4,736 (was incorrectly showing 64)
- Quantization: 4.0x compression (was incorrectly showing 1.0x)
- Pruning delta: correct +/- sign based on actual accuracy change

Illustrative sketches of each fix follow below.
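A minimal sketch of the .layers handling, assuming each layer exposes a `weights` array; the `DenseLayer` and `MLP` classes and the layer shapes here are hypothetical stand-ins, not TinyTorch's actual API:

```python
import numpy as np

# Hypothetical stand-ins for TinyTorch layers/models (not the real API).
class DenseLayer:
    def __init__(self, in_dim, out_dim):
        self.weights = np.random.randn(in_dim, out_dim)

class MLP:
    # Exposes a .layers attribute instead of being a Sequential container.
    def __init__(self):
        self.layers = [DenseLayer(8, 64), DenseLayer(64, 10)]

def count_flops(model):
    # The fix: walk model.layers when present instead of assuming
    # the model itself is an iterable Sequential.
    layers = getattr(model, "layers", None) or [model]
    flops = 0
    for layer in layers:
        w = getattr(layer, "weights", None)
        if w is not None:
            in_dim, out_dim = w.shape
            flops += 2 * in_dim * out_dim  # one multiply + one add per weight
    return flops

print(count_flops(MLP()))  # 2*(8*64) + 2*(64*10) = 2304 for these toy shapes
```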
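A sketch of the corrected compression-ratio arithmetic, assuming FP32 weights (4 bytes per element) compared against a theoretical INT8 size of 1 byte per element, which is where the 4.0x figure comes from; the function name is hypothetical:

```python
import numpy as np

def int8_compression_ratio(weights: np.ndarray) -> float:
    # The fix: compare FP32 storage against the *theoretical* INT8 size
    # (1 byte per element), rather than the array's current byte size,
    # which is unchanged before real quantization and so reports 1.0x.
    fp32_bytes = weights.size * 4  # float32: 4 bytes per element
    int8_bytes = weights.size * 1  # INT8:    1 byte  per element
    return fp32_bytes / int8_bytes

w = np.random.randn(128, 64).astype(np.float32)
print(int8_compression_ratio(w))  # 4.0
```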
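A sketch of the corrected delta sign, assuming accuracy is measured before and after pruning; computing after minus before makes a drop print as negative and an improvement as positive (the accuracy values below are made up):

```python
def pruning_accuracy_delta(acc_before: float, acc_after: float) -> float:
    # The fix: delta = after - before, so a drop in accuracy is negative
    # and an improvement is positive (the sign was previously reversed).
    return acc_after - acc_before

print(f"{pruning_accuracy_delta(0.92, 0.89):+.2%}")  # -3.00%
print(f"{pruning_accuracy_delta(0.89, 0.92):+.2%}")  # +3.00%
```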
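A sketch of the added export directives, assuming TinyTorch uses nbdev-style `#| export` markers to pull imports into the built module; the exact module path for Tensor is an assumption:

```python
#| export
import numpy as np                        # export directive is assumed
from tinytorch.core.tensor import Tensor  # module path is an assumption
```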