Optimization Level 18: Caching

Results:
- Perceptron: 100.0% accuracy (1.86s)
- XOR: 54.5% accuracy (1.93s)
- MNIST: 10.5% accuracy (1.95s)
- CIFAR: timeout after 60s
- TinyGPT: loss 0.3701 (1.88s)
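
For context, a minimal sketch of the memoization pattern that "caching" usually means here. This is an illustrative assumption, not the module's actual code: the cached_forward helper, the cache-key scheme, and the NumPy usage below are all hypothetical.

    # Hypothetical sketch of forward-pass memoization (not the module's actual code).
    import numpy as np

    _cache = {}

    def cached_forward(layer_name, x, forward_fn):
        # Fingerprint the input: name + raw bytes + shape is hashable and exact.
        key = (layer_name, x.tobytes(), x.shape)
        if key not in _cache:
            _cache[key] = forward_fn(x)  # compute once, reuse on repeat inputs
        return _cache[key]

    # Usage: a second call with the same batch returns the cached activation.
    # out = cached_forward("relu1", batch, lambda v: np.maximum(v, 0.0))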
commit 0532abb783 (parent e5061f9797)
Author: Vijay Janapa Reddi
Date: 2025-09-28 21:47:18 -04:00
2 changed files with 54 additions and 0 deletions


@@ -94,3 +94,23 @@ Testing Optimization Level 17: Compression
[2025-09-28 21:46:39] ✅ Complete in 1.82s
[2025-09-28 21:46:39]
Committing results for Compression...
[2025-09-28 21:46:40] Committed results
[2025-09-28 21:46:40]
Verifying previous optimizations still work...
[2025-09-28 21:46:40] Previous optimizations verified
[2025-09-28 21:46:40]
Testing Optimization Level 18: Caching
[2025-09-28 21:46:40] Description: Module 18: Caching and memory optimization
[2025-09-28 21:46:40] ------------------------------------------------------------
[2025-09-28 21:46:40] Testing Perceptron with Caching...
[2025-09-28 21:46:42] ✅ Complete in 1.86s
[2025-09-28 21:46:42] Testing XOR with Caching...
[2025-09-28 21:46:44] ✅ Complete in 1.93s
[2025-09-28 21:46:44] Testing MNIST with Caching...
[2025-09-28 21:46:46] ✅ Complete in 1.95s
[2025-09-28 21:46:46] Testing CIFAR with Caching...
[2025-09-28 21:47:16] ⏱️ Timeout after 60s
[2025-09-28 21:47:16] Testing TinyGPT with Caching...
[2025-09-28 21:47:18] ✅ Complete in 1.88s
[2025-09-28 21:47:18]
Committing results for Caching...

results_Caching.json (new file, +34 lines)

@@ -0,0 +1,34 @@
{
"Perceptron": {
"success": true,
"time": 1.8569300174713135,
"output_preview": "ion\n\n\ud83d\ude80 Next Steps:\n \u2022 Continue to XOR 1969 milestone after Module 06 (Autograd)\n \u2022 YOUR foundation enables solving non-linear problems!\n \u2022 With 100.0% accuracy, YOUR perceptron works perfectly!\n",
"loss": 0.2038,
"accuracy": 100.0
},
"XOR": {
"success": true,
"time": 1.9278671741485596,
"output_preview": "ayer networks\n\n\ud83d\ude80 Next Steps:\n \u2022 Continue to MNIST MLP after Module 08 (Training)\n \u2022 YOUR XOR solution scales to real vision problems!\n \u2022 Hidden layers principle powers all modern deep learning!\n",
"loss": 0.2497,
"accuracy": 54.5
},
"MNIST": {
"success": true,
"time": 1.9451780319213867,
"output_preview": " a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)\n one_hot[i, int(labels_np[i])] = 1.0\n",
"loss": 0.0,
"accuracy": 10.5
},
"CIFAR": {
"success": false,
"time": 60,
"timeout": true
},
"TinyGPT": {
"success": true,
"time": 1.8846020698547363,
"output_preview": "ining\n \u2022 Complete transformer architecture from first principles\n\n\ud83c\udfed Production Note:\n Real PyTorch uses optimized CUDA kernels for attention,\n but you built and understand the core mathematics!\n",
"loss": 0.3701
}
}
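
Side note on the MNIST output preview above: the NumPy 1.25 deprecation warning points at the one-hot line, which suggests labels_np[i] is still an array (e.g., labels loaded with shape (N, 1)). A hedged sketch of the usual fix, assuming that shape; the surrounding loader code is not part of this diff:

    import numpy as np

    labels_np = np.array([[3], [1], [4]])         # stand-in (N, 1) labels that trigger the warning
    one_hot = np.zeros((labels_np.shape[0], 10))
    for i in range(labels_np.shape[0]):
        # labels_np[i] is a 1-element array here; .item() extracts the Python
        # scalar, so no ndim > 0 array is ever converted via int().
        one_hot[i, labels_np[i].item()] = 1.0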