Optimization Level 19: Benchmarking

Results:
- Perceptron:  (1.87s) 100.0%
- XOR:  (1.92s) 54.5%
- MNIST:  (2.04s) 7.5%
- CIFAR:  (60.00s) timeout
- TinyGPT:  (1.88s)
Vijay Janapa Reddi
2025-09-28 21:47:56 -04:00
parent 0532abb783
commit 95ba293dd7
2 changed files with 54 additions and 0 deletions


@@ -114,3 +114,23 @@ Testing Optimization Level 18: Caching
[2025-09-28 21:47:18] ✅ Complete in 1.88s
[2025-09-28 21:47:18]
Committing results for Caching...
[2025-09-28 21:47:18] Committed results
[2025-09-28 21:47:18]
Verifying previous optimizations still work...
[2025-09-28 21:47:18] Previous optimizations verified
[2025-09-28 21:47:18]
Testing Optimization Level 19: Benchmarking
[2025-09-28 21:47:18] Description: Module 19: Advanced benchmarking suite
[2025-09-28 21:47:18] ------------------------------------------------------------
[2025-09-28 21:47:18] Testing Perceptron with Benchmarking...
[2025-09-28 21:47:20] ✅ Complete in 1.87s
[2025-09-28 21:47:20] Testing XOR with Benchmarking...
[2025-09-28 21:47:22] ✅ Complete in 1.92s
[2025-09-28 21:47:22] Testing MNIST with Benchmarking...
[2025-09-28 21:47:24] ✅ Complete in 2.04s
[2025-09-28 21:47:24] Testing CIFAR with Benchmarking...
[2025-09-28 21:47:54] ⏱️ Timeout after 60s
[2025-09-28 21:47:54] Testing TinyGPT with Benchmarking...
[2025-09-28 21:47:56] ✅ Complete in 1.88s
[2025-09-28 21:47:56]
Committing results for Benchmarking...

results_Benchmarking.json (new file, 34 lines)

@@ -0,0 +1,34 @@
{
  "Perceptron": {
    "success": true,
    "time": 1.8683466911315918,
    "output_preview": "ion\n\n\ud83d\ude80 Next Steps:\n \u2022 Continue to XOR 1969 milestone after Module 06 (Autograd)\n \u2022 YOUR foundation enables solving non-linear problems!\n \u2022 With 100.0% accuracy, YOUR perceptron works perfectly!\n",
    "loss": 0.2038,
    "accuracy": 100.0
  },
  "XOR": {
    "success": true,
    "time": 1.9171321392059326,
    "output_preview": "ayer networks\n\n\ud83d\ude80 Next Steps:\n \u2022 Continue to MNIST MLP after Module 08 (Training)\n \u2022 YOUR XOR solution scales to real vision problems!\n \u2022 Hidden layers principle powers all modern deep learning!\n",
    "loss": 0.2497,
    "accuracy": 54.5
  },
  "MNIST": {
    "success": true,
    "time": 2.0394182205200195,
    "output_preview": " a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)\n one_hot[i, int(labels_np[i])] = 1.0\n",
    "loss": 0.0,
    "accuracy": 7.5
  },
  "CIFAR": {
    "success": false,
    "time": 60,
    "timeout": true
  },
  "TinyGPT": {
    "success": true,
    "time": 1.876978874206543,
    "output_preview": "ining\n \u2022 Complete transformer architecture from first principles\n\n\ud83c\udfed Production Note:\n Real PyTorch uses optimized CUDA kernels for attention,\n but you built and understand the core mathematics!\n",
    "loss": 0.3195
  }
}
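
For downstream analysis, a file like results_Benchmarking.json can be tallied with a short script. This is a minimal sketch: the `summarize` helper and the inline sample data are illustrative, not part of this repo, and the sample mirrors only the fields the JSON above actually contains.

```python
import json

def summarize(results: dict) -> dict:
    """Tally passes, timeouts, and mean runtime across benchmark entries."""
    passed = [name for name, r in results.items() if r.get("success")]
    timed_out = [name for name, r in results.items() if r.get("timeout")]
    mean_time = sum(r["time"] for r in results.values()) / len(results)
    return {"passed": passed, "timed_out": timed_out, "mean_time": mean_time}

# Inline sample mirroring the structure of results_Benchmarking.json
sample = {
    "Perceptron": {"success": True, "time": 1.87, "accuracy": 100.0},
    "CIFAR": {"success": False, "time": 60, "timeout": True},
}
print(json.dumps(summarize(sample), indent=2))
```

In practice one would replace `sample` with `json.load(open("results_Benchmarking.json"))`.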