mirror of
https://github.com/MLSysBook/TinyTorch.git
synced 2026-03-12 00:23:34 -05:00
Update development files: streamline benchmarking and capstone dev modules
- Clean up benchmarking_dev.py implementation
- Refine capstone_dev.py development workflow
@@ -5,7 +5,7 @@
# extension: .py
# format_name: percent
# format_version: '1.3'
-# jupytext_version: 1.18.1
+# jupytext_version: 1.17.1
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python

@@ -1760,7 +1760,6 @@ def test_module():
    test_unit_benchmark_suite()
    test_unit_tinymlperf()
    test_unit_optimization_comparison()
    test_unit_normalized_scoring()

    print("\nRunning integration scenarios...")

@@ -5,7 +5,7 @@
# extension: .py
# format_name: percent
# format_version: '1.3'
-# jupytext_version: 1.18.1
+# jupytext_version: 1.17.1
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python

@@ -824,191 +824,7 @@ Your competition workflow demonstrates:

You now understand how ML competitions work - from measurement to submission. The benchmarking tools you built in Module 19 and the optimization techniques from Modules 14-18 come together in Module 20's competition workflow.

**What's Next:**
- Build TinyGPT in Milestone 05 (historical achievement)
- Compete in Torch Olympics (Milestone 06) using this workflow
- Use `tito olympics submit` to generate your competition entry!
"""

# %% [markdown]
"""
## 🎯 Module Integration Test

Final comprehensive test validating all components work together correctly.
"""

# %% nbgrader={"grade": true, "grade_id": "test_module", "locked": true, "points": 20}
def test_module():
    """
    Comprehensive test of entire competition module functionality.

    This final test runs before module summary to ensure:
    - OlympicEvent enum works correctly
    - calculate_normalized_scores computes correctly
    - generate_submission creates valid submissions
    - validate_submission checks requirements properly
    - Complete workflow demonstration executes
    """
    print("🧪 RUNNING MODULE INTEGRATION TEST")
    print("=" * 60)

    # Test 1: OlympicEvent enum
    print("🔬 Testing OlympicEvent enum...")
    assert OlympicEvent.LATENCY_SPRINT.value == "latency_sprint"
    assert OlympicEvent.MEMORY_CHALLENGE.value == "memory_challenge"
    assert OlympicEvent.ALL_AROUND.value == "all_around"
    print(" ✅ OlympicEvent enum works")

    # Test 2: Normalized scoring
    print("\n🔬 Testing normalized scoring...")
    baseline = {'latency': 100.0, 'memory': 12.0, 'accuracy': 0.85}
    optimized = {'latency': 40.0, 'memory': 3.0, 'accuracy': 0.83}
    scores = calculate_normalized_scores(baseline, optimized)
    assert abs(scores['speedup'] - 2.5) < 0.01
    assert abs(scores['compression_ratio'] - 4.0) < 0.01
    print(" ✅ Normalized scoring works")

    # Test 3: Submission generation
    print("\n🔬 Testing submission generation...")
    submission = generate_submission(
        baseline_results=baseline,
        optimized_results=optimized,
        event=OlympicEvent.LATENCY_SPRINT,
        athlete_name="TestUser"
    )
    assert submission['event'] == 'latency_sprint'
    assert 'normalized_scores' in submission
    assert 'system_info' in submission
    print(" ✅ Submission generation works")

    # Test 4: Submission validation
    print("\n🔬 Testing submission validation...")
    validation = validate_submission(submission)
    assert validation['valid'] == True
    assert len(validation['checks']) > 0
    print(" ✅ Submission validation works")

    # Test 5: Complete workflow
    print("\n🔬 Testing complete workflow...")
    demonstrate_competition_workflow()
    print(" ✅ Complete workflow works")

    print("\n" + "=" * 60)
    print("🎉 ALL COMPETITION MODULE TESTS PASSED!")
    print("✅ Competition workflow fully functional!")
    print("📊 Ready to generate submissions!")
    print("\nRun: tito module complete 20")

# Call the comprehensive test
test_module()

# %% nbgrader={"grade": false, "grade_id": "main_execution", "solution": false}
if __name__ == "__main__":
    print("🚀 Running TinyTorch Olympics Competition module...")

    # Run the comprehensive test
    test_module()

    print("\n✅ Competition module ready!")
    print("📤 Use generate_submission() to create your competition entry!")

# %% [markdown]
"""
## 🤔 ML Systems Thinking: Competition Workflow Reflection

This capstone teaches the workflow of professional ML competitions. Let's reflect on the systems thinking behind competition participation.

### Question 1: Statistical Confidence
You use Module 19's Benchmark harness, which runs multiple trials and reports confidence intervals.
If baseline latency is 50ms ± 5ms and optimized latency is 25ms ± 3ms, can you confidently claim improvement?

**Answer:** [Yes/No] _______

**Reasoning:** Consider whether the confidence intervals overlap and what that means for statistical significance.
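
The overlap check this question hints at can be sketched in a few lines (a minimal illustration using the numbers above; `intervals_overlap` is a hypothetical helper, not part of TinyTorch's API):

```python
# Hypothetical sketch: do two "mean ± half-width" confidence intervals overlap?
def intervals_overlap(mean_a, half_a, mean_b, half_b):
    # Intervals [a_lo, a_hi] and [b_lo, b_hi] overlap iff a_lo <= b_hi and b_lo <= a_hi.
    return (mean_a - half_a) <= (mean_b + half_b) and (mean_b - half_b) <= (mean_a + half_a)

# Baseline 50ms ± 5ms -> [45, 55]; optimized 25ms ± 3ms -> [22, 28]
print(intervals_overlap(50.0, 5.0, 25.0, 3.0))  # False: the intervals are disjoint
```

Disjoint intervals are a strong (though informal) sign the improvement is real; overlapping intervals call for more trials or a proper significance test.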

### Question 2: Event Selection Strategy
Different Olympic events have different constraints (Latency Sprint: accuracy ≥ 85%, Extreme Push: accuracy ≥ 80%).
If your optimization reduces accuracy from 87% to 82%, which events can you still compete in?

**Answer:** _______

**Reasoning:** Check which events' accuracy constraints you still meet.

### Question 3: Normalized Scoring
Normalized scores enable fair comparison across hardware. If Baseline A runs on a fast GPU (10ms) and Baseline B runs on a slow CPU (100ms), and both are optimized to 5ms:
- Which has better absolute time? _______
- Which has better speedup? _______
- Why does normalized scoring matter? _______
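
The arithmetic behind these sub-questions can be made concrete (a hedged sketch of the ratio-based normalization idea; the `speedup` helper below is illustrative, not the module's `calculate_normalized_scores`):

```python
# Illustrative only: speedup as a hardware-relative ratio.
def speedup(baseline_latency_ms, optimized_latency_ms):
    return baseline_latency_ms / optimized_latency_ms

# Both systems reach the same 5ms absolute latency...
print(speedup(10.0, 5.0))   # 2.0  (Baseline A, fast GPU)
print(speedup(100.0, 5.0))  # 20.0 (Baseline B, slow CPU)
# ...but the ratio rewards the larger relative improvement, independent of hardware.
```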

### Question 4: Submission Validation
Your validate_submission() function checks event constraints and flags unrealistic improvements.
If someone claims a 100× speedup, what should the validation do?

**Answer:** _______

**Reasoning:** Consider how to balance catching errors against allowing legitimate breakthroughs.
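
One way to act on this trade-off is to flag rather than reject (a sketch under an assumed threshold; `MAX_PLAUSIBLE_SPEEDUP` and `check_speedup` are hypothetical names, not part of the module):

```python
# Hypothetical plausibility check: keep the submission valid but attach a warning.
MAX_PLAUSIBLE_SPEEDUP = 50.0  # assumed review threshold, for illustration only

def check_speedup(claimed_speedup):
    warnings = []
    if claimed_speedup > MAX_PLAUSIBLE_SPEEDUP:
        warnings.append(f"{claimed_speedup:.0f}x speedup is unusual; attach reproduction steps")
    return {"valid": True, "warnings": warnings}

print(check_speedup(100.0)["warnings"])  # flagged for review, but not rejected
```

Flagging preserves legitimate breakthroughs while routing suspicious claims to manual review.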

### Question 5: Workflow Integration
Module 20 uses Benchmark from Module 19 and optimization techniques from Modules 14-18.
What's the key insight about how these modules work together?

a) Each module is independent
b) Module 20 provides a workflow that uses tools from the other modules
c) You need to rebuild everything in Module 20
d) Competition is separate from benchmarking

**Answer:** _______

**Explanation:** Module 20 teaches workflow and packaging - you use existing tools rather than rebuilding them.
"""

# %% [markdown]
"""
## 🎯 MODULE SUMMARY: TinyTorch Olympics - Competition & Submission

Congratulations! You've completed the capstone module - learning how to participate in professional ML competitions!

### Key Accomplishments
- **Understood competition events** and how to choose the right event for your optimization goals
- **Used the Benchmark harness** from Module 19 to measure performance with statistical rigor
- **Generated standardized submissions** following an MLPerf-style format
- **Validated submissions** against competition requirements
- **Demonstrated the complete workflow** from measurement to submission
- All tests pass ✅ (validated by `test_module()`)

### Systems Insights Gained
- **Competition workflow**: How professional ML competitions are structured and run
- **Submission packaging**: How to format results for fair comparison and validation
- **Event constraints**: How different events require different optimization strategies
- **Workflow integration**: How to combine benchmarking tools (Module 19) with optimization techniques (Modules 14-18)

### The Complete Journey
```
Module 01-18: Build ML Framework
↓
Module 19: Learn Benchmarking Methodology
↓
Module 20: Learn Competition Workflow
↓
Milestone 05: Build TinyGPT (Historical Achievement)
↓
Milestone 06: Torch Olympics (Optimization Competition)
```

### Ready for Competition
Your competition workflow demonstrates:
- **Professional submission format** following industry standards (MLPerf-style)
- **Statistical rigor** using the Benchmark harness from Module 19
- **Event understanding** - knowing which optimizations fit which events
- **Validation mindset** - ensuring submissions meet requirements before submitting

**Export with:** `tito module complete 20`

**Achievement Unlocked:** 🏅 **Competition Ready** - You know how to participate in professional ML competitions!

You now understand how ML competitions work - from measurement to submission. The benchmarking tools you built in Module 19 and the optimization techniques from Modules 14-18 come together in Module 20's competition workflow.

**What's Next:**
- Build TinyGPT in Milestone 05 (historical achievement)
- Compete in Torch Olympics (Milestone 06) using this workflow
- Use `tito olympics submit` to generate your competition entry!
"""