mirror of
https://github.com/MLSysBook/TinyTorch.git
synced 2026-04-28 21:02:45 -05:00
Clean up Module 08: Remove unconditional function calls
Fixed issue where performance analysis functions were called every time the module was imported, instead of only when needed. Changes: - Commented out analyze_dataloader_performance() bare call - Commented out analyze_memory_usage() bare call - Removed redundant test_training_integration() comment These functions are still defined and can be called manually for performance insights, but won't run on every import. The test_module() function still calls all necessary tests when the module is run as __main__. Result: Module imports cleanly without running expensive performance benchmarks unless explicitly requested.
This commit is contained in:
@@ -874,7 +874,7 @@ def analyze_dataloader_performance():
|
||||
print("• Memory usage scales linearly with batch size")
|
||||
print("🚀 Production tip: Balance batch size with GPU memory limits")
|
||||
|
||||
# analyze_dataloader_performance() # Moved to main block
|
||||
# analyze_dataloader_performance() # Optional: Run manually for performance insights
|
||||
|
||||
|
||||
def analyze_memory_usage():
|
||||
@@ -918,7 +918,7 @@ def analyze_memory_usage():
|
||||
print(f" Large batch (512×784): {large_bytes / 1024:.1f} KB")
|
||||
print(f" Ratio: {large_bytes / small_bytes:.1f}×")
|
||||
|
||||
# analyze_memory_usage() # Moved to main block
|
||||
# analyze_memory_usage() # Optional: Run manually for memory insights
|
||||
|
||||
|
||||
# %% [markdown]
|
||||
@@ -999,8 +999,6 @@ def test_training_integration():
|
||||
|
||||
print("✅ Training integration works correctly!")
|
||||
|
||||
# test_training_integration() # Moved to main block
|
||||
|
||||
|
||||
# %% [markdown]
|
||||
"""
|
||||
|
||||
Reference in New Issue
Block a user