mirror of
https://github.com/MLSysBook/TinyTorch.git
synced 2026-05-02 14:07:45 -05:00
⚡ Production: Standardize test naming in optimization and deployment modules
- Compression: test_compression_metrics → test_unit_compression_metrics
- Compression: test_magnitude_pruning → test_unit_magnitude_pruning
- Compression: test_quantization → test_unit_quantization
- Compression: test_distillation → test_unit_distillation
- Compression: test_structured_pruning → test_unit_structured_pruning
- Compression: test_comprehensive_comparison → test_unit_comprehensive_comparison
- Kernels: All test_* → test_unit_*, except test_kernel_integration_* → test_module_*
- Benchmarking: All test_* → test_unit_*, except test_comprehensive_* → test_module_*
- MLOps: All test_* → test_unit_*, except test_comprehensive_integration → test_module_*
- Finalizes test naming standardization across production-ready modules
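The standardized prefixes make test tiers selectable by name, e.g. with pytest's `-k` substring matching (`pytest -k "test_unit_"` runs only unit tests, `pytest -k "test_module_"` only module-level integration tests). A minimal sketch of the convention; the `tier_of` helper is illustrative, not part of the repository:

```python
# Sketch: classify tests by the naming convention this commit standardizes.
# Unit tests exercise one component; module tests exercise integration.

def tier_of(name: str) -> str:
    """Return the tier implied by a test function's name (helper is hypothetical)."""
    if name.startswith("test_unit_"):
        return "unit"
    if name.startswith("test_module_"):
        return "module"
    return "other"

# Names taken from the commit message:
print(tier_of("test_unit_drift_detector"))         # unit
print(tier_of("test_module_comprehensive_mlops"))  # module
```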
@@ -448,7 +448,7 @@ Once you implement the `ModelMonitor` class above, run this cell to test it:
 """
 
 # %% nbgrader={"grade": true, "grade_id": "test-model-monitor", "locked": true, "points": 20, "schema_version": 3, "solution": false, "task": false}
-def test_model_monitor():
+def test_unit_model_monitor():
     """Test ModelMonitor implementation"""
     print("🔬 Unit Test: Performance Drift Monitor...")
 
@@ -718,7 +718,7 @@ Once you implement the `DriftDetector` class above, run this cell to test it:
 """
 
 # %% nbgrader={"grade": true, "grade_id": "test-drift-detector", "locked": true, "points": 20, "schema_version": 3, "solution": false, "task": false}
-def test_drift_detector():
+def test_unit_drift_detector():
     """Test DriftDetector implementation"""
     print("🔬 Unit Test: Simple Drift Detection...")
 
@@ -1053,7 +1053,7 @@ Once you implement the `RetrainingTrigger` class above, run this cell to test it
 """
 
 # %% nbgrader={"grade": true, "grade_id": "test-retraining-trigger", "locked": true, "points": 25, "schema_version": 3, "solution": false, "task": false}
-def test_retraining_trigger():
+def test_unit_retraining_trigger():
     """Test RetrainingTrigger implementation"""
     print("🔬 Unit Test: Retraining Trigger System...")
 
@@ -1416,7 +1416,7 @@ Once you implement the `MLOpsPipeline` class above, run this cell to test it:
 """
 
 # %% nbgrader={"grade": true, "grade_id": "test-mlops-pipeline", "locked": true, "points": 35, "schema_version": 3, "solution": false, "task": false}
-def test_mlops_pipeline():
+def test_unit_mlops_pipeline():
     """Test complete MLOps pipeline"""
     print("🔬 Unit Test: Complete MLOps Pipeline...")
 
@@ -1545,9 +1545,9 @@ Your MLOps skills now enable:
 """
 
 # %% nbgrader={"grade": false, "grade_id": "comprehensive-integration-test", "locked": false, "schema_version": 3, "solution": false, "task": false}
-def test_comprehensive_integration():
+def test_module_comprehensive_mlops():
     """Test complete integration of all TinyTorch components"""
-    print("🔬 Comprehensive Integration Test: Complete TinyTorch Ecosystem...")
+    print("🔬 Integration Test: Complete TinyTorch Ecosystem...")
 
     # 1. Create synthetic data (simulating real ML dataset)
     np.random.seed(42)