Move testing utilities to tito/tools for better software architecture

- Move testing utilities from tinytorch/utils/testing.py to tito/tools/testing.py
- Update all module imports to use tito.tools.testing
- Remove testing utilities from core TinyTorch package
- Testing utilities are development tools, not part of the ML library
- Maintains clean separation between library code and development toolchain
- All tests continue to work correctly with improved architecture
Author: Vijay Janapa Reddi
Date: 2025-07-13 21:05:11 -04:00
Parent: 2a2b5dad1e
Commit: 5264b6aa68
16 changed files with 1442 additions and 152 deletions


@@ -0,0 +1,188 @@
# NBGrader Standardized Testing Framework
## 🎯 The Perfect Solution
Your suggestion to use **dedicated, locked NBGrader cells** for testing is brilliant! This approach provides:
- **Protected Infrastructure** - Students can't break the testing framework
- **Consistent Placement** - Same location in every module (before the final summary)
- **Educational Flow** - Learn → Implement → Test → Reflect
- **Professional Standards** - Mirrors real software development practices
- **Quality Assurance** - Ensures comprehensive validation of all student work
## 📋 Module Structure
Every TinyTorch module follows this standardized structure:
```
1. 📖 Educational Content & Implementation Guidance
2. 💻 Student Implementation Sections (unlocked)
3. 🧪 Standardized Testing (LOCKED NBGrader cell)
4. 🎯 Module Summary & Takeaways
```
## 🔒 The Locked Testing Cell
### NBGrader Configuration
```python
# %% nbgrader={"grade": false, "grade_id": "standardized-testing", "locked": true, "schema_version": 3, "solution": false, "task": false}
```
### Key Settings Explained:
- **`grade: false`** - Testing cell is not graded (provides feedback only)
- **`locked: true`** - Students cannot modify this cell
- **`solution: false`** - This is not a solution cell
- **`task: false`** - This is not a task for students to complete
### Cell Structure:
```python
# =============================================================================
# STANDARDIZED MODULE TESTING - DO NOT MODIFY
# This cell is locked to ensure consistent testing across all TinyTorch modules
# =============================================================================
from tito.tools.testing import create_test_runner

def test_core_functionality():
    """Test core module functionality."""
    # Module-specific tests here
    print("✅ Core functionality tests passed!")

def test_edge_cases():
    """Test edge cases and error handling."""
    # Edge case tests here
    print("✅ Edge case tests passed!")

def test_ml_integration():
    """Test integration with ML workflows."""
    # Integration tests here
    print("✅ ML integration tests passed!")

# Execute standardized testing
if __name__ == "__main__":
    test_runner = create_test_runner("ModuleName")
    test_runner.register_test("Core Functionality", test_core_functionality)
    test_runner.register_test("Edge Cases", test_edge_cases)
    test_runner.register_test("ML Integration", test_ml_integration)
    success = test_runner.run_all_tests()
```
## 🎭 Consistent Student Experience
Every module produces **identical testing output**:
```
🔬 Running ModuleName Module Tests...
==================================================
🧪 Testing Core Functionality... ✅ PASSED
🧪 Testing Edge Cases... ✅ PASSED
🧪 Testing ML Integration... ✅ PASSED
============================================================
🎯 MODULENAME MODULE TESTING COMPLETE
============================================================
🎉 CONGRATULATIONS! All tests passed!
✅ ModuleName Module Status: 3/3 tests passed (100%)
📊 Detailed Results:
Core Functionality: ✅ PASSED
Edge Cases: ✅ PASSED
ML Integration: ✅ PASSED
📈 Progress: ModuleName Module ✓ COMPLETE
🚀 Ready for the next module!
```
## 📚 Educational Benefits
### For Students:
1. **Consistent Experience** - Same testing format across all modules
2. **Immediate Feedback** - Clear validation of their implementations
3. **Professional Exposure** - Experience with real testing practices
4. **Protected Learning** - Cannot accidentally break testing infrastructure
5. **Quality Confidence** - Assurance their implementations work correctly
### For Instructors:
1. **Standardized Quality** - Consistent validation across all modules
2. **Protected Infrastructure** - Testing framework cannot be compromised
3. **Easy Maintenance** - Single source of truth for testing format
4. **Educational Focus** - More time on content, less on testing logistics
5. **Scalable Assessment** - Efficient evaluation of student progress
## 🔄 Module Flow
### 1. Educational Introduction
```markdown
# Module X: Topic Name
Learn about [concept] and its importance in ML systems...
```
### 2. Implementation Guidance
```python
# Student implementation sections (UNLOCKED)
# Clear TODOs and guidance for student work
```
### 3. Testing Validation (LOCKED)
```markdown
## 🧪 Module Testing
Time to test your implementation! This section is locked to ensure consistency.
```
### 4. Learning Summary
```markdown
## 🎯 Module Summary: Topic Mastery!
Congratulations! You've successfully implemented...
```
## 🏗️ Implementation Strategy
### Phase 1: Infrastructure
- **Shared testing utilities** - `tito.tools.testing` module
- **NBGrader template** - Standardized cell structure
- **Documentation** - Clear guidelines for implementation
### Phase 2: Module Migration
1. **Add testing section** to each module before final summary
2. **Lock testing cells** with NBGrader configuration
3. **Register module tests** with shared test runner
4. **Validate consistency** across all modules
### Phase 3: Quality Assurance
1. **Test each module** individually for correctness
2. **Verify consistent output** across all modules
3. **Ensure NBGrader compatibility** with locked cells
4. **Document any module-specific considerations**
## 🎯 Benefits Achieved
### Technical Benefits:
- **Zero Code Duplication** - Shared testing infrastructure
- **Perfect Consistency** - Identical output format across modules
- **Protected Quality** - Testing framework cannot be broken
- **Easy Maintenance** - Single point of update for improvements
### Educational Benefits:
- **Professional Standards** - Real-world software development practices
- **Immediate Feedback** - Clear validation of student implementations
- **Consistent Experience** - Same quality across all learning modules
- **Focus on Learning** - Students focus on concepts, not testing setup
### Assessment Benefits:
- **Standardized Evaluation** - Consistent criteria across modules
- **Automated Validation** - Reliable testing of student implementations
- **Quality Assurance** - Comprehensive coverage of learning objectives
- **Scalable Grading** - Efficient instructor workflow
## 🚀 Next Steps
1. **Apply template** to all existing modules
2. **Test NBGrader integration** with locked cells
3. **Validate student experience** across all modules
4. **Document module-specific testing** requirements
This NBGrader standardized testing framework provides the **perfect balance** of consistency, protection, and educational value!


@@ -0,0 +1,174 @@
# NBGrader Testing Cell Template
## 🎯 Standardized Module Structure
Every TinyTorch module should follow this structure for consistent testing:
### 1. Educational Content
```python
# %% [markdown]
"""
# Module X: Topic Name
[Educational content, implementation guidance, etc.]
"""
# %%
#| default_exp core.module_name
# Student implementation sections...
# [Student code here]
```
### 2. Individual Test Functions
```python
# Test functions that students can run during development

def test_feature_1_comprehensive():
    """Test feature 1 functionality comprehensively."""
    # Detailed test implementation
    assert feature_works()
    print("✅ Feature 1 tests passed!")

def test_feature_2_integration():
    """Test feature 2 integration with other components."""
    # Integration test implementation
    assert integration_works()
    print("✅ Feature 2 integration tests passed!")

def test_module_integration():
    """Test overall module integration."""
    # Overall integration tests
    assert module_works()
    print("✅ Module integration tests passed!")
```
### 3. Dedicated Testing Section (Auto-Discovery)
```python
# %% [markdown]
"""
## 🧪 Module Testing
Time to test your implementation! This section uses TinyTorch's standardized testing framework with **automatic test discovery**.
**This testing section is locked** - it provides consistent feedback across all modules and cannot be modified.
"""
# %% nbgrader={"grade": false, "grade_id": "standardized-testing", "locked": true, "schema_version": 3, "solution": false, "task": false}
# =============================================================================
# STANDARDIZED MODULE TESTING - DO NOT MODIFY
# This cell is locked to ensure consistent testing across all TinyTorch modules
# =============================================================================
if __name__ == "__main__":
    from tito.tools.testing import run_module_tests_auto

    # Automatically discover and run all tests in this module
    success = run_module_tests_auto("ModuleName")
```
### 4. Module Summary (After Testing)
```python
# %% [markdown]
"""
## 🎯 Module Summary: [Topic] Mastery!
Congratulations! You've successfully implemented [module topic]:
### What You've Accomplished
✅ **Feature 1**: Description of what was implemented
✅ **Feature 2**: Description of what was implemented
✅ **Integration**: How features work together
### Key Concepts You've Learned
- **Concept 1**: Explanation
- **Concept 2**: Explanation
- **Concept 3**: Explanation
### Next Steps
1. **Export your code**: `tito package nbdev --export module_name`
2. **Test your implementation**: `tito test module_name`
3. **Move to next module**: Brief description of what's next
"""
```
## 🎯 **Critical: Correct Section Ordering**
The order of sections **must** follow this logical flow:
1. **Educational Content** - Students learn the concepts
2. **Implementation Sections** - Students build the functionality
3. **🧪 Module Testing** - Students verify their implementation works
4. **🎯 Module Summary** - Students celebrate success and move forward
### ❌ **Wrong Order (Confusing)**:
```
Implementation → Summary ("Congratulations!") → Testing → "Wait, did it work?"
```
### ✅ **Correct Order (Natural)**:
```
Implementation → Testing → Summary ("Congratulations! It works!") → Next Steps
```
**Why This Matters**:
- Testing **validates** the implementation before celebrating
- Summary **confirms** success after verification
- Natural flow: Build → Test → Celebrate → Advance
- Mirrors real software development practices
## 🔍 Automatic Test Discovery
The new testing framework **automatically discovers** test functions, eliminating manual registration:
### ✅ **Discovered Test Patterns**
The system automatically finds and runs functions matching these patterns:
- `test_*_comprehensive`: Comprehensive testing of individual features
- `test_*_integration`: Integration testing with other components
- `test_*_activation`: Specific activation function tests (ReLU, Sigmoid, etc.)
### ✅ **Benefits**
- **Zero Manual Work**: No need to register functions manually
- **Error Prevention**: Won't miss test functions
- **Consistent Naming**: Enforces good test naming conventions
- **Automatic Ordering**: Tests run in alphabetical order
- **Clean Output**: Standardized reporting format
### ✅ **Example Output**
```
🔍 Auto-discovered 4 test functions
🧪 Running Tensor Module Tests...
==================================================
✅ Tensor Arithmetic: PASSED
✅ Tensor Creation: PASSED
✅ Tensor Integration: PASSED
✅ Tensor Properties: PASSED
==================================================
🎉 All tests passed! (4/4)
✅ Tensor module is working correctly!
```
### ✅ **Safety Features**
- **Pattern Matching**: Only discovers functions matching expected patterns
- **Protected Framework**: NBGrader locked cells prevent student modifications
- **Fallback Support**: Manual registration still available if needed
- **Error Handling**: Graceful handling of malformed test functions
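To make the discovery mechanism concrete, here is a minimal sketch of how a function like `run_module_tests_auto` *could* implement pattern-based discovery over the caller's namespace. This is illustrative only, not the shipped `tito.tools.testing` implementation; the suffix list and output format are assumptions based on the patterns documented above:

```python
import inspect

# Suffixes the docs say are discovered automatically (assumed here)
DISCOVERY_SUFFIXES = ("_comprehensive", "_integration", "_activation")

def run_module_tests_auto(module_name, namespace=None):
    """Discover and run test_* functions matching the expected patterns."""
    if namespace is None:
        # Default to the caller's globals so this works from a notebook cell
        namespace = inspect.stack()[1].frame.f_globals
    # Pattern matching: only names that look like standardized tests
    discovered = sorted(
        name for name, obj in namespace.items()
        if callable(obj)
        and name.startswith("test_")
        and name.endswith(DISCOVERY_SUFFIXES)
    )
    print(f"🔍 Auto-discovered {len(discovered)} test functions")
    print(f"🧪 Running {module_name} Module Tests...")
    passed = 0
    for name in discovered:  # alphabetical order, as documented
        try:
            namespace[name]()
            passed += 1
            print(f"✅ {name}: PASSED")
        except Exception as exc:  # graceful handling of failing tests
            print(f"❌ {name}: FAILED ({exc})")
    return passed == len(discovered)
```

Passing an explicit `namespace` dict is a hypothetical convenience for testing the runner itself; in module cells the zero-argument caller-globals form would be used.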
## 📝 Implementation Notes
### Test Function Requirements
1. **Naming Convention**: Must start with `test_` and contain expected patterns
2. **Self-Contained**: Each test should be independent
3. **Clear Output**: Print success messages for educational feedback
4. **Proper Assertions**: Use assert statements for validation
### Module Integration
1. **Single Entry Point**: Each module has one standardized testing entry
2. **Consistent Interface**: Same API across all modules
3. **CLI Integration**: `tito test module_name` uses the auto-discovery
4. **Development Workflow**: Students can run individual tests during development
### Educational Benefits
1. **Immediate Feedback**: Students see results as they develop
2. **Professional Practices**: Mirrors real software development workflows
3. **Consistent Experience**: Same testing approach across all modules
4. **Assessment Ready**: NBGrader can evaluate student implementations


@@ -0,0 +1,142 @@
# TinyTorch Shared Testing Pattern
## 🎯 Problem Solved
Previously, each module had inconsistent test summaries and duplicated formatting code. Now all modules use **shared testing utilities** for:
- **Perfect Consistency** - All modules have identical output format
- **Zero Code Duplication** - Testing utilities are shared across all modules
- **Easy Maintenance** - Changes only need to be made in one place
- **Scalable** - Works for any number of modules and tests
## 📋 Usage Pattern
### 1. Import the Shared Utilities
```python
from tito.tools.testing import create_test_runner
```
### 2. Write Your Test Functions
```python
def test_feature_a():
    """Test feature A functionality."""
    # Your test code here
    assert something_works(), "Feature A should work"
    print("✅ Feature A tests passed!")

def test_feature_b():
    """Test feature B functionality."""
    # Your test code here
    assert something_else_works(), "Feature B should work"
    print("✅ Feature B tests passed!")
```
### 3. Register and Run Tests
```python
if __name__ == "__main__":
    # Create test runner for this module
    test_runner = create_test_runner("YourModule")

    # Register all tests
    test_runner.register_test("Feature A", test_feature_a)
    test_runner.register_test("Feature B", test_feature_b)

    # Run all tests with consistent output
    success = test_runner.run_all_tests()
```
## 🎭 Standard Output Format
Every module produces **identical output**:
```
🔬 Running YourModule Module Tests...
==================================================
🧪 Testing Feature A... ✅ PASSED
🧪 Testing Feature B... ✅ PASSED
============================================================
🎯 YOURMODULE MODULE TESTING COMPLETE
============================================================
🎉 CONGRATULATIONS! All tests passed!
✅ YourModule Module Status: 2/2 tests passed (100%)
📊 Detailed Results:
Feature A: ✅ PASSED
Feature B: ✅ PASSED
📈 Progress: YourModule Module ✓ COMPLETE
🚀 Ready for the next module!
```
## 🏗️ Architecture
### Shared Utilities Location
- **Main utilities**: `tito/tools/testing.py`
- **Import from**: `from tito.tools.testing import create_test_runner`
### ModuleTestRunner Class
The core class that provides:
- `register_test(name, function)` - Register test functions
- `run_all_tests()` - Execute all tests with consistent output
- Error handling and detailed reporting
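As a rough sketch of what sits behind this API, a minimal `ModuleTestRunner` might look like the following. This is illustrative only (the real implementation lives in `tito/tools/testing.py`); the class internals and exact formatting strings here are assumptions that mirror the documented behavior:

```python
class ModuleTestRunner:
    """Illustrative sketch of the shared test runner API."""

    def __init__(self, module_name):
        self.module_name = module_name
        self.tests = []  # ordered list of (name, function) pairs

    def register_test(self, name, test_fn):
        """Register a named test function to run later."""
        self.tests.append((name, test_fn))

    def run_all_tests(self):
        """Run every registered test, report results, return overall success."""
        print(f"🔬 Running {self.module_name} Module Tests...")
        print("=" * 50)
        results = {}
        for name, test_fn in self.tests:
            try:
                test_fn()
                results[name] = True
                print(f"🧪 Testing {name}... ✅ PASSED")
            except Exception as exc:  # a failed assert or error marks the test failed
                results[name] = False
                print(f"🧪 Testing {name}... ❌ FAILED ({exc})")
        passed = sum(results.values())
        total = len(results)
        print(f"✅ {self.module_name} Module Status: {passed}/{total} tests passed")
        return passed == total


def create_test_runner(module_name):
    """Factory matching the documented import surface."""
    return ModuleTestRunner(module_name)
```

Because each test runs inside its own `try`/`except`, one failing test cannot abort the run, which is what enables the detailed per-test breakdown shown in the standard output format.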
## 📈 Migration Guide
To migrate an existing module:
### Before (Inconsistent)
```python
# Old way - inconsistent format
def test_something():
    # test code
    pass

# Manual summary - different across modules
print("Some tests passed!")
```
### After (Consistent)
```python
# New way - consistent format
from tito.tools.testing import create_test_runner

def test_something():
    # test code
    print("✅ Something tests passed!")

if __name__ == "__main__":
    test_runner = create_test_runner("ModuleName")
    test_runner.register_test("Something", test_something)
    success = test_runner.run_all_tests()
```
## ✅ Benefits Achieved
1. **Consistency**: All modules have identical testing output
2. **No Duplication**: Testing utilities are shared across modules
3. **Easy Maintenance**: Changes to format only need to be made in one place
4. **Scalable**: Works for any number of tests and modules
5. **Professional**: Clean, standardized output suitable for educational use
6. **Error Handling**: Detailed error reporting for failed tests
## 🚀 Implementation Status
- **✅ Shared utilities created**: `tito/tools/testing.py`
- **✅ Documentation complete**: Usage patterns and examples
- **✅ Testing verified**: Confirmed working with example modules
- **⏳ Migration pending**: Apply pattern to all existing modules
## 🔧 Next Steps
1. **Apply to all modules**: Migrate existing modules to use shared pattern
2. **Test thoroughly**: Ensure all modules work with new pattern
3. **Update documentation**: Module-specific docs reference shared pattern
4. **Commit changes**: Save the improved testing infrastructure
This shared testing pattern eliminates code duplication while ensuring perfect consistency across all TinyTorch modules!

docs/testing_pattern.md (new file, 94 lines)

@@ -0,0 +1,94 @@
# TinyTorch Standardized Testing Pattern
## Overview
All TinyTorch modules use a consistent testing pattern that ensures:
- **Consistent output format** across all modules
- **No code duplication** - shared utilities handle formatting
- **Easy test registration** - just register functions and run
- **Comprehensive reporting** - detailed pass/fail breakdown
## Usage Pattern
### 1. Import the Testing Utilities
```python
from tito.tools.testing import create_test_runner
```
### 2. Write Your Test Functions
```python
def test_feature_a():
    """Test feature A functionality."""
    # Your test code here
    assert something_works(), "Feature A should work"
    print("✅ Feature A tests passed!")

def test_feature_b():
    """Test feature B functionality."""
    # Your test code here
    assert something_else_works(), "Feature B should work"
    print("✅ Feature B tests passed!")
```
### 3. Register and Run Tests
```python
if __name__ == "__main__":
    # Create test runner for this module
    test_runner = create_test_runner("YourModule")

    # Register all tests
    test_runner.register_test("Feature A", test_feature_a)
    test_runner.register_test("Feature B", test_feature_b)

    # Run all tests with consistent output
    success = test_runner.run_all_tests()
```
## Standard Output Format
Every module will produce identical output:
```
🔬 Running YourModule Module Tests...
==================================================
🧪 Testing Feature A... ✅ PASSED
🧪 Testing Feature B... ✅ PASSED
============================================================
🎯 YOURMODULE MODULE TESTING COMPLETE
============================================================
🎉 CONGRATULATIONS! All tests passed!
✅ YourModule Module Status: 2/2 tests passed (100%)
📊 Detailed Results:
Feature A: ✅ PASSED
Feature B: ✅ PASSED
📈 Progress: YourModule Module ✓ COMPLETE
🚀 Ready for the next module!
```
## Benefits
1. **Consistency**: All modules have identical testing output
2. **No Duplication**: Testing utilities are shared across modules
3. **Easy Maintenance**: Changes to format only need to be made in one place
4. **Scalable**: Works for any number of tests and modules
5. **Professional**: Clean, standardized output suitable for educational use
## Implementation
- **Shared utilities**: `tito/tools/testing.py`
- **Test registration**: Each module registers its tests
- **Consistent format**: All modules get identical summary output
- **Error handling**: Detailed error reporting for failed tests