TinyTorch/.github/scripts/validate_difficulty_ratings.py
Vijay Janapa Reddi bc3105a969 Add release check workflow and clean up legacy dev files
This commit implements a comprehensive quality assurance system and removes
outdated backup files from the repository.

## Release Check Workflow

Added GitHub Actions workflow for systematic release validation:
- Manual-only workflow (workflow_dispatch) - no automatic PR triggers
- 6 sequential quality gates: educational, implementation, testing, package, documentation, systems
- 13 validation scripts (4 fully implemented, 9 stubs for future work)
- Comprehensive documentation in .github/workflows/README.md
- Release process guide in .github/RELEASE_PROCESS.md

Implemented validators:
- validate_time_estimates.py - Ensures time estimates are consistent between LEARNING_PATH.md and module ABOUT.md files
- validate_difficulty_ratings.py - Validates star rating consistency across modules
- validate_testing_patterns.py - Checks for test_unit_* and test_module() patterns
- check_checkpoints.py - Recommends checkpoint markers for long modules (8+ hours)
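For illustration, the time-estimate consistency check can be sketched as below; the helper name `parse_hours` and the sample strings are hypothetical, not code from the actual validate_time_estimates.py:

```python
import re

def parse_hours(text):
    """Extract a (low, high) hour range like '8-10 hours' from a line.
    Hypothetical helper; the real validator may parse differently."""
    m = re.search(r"(\d+)\s*-\s*(\d+)\s*hours", text)
    return (int(m.group(1)), int(m.group(2))) if m else None

# Compare one module's estimate across the two documents
learning_path_entry = "**Module 05: Autograd** (8-10 hours, ⭐⭐⭐)"
about_entry = 'time_estimate: "8-10 hours"'
assert parse_hours(learning_path_entry) == parse_hours(about_entry)
```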

## Pedagogical Improvements

Added checkpoint markers to Module 05 (Autograd):
- Checkpoint 1: After computational graph construction (~40% progress)
- Checkpoint 2: After automatic differentiation implementation (~80% progress)
- Helps students track progress through the longest foundational module (8-10 hours)

## Codebase Cleanup

Removed 20 legacy *_dev.py files across all modules:
- Confirmed via export system analysis: only *.py files (without _dev suffix) are used
- Export system explicitly reads from {name}.py (see tito/commands/export.py line 461)
- All _dev.py files were outdated backups not used by the build/export pipeline
- Verified all active .py files contain current implementations with optimizations

This cleanup:
- Eliminates confusion about which files are source of truth
- Reduces repository size
- Makes development workflow clearer (work in modules/XX_name/name.py)
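As a quick sanity check that the cleanup is complete, a sketch like this could scan for stray backups (it assumes the working directory is the repo root; an empty result confirms no `_dev.py` files remain):

```python
from pathlib import Path

# Scan modules/ recursively for any leftover *_dev.py backups
leftovers = sorted(Path("modules").rglob("*_dev.py"))
for path in leftovers:
    print(path)
print(f"{len(leftovers)} legacy _dev.py files remaining")
```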

## Formatting Standards Documentation

This commit also documents formatting and style standards discovered through a
systematic review of all 20 TinyTorch modules.

### Key Findings

Overall Status: 9/10 (Excellent consistency)
- All 20 modules use correct test_module() naming
- 18/20 modules have proper if __name__ guards
- All modules use proper Jupytext format (no JSON leakage)
- Strong ASCII diagram quality
- All 20 modules are missing the 🧪 emoji in test_module() docstrings

### Standards Documented

1. Test Function Naming: test_unit_* for units, test_module() for integration
2. if __name__ Guards: Immediate guards after every test/analysis function
3. Emoji Protocol: 🔬 for unit tests, 🧪 for module tests, 📊 for analysis
4. Markdown Formatting: Jupytext format with proper section hierarchy
5. ASCII Diagrams: Box-drawing characters, labeled dimensions, data flow arrows
6. Module Structure: Standard template with 9 sections
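A minimal skeleton of the naming, emoji, and guard conventions above might look as follows (the test bodies are placeholders, not code from a real module):

```python
def test_unit_tensor_add():
    """🔬 Unit test: element-wise addition behaves as expected."""
    assert 1 + 1 == 2  # placeholder assertion

# Immediate guard directly after the unit test, per the standard
if __name__ == "__main__":
    test_unit_tensor_add()

def test_module():
    """🧪 Module test: run all unit tests as an integration pass."""
    test_unit_tensor_add()
    print("✅ module tests passed")

if __name__ == "__main__":
    test_module()
```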

### Quick Fixes Identified

- Add 🧪 emoji to test_module() in all 20 modules (~5 min)
- Fix Module 16 if __name__ guards (~15 min)
- Fix Module 08 guard (~5 min)

Total quick fixes: 25 minutes to achieve 10/10 consistency
2025-11-24 14:47:04 -05:00

121 lines · 3.3 KiB · Python · Executable File

#!/usr/bin/env python3
"""
Validate difficulty rating consistency across LEARNING_PATH.md and module ABOUT.md files.
"""
import re
import sys
from pathlib import Path


def normalize_difficulty(difficulty_str):
    """Normalize a difficulty rating to a star count."""
    if not difficulty_str:
        return None
    # Count literal star characters
    star_count = difficulty_str.count("⭐")
    if star_count > 0:
        return star_count
    # Handle numeric format
    if difficulty_str.isdigit():
        return int(difficulty_str)
    # Handle "X/4" format
    match = re.match(r"(\d+)/4", difficulty_str)
    if match:
        return int(match.group(1))
    return None


def extract_difficulty_from_learning_path(module_num):
    """Extract the difficulty rating for a module from LEARNING_PATH.md."""
    learning_path = Path("modules/LEARNING_PATH.md")
    if not learning_path.exists():
        return None
    content = learning_path.read_text()
    # Pattern: **Module XX: Name** (X-Y hours, ⭐...)
    pattern = rf"\*\*Module {module_num:02d}:.*?\*\*\s*\([^,]+,\s*([⭐]+)\)"
    match = re.search(pattern, content)
    return normalize_difficulty(match.group(1)) if match else None


def extract_difficulty_from_about(module_path):
    """Extract the difficulty rating from a module's ABOUT.md."""
    about_file = module_path / "ABOUT.md"
    if not about_file.exists():
        return None
    content = about_file.read_text()
    # Pattern: difficulty: "⭐..." or difficulty: X
    pattern = r'difficulty:\s*["\']?([⭐\d/]+)["\']?'
    match = re.search(pattern, content)
    return normalize_difficulty(match.group(1)) if match else None


def main():
    """Validate difficulty ratings across all modules."""
    modules_dir = Path("modules")
    errors = []
    warnings = []
    print("⭐ Validating Difficulty Rating Consistency")
    print("=" * 60)
    # Find all module directories (names start with a two-digit number)
    module_dirs = sorted([d for d in modules_dir.iterdir() if d.is_dir() and d.name[0].isdigit()])
    for module_dir in module_dirs:
        module_num = int(module_dir.name.split("_")[0])
        module_name = module_dir.name
        learning_path_diff = extract_difficulty_from_learning_path(module_num)
        about_diff = extract_difficulty_from_about(module_dir)
        if not about_diff:
            warnings.append(f"⚠️ {module_name}: Missing difficulty in ABOUT.md")
            continue
        if not learning_path_diff:
            warnings.append(f"⚠️ {module_name}: Not found in LEARNING_PATH.md")
            continue
        if learning_path_diff != about_diff:
            errors.append(
                f"{module_name}: Difficulty mismatch\n"
                f"  LEARNING_PATH.md: {'⭐' * learning_path_diff}\n"
                f"  ABOUT.md: {'⭐' * about_diff}"
            )
        else:
            print(f"{module_name}: {'⭐' * about_diff}")
    print("\n" + "=" * 60)
    # Print warnings
    if warnings:
        print("\n⚠️ Warnings:")
        for warning in warnings:
            print(f"  {warning}")
    # Print errors
    if errors:
        print("\n❌ Errors Found:")
        for error in errors:
            print(f"  {error}\n")
        print(f"\n{len(errors)} difficulty rating inconsistencies found!")
        sys.exit(1)
    else:
        print("\n✅ All difficulty ratings are consistent!")
        sys.exit(0)


if __name__ == "__main__":
    main()