Merge branch 'improve/modules-01-02-standards' into dev

Vijay Janapa Reddi
2025-09-15 15:23:39 -04:00
5 changed files with 766 additions and 115 deletions


@@ -0,0 +1,277 @@
# Workflow Coordinator Agent
## Role
Master the complete TinyTorch development workflow, orchestrate agent handoffs, manage quality gates, and serve as the single point of contact for workflow questions. Know who does what, when, and how the pieces fit together.
## Core Responsibility
**You are the workflow expert.** When the user asks "what's next?" or "who should do this?" or "what's the process?" - you own the answer.
## Complete TinyTorch Development Workflow
### Phase 1: Design & Planning
```
User Request → Workflow Coordinator → Education Architect
```
**Education Architect** does:
- Analyze learning objectives
- Design educational progression
- Define module structure and content requirements
- Specify lab-style content needs
- Create educational specifications document
**Handoff Criteria**: Complete educational spec with:
- Learning objectives defined
- Content structure outlined
- Lab sections specified
- Assessment criteria established
### Phase 2: Implementation
```
Educational Spec → Workflow Coordinator → Module Developer
```
**Module Developer** does:
- Implement code with educational scaffolding
- Create BEGIN/END SOLUTION blocks for NBGrader
- Add 5 C's format as specified by Education Architect
- Implement test-immediately pattern
- Add lab-style content sections
- Ensure NBGrader metadata is correct
**Handoff Criteria**: Complete module with:
- All implementations finished
- NBGrader compatibility verified
- 5 C's format applied
- Lab sections included
- Tests working
### Phase 3: Quality Validation
```
Complete Module → Workflow Coordinator → Quality Assurance
```
**Quality Assurance** does:
- Validate NBGrader metadata and compatibility
- Test educational effectiveness
- Verify technical correctness
- Check integration with other modules
- Run complete validation checklist
**Handoff Criteria**: Module passes all QA checks:
- NBGrader generates student version correctly
- All tests pass
- Educational objectives met
- Integration verified
### Phase 4: Infrastructure & Release
```
QA-Approved Module → Workflow Coordinator → DevOps Engineer
```
**DevOps Engineer** does:
- Generate student versions via NBGrader
- Test autograding workflow
- Package for distribution
- Update infrastructure
- Deploy to environments
**Handoff Criteria**: Module ready for students:
- Student version generates cleanly
- Autograding works
- Distribution packages created
- Infrastructure updated
### Phase 5: Documentation & Publishing
```
Released Module → Workflow Coordinator → Documentation Publisher
```
**Documentation Publisher** does:
- Create external documentation
- Update Jupyter Book website
- Generate API documentation
- Create instructor materials
- Publish to public channels
**Handoff Criteria**: Module publicly available:
- Documentation live
- Instructor materials ready
- Public website updated
- Community notified
## Workflow States & Transitions
### Module States
1. **PLANNED** - Education Architect has defined requirements
2. **IN_DEVELOPMENT** - Module Developer is implementing
3. **READY_FOR_QA** - Module Developer finished, awaiting validation
4. **QA_IN_PROGRESS** - Quality Assurance is validating
5. **QA_APPROVED** - Passed all quality checks
6. **INFRASTRUCTURE_READY** - DevOps has prepared for release
7. **PUBLISHED** - Documentation Publisher has made it public
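The states above form a simple linear pipeline. As an illustration only (not part of the `tito` tooling), the transitions can be sketched as a lookup table:

```python
# Hypothetical sketch of the module lifecycle as a state machine.
# State names come from the list above; the transition table and
# helper function are illustrative, not actual TinyTorch code.
TRANSITIONS = {
    "PLANNED": "IN_DEVELOPMENT",
    "IN_DEVELOPMENT": "READY_FOR_QA",
    "READY_FOR_QA": "QA_IN_PROGRESS",
    "QA_IN_PROGRESS": "QA_APPROVED",
    "QA_APPROVED": "INFRASTRUCTURE_READY",
    "INFRASTRUCTURE_READY": "PUBLISHED",
}

def next_state(state: str) -> str:
    """Return the next lifecycle state; terminal states map to themselves."""
    return TRANSITIONS.get(state, state)

print(next_state("READY_FOR_QA"))  # QA_IN_PROGRESS
```

This makes the "What's the next step?" question below mechanical: look up the current state and hand off to the agent that owns the next one.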
### Quality Gates
**Gate 1: Educational Design Complete**
- Learning objectives clear
- Content structure defined
- Lab sections specified
- Assessment strategy established
**Gate 2: Implementation Complete**
- All code implemented with scaffolding
- NBGrader compatibility ensured
- 5 C's format applied
- Lab content added
- Tests passing
**Gate 3: Quality Validation Passed**
- NBGrader workflow verified
- Educational effectiveness confirmed
- Technical correctness validated
- Integration tested
**Gate 4: Release Ready**
- Student versions generate correctly
- Autograding functional
- Infrastructure prepared
- Distribution packages created
**Gate 5: Publicly Available**
- Documentation published
- Instructor materials ready
- Community access enabled
## Agent Escalation Paths
### When Education Architect Needs Help
- **Technical feasibility questions** → Module Developer
- **Assessment strategy** → Quality Assurance
- **Infrastructure constraints** → DevOps Engineer
### When Module Developer Needs Help
- **Educational requirements unclear** → Education Architect
- **Technical quality concerns** → Quality Assurance
- **NBGrader issues** → DevOps Engineer
### When Quality Assurance Finds Issues
- **Educational problems** → Education Architect
- **Implementation bugs** → Module Developer
- **Infrastructure problems** → DevOps Engineer
### When DevOps Engineer Hits Blockers
- **Quality concerns** → Quality Assurance
- **Educational conflicts** → Education Architect
- **Documentation needs** → Documentation Publisher
## Decision Matrix: Who Owns What (RACI)
| Decision Type | Owner | Consulted | Informed |
|---------------|-------|-----------|----------|
| Learning objectives | Education Architect | All | User |
| Educational format | Education Architect | Module Developer | All |
| Implementation approach | Module Developer | Education Architect | QA, DevOps |
| Code quality standards | Quality Assurance | Module Developer | All |
| NBGrader configuration | DevOps Engineer | Module Developer | QA |
| Release timing | Workflow Coordinator | All | User |
| Documentation structure | Documentation Publisher | Education Architect | All |
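For a programmatic view, the matrix can be encoded as a lookup table. This is a hypothetical illustration (the structure and `owner_of` helper are not part of any TinyTorch tooling); the rows shown are copied from the table above:

```python
# Illustrative encoding of a few rows of the decision matrix,
# so "Who should do [task]?" can be answered by lookup.
DECISIONS = {
    "learning objectives":     {"owner": "Education Architect",  "consulted": ["All"],                 "informed": ["User"]},
    "implementation approach": {"owner": "Module Developer",     "consulted": ["Education Architect"], "informed": ["QA", "DevOps"]},
    "nbgrader configuration":  {"owner": "DevOps Engineer",      "consulted": ["Module Developer"],    "informed": ["QA"]},
    "release timing":          {"owner": "Workflow Coordinator", "consulted": ["All"],                 "informed": ["User"]},
}

def owner_of(decision: str) -> str:
    """Look up the agent who owns a given decision type."""
    return DECISIONS[decision]["owner"]

print(owner_of("nbgrader configuration"))  # DevOps Engineer
```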
## Common Workflow Questions
### "What's the next step?"
Check module state:
- If PLANNED → Module Developer implements
- If READY_FOR_QA → Quality Assurance validates
- If QA_APPROVED → DevOps Engineer prepares release
- If INFRASTRUCTURE_READY → Documentation Publisher creates materials
### "Who should do [task]?"
Reference the RACI matrix above and agent responsibilities.
### "Is module ready for [next phase]?"
Check handoff criteria for current phase completion.
### "Something's blocking progress - who fixes it?"
Use escalation paths based on problem type.
### "User wants to change requirements - what's the process?"
1. Workflow Coordinator assesses impact
2. Education Architect updates educational spec
3. Affected agents re-work their contributions
4. Quality gates reset as needed
## Workflow Commands
### Status Checking
```bash
tito workflow status [module] # Show current state
tito workflow next [module] # Show next step
tito workflow validate [module] # Check gate criteria
```
### Agent Assignment
```bash
tito workflow assign [agent] [module] [task]
tito workflow handoff [from_agent] [to_agent] [module]
```
### Progress Tracking
```bash
tito workflow gates [module] # Show gate status
tito workflow blockers [module] # Show current blockers
tito workflow timeline [module] # Show expected completion
```
## User Interface
### When User Asks Workflow Questions
**You respond with:**
1. Current module state
2. Who's responsible for next action
3. Expected timeline
4. Any blockers or dependencies
5. Clear next steps
### When User Wants to Make Changes
**You guide them through:**
1. Impact assessment
2. Which agents need to be involved
3. What work needs to be redone
4. Updated timeline
5. Process for implementation
## Success Metrics
**Workflow Efficiency:**
- Average time from user request to published module
- Number of handoff delays
- Quality gate pass rate
- Rework frequency
**Agent Productivity:**
- Clear handoff criteria met %
- Escalation resolution time
- Agent utilization rates
- Bottleneck identification
## Your Value Proposition
**To the User:**
- Single point of contact for workflow questions
- Clear visibility into progress and next steps
- Predictable delivery timelines
- Efficient problem resolution
**To the Agents:**
- Clear handoff criteria
- No ambiguity about responsibilities
- Efficient escalation paths
- Focused work without workflow confusion
**To the Project:**
- Consistent quality through process
- Scalable development approach
- Reduced coordination overhead
- Faster time to delivery
You are the **air traffic controller** of TinyTorch development - making sure everything flows smoothly and everyone knows where they're going.

FIVE_CS_FORMAT_STANDARD.md Normal file

@@ -0,0 +1,129 @@
# The 5 C's Format Standard for TinyTorch
## Standard Structure
Use this exact format before every major implementation:
```markdown
### Before We Code: The 5 C's
```python
# CONCEPT: What is [Component]?
# Brief, clear definition with analogy to familiar concepts
# CODE STRUCTURE: What We're Building
class ComponentName:
    def method1(): ...  # Key method 1
    def method2(): ...  # Key method 2
# Properties: .prop1, .prop2
# CONNECTIONS: Real-World Equivalents
# PyTorch equivalent - same concept, production optimized
# TensorFlow equivalent - industry alternative
# NumPy/other relationship - how it relates to known tools
# CONSTRAINTS: Key Implementation Requirements
# - Technical requirement 1 with why it matters
# - Technical requirement 2 with why it matters
# - Technical requirement 3 with why it matters
# CONTEXT: Why This Matters in ML Systems
# Specific applications in ML:
# - Use case 1: How it's used in neural networks
# - Use case 2: How it's used in training
# - Use case 3: How it's used in production
```
**Compelling closing statement about impact.**
```
## Example: Tensor Implementation
```markdown
### Before We Code: The 5 C's
```python
# CONCEPT: What is a Tensor?
# Tensors are N-dimensional arrays that carry data through neural networks.
# Think NumPy arrays with ML superpowers - same math, more capabilities.
# CODE STRUCTURE: What We're Building
class Tensor:
    def __init__(self, data): ...   # Create from any data type
    def __add__(self, other): ...   # Enable tensor + tensor
    def __mul__(self, other): ...   # Enable tensor * tensor
# Properties: .shape, .size, .dtype, .data
# CONNECTIONS: Real-World Equivalents
# torch.Tensor (PyTorch) - same concept, production optimized
# tf.Tensor (TensorFlow) - distributed computing focus
# np.ndarray (NumPy) - we wrap this with ML operations
# CONSTRAINTS: Key Implementation Requirements
# - Handle broadcasting (auto-shape matching for operations)
# - Support multiple data types (float32, int32, etc.)
# - Efficient memory usage (copy only when necessary)
# - Natural math notation (tensor + tensor should just work)
# CONTEXT: Why This Matters in ML Systems
# Every ML operation flows through tensors:
# - Neural networks: All computations operate on tensors
# - Training: Gradients flow through tensor operations
# - Hardware: GPUs optimized for tensor math
# - Production: Millions of tensor ops per second in real systems
```
**You're building the universal language of machine learning.**
```
## Key Design Principles
### 1. Code-Comment Integration
- Present concepts within code structure
- Show exactly where each principle applies
- Feel like practical guidance, not academic theory
### 2. Scannable Format
- Each C is clearly labeled
- Bullet points for easy scanning
- Concise but complete information
### 3. Implementation Focus
- CODE STRUCTURE shows actual methods being built
- CONSTRAINTS are technical requirements, not abstract concepts
- CONTEXT explains specific ML applications
### 4. Professional Connection
- CONNECTIONS always include PyTorch/TensorFlow equivalents
- Show how student code relates to production systems
- Emphasize real-world relevance
### 5. Motivational Closing
- End with compelling statement about impact
- Connect to bigger picture of ML systems
- Build student excitement for implementation
## When to Use
- **Always before major class implementations**
- Before complex algorithms or mathematical concepts
- When introducing new ML paradigms
- Before components that integrate with other modules
## When NOT to Use
- Before simple utility functions
- For minor method implementations within a class
- When students are already familiar with the concept
- For debugging or testing functions
## Implementation Checklist
- [ ] CONCEPT: Clear definition with analogy
- [ ] CODE STRUCTURE: Shows actual methods being built
- [ ] CONNECTIONS: Includes PyTorch/TensorFlow equivalents
- [ ] CONSTRAINTS: Lists 3-4 technical requirements
- [ ] CONTEXT: Explains specific ML applications
- [ ] Compelling closing statement
- [ ] Fits in code comment format
- [ ] Scannable and concise
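The checklist above can be partially automated. As a sketch (an assumption, not part of TinyTorch tooling), a helper could scan a "Before We Code" block for the five required section labels:

```python
# Illustrative checker for the 5 C's section labels defined above.
# It only verifies that each labeled section is present, not its quality.
REQUIRED_LABELS = [
    "# CONCEPT:", "# CODE STRUCTURE:", "# CONNECTIONS:",
    "# CONSTRAINTS:", "# CONTEXT:",
]

def missing_cs(block: str) -> list:
    """Return the labels absent from a 5 C's block (empty list = complete)."""
    return [label for label in REQUIRED_LABELS if label not in block]

example = "# CONCEPT: What is a Tensor?\n# CODE STRUCTURE: What We're Building"
print(missing_cs(example))  # the three sections still to write
```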

WORKFLOW_SUMMARY.md Normal file

@@ -0,0 +1,129 @@
# TinyTorch Development Workflow
## The Complete Process
### 🎯 **Workflow Coordinator** - Your Single Point of Contact
**When you ask: "What's next?" or "Who does this?" → Talk to Workflow Coordinator**
They know:
- Complete 5-phase workflow
- Who does what when
- Current module status
- Quality gate requirements
- How to escalate issues
## The 5-Phase Workflow
### **Phase 1: DESIGN** → Education Architect
```
User Request → Workflow Coordinator → Education Architect
```
**Delivers**: Educational specifications document
- Learning objectives defined
- Content structure outlined
- Lab sections specified
- Assessment criteria established
### **Phase 2: IMPLEMENTATION** → Module Developer
```
Educational Spec → Workflow Coordinator → Module Developer
```
**Delivers**: Complete module with educational scaffolding
- Code implemented with BEGIN/END SOLUTION blocks
- 5 C's format applied
- Lab-style content added
- Tests working
- NBGrader metadata correct
### **Phase 3: VALIDATION** → Quality Assurance
```
Complete Module → Workflow Coordinator → Quality Assurance
```
**Delivers**: QA-approved module
- NBGrader compatibility verified
- Educational effectiveness confirmed
- Technical correctness validated
- Integration tested
### **Phase 4: RELEASE** → DevOps Engineer
```
QA-Approved Module → Workflow Coordinator → DevOps Engineer
```
**Delivers**: Student-ready release
- Student versions generated via NBGrader
- Autograding workflow tested
- Distribution packages created
- Infrastructure updated
### **Phase 5: PUBLISHING** → Documentation Publisher
```
Released Module → Workflow Coordinator → Documentation Publisher
```
**Delivers**: Public documentation
- Jupyter Book website updated
- Instructor materials created
- API documentation generated
- Community access enabled
## Quality Gates
**Gate 1**: Educational design complete ✅
**Gate 2**: Implementation complete ✅
**Gate 3**: Quality validation passed ✅
**Gate 4**: Release ready ✅
**Gate 5**: Publicly available ✅
## Who You Talk To
### **General workflow questions** → Workflow Coordinator
- "What's the next step?"
- "Who should do this task?"
- "What's blocking progress?"
- "When will this be done?"
### **Educational design questions** → Education Architect
- "How should we structure learning?"
- "What lab content is needed?"
- "Are learning objectives clear?"
### **Implementation questions** → Module Developer
- "How should this be coded?"
- "Is NBGrader setup correct?"
- "Are tests sufficient?"
### **Quality concerns** → Quality Assurance
- "Does this meet standards?"
- "Will students be able to learn from this?"
- "Is integration working?"
### **Release issues** → DevOps Engineer
- "Can students access this?"
- "Is autograding working?"
- "Are packages building?"
### **Documentation needs** → Documentation Publisher
- "Is this ready for public use?"
- "Do instructors have what they need?"
- "Is the website updated?"
## The Answer to Your Question
**Q: "What's the workflow once a module is generated?"**
**A: Education Architect reviews first, then it flows through the pipeline:**
```
Module Developer creates → Education Architect reviews educational design →
Quality Assurance validates → DevOps Engineer prepares release →
Documentation Publisher makes it public
```
**Your dedicated workflow agent**: **Workflow Coordinator** - they know the complete flow and can answer any process questions.
## Current Module Status Example
**Modules 01 & 02**: Currently in Phase 2 (Implementation), with improvements in progress
**Next**: Move to Phase 3 (Quality Assurance validation)
**Who handles**: Workflow Coordinator orchestrates the handoff
**You always talk to Workflow Coordinator for "what's next" questions!**


@@ -61,7 +61,7 @@ import psutil
import os
from typing import Dict, Any
# %% nbgrader={"grade": false, "grade_id": "setup-imports", "locked": false, "schema_version": 3, "solution": false, "task": false}
# %% nbgrader={"grade": false, "grade_id": "setup-verification", "locked": false, "schema_version": 3, "solution": false, "task": false}
print("🔥 TinyTorch Setup Module")
print(f"Python version: {sys.version_info.major}.{sys.version_info.minor}")
print(f"Platform: {platform.system()}")
@@ -160,10 +160,6 @@ Connects to broader ML engineering:
Let's start configuring your TinyTorch system!
"""
# %% [markdown]
"""
## 🔧 DEVELOPMENT
"""
# %% [markdown]
"""
@@ -209,6 +205,49 @@ Your **personal information** identifies you as the developer and configures you
Now let's implement your personal configuration!
"""
# %% [markdown]
"""
### Before We Code: The 5 C's
```python
# CONCEPT: What is Personal Information Configuration?
# Developer identity configuration that identifies you as the creator and
# configures your TinyTorch installation. Think Git commit attribution -
# every professional system needs to know who built it.
# CODE STRUCTURE: What We're Building
def personal_info() -> Dict[str, str]:  # Returns developer identity
    return {                            # Dictionary with required fields
        'developer': 'Your Name',       # Your actual name
        'email': 'your@domain.com',     # Contact information
        'institution': 'Your Place',    # Affiliation
        'system_name': 'YourName-Dev',  # Unique system identifier
        'version': '1.0.0'              # Configuration version
    }
# CONNECTIONS: Real-World Equivalents
# Git commits - author name and email in every commit
# Docker images - maintainer information in container metadata
# Python packages - author info in setup.py and pyproject.toml
# Model cards - creator information for ML models
# CONSTRAINTS: Key Implementation Requirements
# - Use actual information (not placeholder text)
# - Email must be valid format (contains @ and domain)
# - System name should be unique and descriptive
# - All values must be strings, version stays '1.0.0'
# CONTEXT: Why This Matters in ML Systems
# Professional ML development requires attribution:
# - Model ownership: Who built this neural network?
# - Collaboration: Others can contact you about issues
# - Professional standards: Industry practice for all software
# - System customization: Makes your TinyTorch installation unique
```
**You're establishing your identity in the ML systems world.**
"""
# %% nbgrader={"grade": false, "grade_id": "personal-info", "locked": false, "schema_version": 3, "solution": true, "task": false}
#| export
def personal_info() -> Dict[str, str]:
@@ -230,10 +269,10 @@ def personal_info() -> Dict[str, str]:
EXAMPLE OUTPUT:
{
'developer': 'Vijay Janapa Reddi',
'email': 'vj@eecs.harvard.edu',
'institution': 'Harvard University',
'system_name': 'VJ-TinyTorch-Dev',
'developer': 'Student Name',
'email': 'student@university.edu',
'institution': 'University Name',
'system_name': 'StudentName-TinyTorch-Dev',
'version': '1.0.0'
}
@@ -252,14 +291,58 @@ def personal_info() -> Dict[str, str]:
"""
### BEGIN SOLUTION
return {
'developer': 'Vijay Janapa Reddi',
'email': 'vj@eecs.harvard.edu',
'institution': 'Harvard University',
'system_name': 'VJ-TinyTorch-Dev',
'developer': 'Student Name',
'email': 'student@university.edu',
'institution': 'University Name',
'system_name': 'StudentName-TinyTorch-Dev',
'version': '1.0.0'
}
### END SOLUTION
# %% [markdown]
"""
### 🧪 Unit Test: Personal Information Configuration
This test validates your `personal_info()` function implementation, ensuring it returns properly formatted developer information for system attribution and collaboration.
"""
# %% nbgrader={"grade": true, "grade_id": "test-personal-info-immediate", "locked": true, "points": 5, "schema_version": 3, "solution": false, "task": false}
def test_unit_personal_info_basic():
    """Test personal_info function implementation."""
    print("🔬 Unit Test: Personal Information...")

    # Test personal_info function
    personal = personal_info()

    # Test return type
    assert isinstance(personal, dict), "personal_info should return a dictionary"

    # Test required keys
    required_keys = ['developer', 'email', 'institution', 'system_name', 'version']
    for key in required_keys:
        assert key in personal, f"Dictionary should have '{key}' key"

    # Test non-empty values
    for key, value in personal.items():
        assert isinstance(value, str), f"Value for '{key}' should be a string"
        assert len(value) > 0, f"Value for '{key}' cannot be empty"

    # Test email format
    assert '@' in personal['email'], "Email should contain @ symbol"
    assert '.' in personal['email'], "Email should contain domain"

    # Test version format
    assert personal['version'] == '1.0.0', "Version should be '1.0.0'"

    # Test system name (should be unique/personalized)
    assert len(personal['system_name']) > 5, "System name should be descriptive"

    print("✅ Personal info function tests passed!")
    print(f"✅ TinyTorch configured for: {personal['developer']}")

# Run the test
test_unit_personal_info_basic()
# %% [markdown]
"""
## Step 3: System Information Queries
@@ -339,6 +422,49 @@ memory_gb = round(memory_bytes / (1024**3), 1)
Now let's implement system information queries!
"""
# %% [markdown]
"""
### Before We Code: The 5 C's
```python
# CONCEPT: What is System Information?
# Hardware and software environment detection for ML systems.
# Think computer specifications for gaming - ML needs to know what
# resources are available for optimal performance.
# CODE STRUCTURE: What We're Building
def system_info() -> Dict[str, Any]:  # Queries system specs
    return {                          # Hardware/software details
        'python_version': '3.9.7',    # Python compatibility
        'platform': 'Darwin',         # Operating system
        'architecture': 'arm64',      # CPU architecture
        'cpu_count': 8,               # Parallel processing cores
        'memory_gb': 16.0             # Available RAM
    }
# CONNECTIONS: Real-World Equivalents
# torch.get_num_threads() (PyTorch) - uses CPU count for optimization
# tf.config.list_physical_devices() (TensorFlow) - queries hardware
# psutil.cpu_count() (System monitoring) - same underlying queries
# MLflow system tracking - documents environment for reproducibility
# CONSTRAINTS: Key Implementation Requirements
# - Use actual system queries (not hardcoded values)
# - Convert memory from bytes to GB for readability
# - Round memory to 1 decimal place for clean output
# - Return proper data types (strings, int, float)
# CONTEXT: Why This Matters in ML Systems
# Hardware awareness enables performance optimization:
# - Training: More CPU cores = faster data processing
# - Memory: Determines maximum model and batch sizes
# - Debugging: System specs help troubleshoot performance issues
# - Reproducibility: Document exact environment for experiment tracking
```
**You're building hardware-aware ML systems that adapt to their environment.**
"""
# %% nbgrader={"grade": false, "grade_id": "system-info", "locked": false, "schema_version": 3, "solution": true, "task": false}
#| export
def system_info() -> Dict[str, Any]:
@@ -412,6 +538,51 @@ def system_info() -> Dict[str, Any]:
}
### END SOLUTION
# %% [markdown]
"""
### 🧪 Unit Test: System Information Query
This test validates your `system_info()` function implementation, ensuring it accurately detects and reports hardware and software specifications for performance optimization and debugging.
"""
# %% nbgrader={"grade": true, "grade_id": "test-system-info-immediate", "locked": true, "points": 5, "schema_version": 3, "solution": false, "task": false}
def test_unit_system_info_basic():
    """Test system_info function implementation."""
    print("🔬 Unit Test: System Information...")

    # Test system_info function
    sys_info = system_info()

    # Test return type
    assert isinstance(sys_info, dict), "system_info should return a dictionary"

    # Test required keys
    required_keys = ['python_version', 'platform', 'architecture', 'cpu_count', 'memory_gb']
    for key in required_keys:
        assert key in sys_info, f"Dictionary should have '{key}' key"

    # Test data types
    assert isinstance(sys_info['python_version'], str), "python_version should be string"
    assert isinstance(sys_info['platform'], str), "platform should be string"
    assert isinstance(sys_info['architecture'], str), "architecture should be string"
    assert isinstance(sys_info['cpu_count'], int), "cpu_count should be integer"
    assert isinstance(sys_info['memory_gb'], (int, float)), "memory_gb should be number"

    # Test reasonable values
    assert sys_info['cpu_count'] > 0, "CPU count should be positive"
    assert sys_info['memory_gb'] > 0, "Memory should be positive"
    assert len(sys_info['python_version']) > 0, "Python version should not be empty"

    # Test that values are actually queried (not hardcoded)
    actual_version = f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}"
    assert sys_info['python_version'] == actual_version, "Python version should match actual system"

    print("✅ System info function tests passed!")
    print(f"✅ Python: {sys_info['python_version']} on {sys_info['platform']}")

# Run the test
test_unit_system_info_basic()
# %% [markdown]
"""
## 🧪 Testing Your Configuration Functions
@@ -451,100 +622,11 @@ Now let's test your configuration functions!
# %% [markdown]
"""
### 🧪 Test Your Configuration Functions
### 🎯 Additional Comprehensive Tests
Once you implement both functions above, run this cell to test them:
These comprehensive tests validate that your configuration functions work together and integrate properly with the TinyTorch system.
"""
# %% [markdown]
"""
### 🧪 Unit Test: Personal Information Configuration
This test validates your `personal_info()` function implementation, ensuring it returns properly formatted developer information for system attribution and collaboration.
"""
# %%
def test_unit_personal_info_basic():
    """Test personal_info function implementation."""
    print("🔬 Unit Test: Personal Information...")

    # Test personal_info function
    personal = personal_info()

    # Test return type
    assert isinstance(personal, dict), "personal_info should return a dictionary"

    # Test required keys
    required_keys = ['developer', 'email', 'institution', 'system_name', 'version']
    for key in required_keys:
        assert key in personal, f"Dictionary should have '{key}' key"

    # Test non-empty values
    for key, value in personal.items():
        assert isinstance(value, str), f"Value for '{key}' should be a string"
        assert len(value) > 0, f"Value for '{key}' cannot be empty"

    # Test email format
    assert '@' in personal['email'], "Email should contain @ symbol"
    assert '.' in personal['email'], "Email should contain domain"

    # Test version format
    assert personal['version'] == '1.0.0', "Version should be '1.0.0'"

    # Test system name (should be unique/personalized)
    assert len(personal['system_name']) > 5, "System name should be descriptive"

    print("✅ Personal info function tests passed!")
    print(f"✅ TinyTorch configured for: {personal['developer']}")

# Run the test
test_unit_personal_info_basic()
# %% [markdown]
"""
### 🧪 Unit Test: System Information Query
This test validates your `system_info()` function implementation, ensuring it accurately detects and reports hardware and software specifications for performance optimization and debugging.
"""
# %%
def test_unit_system_info_basic():
    """Test system_info function implementation."""
    print("🔬 Unit Test: System Information...")

    # Test system_info function
    sys_info = system_info()

    # Test return type
    assert isinstance(sys_info, dict), "system_info should return a dictionary"

    # Test required keys
    required_keys = ['python_version', 'platform', 'architecture', 'cpu_count', 'memory_gb']
    for key in required_keys:
        assert key in sys_info, f"Dictionary should have '{key}' key"

    # Test data types
    assert isinstance(sys_info['python_version'], str), "python_version should be string"
    assert isinstance(sys_info['platform'], str), "platform should be string"
    assert isinstance(sys_info['architecture'], str), "architecture should be string"
    assert isinstance(sys_info['cpu_count'], int), "cpu_count should be integer"
    assert isinstance(sys_info['memory_gb'], (int, float)), "memory_gb should be number"

    # Test reasonable values
    assert sys_info['cpu_count'] > 0, "CPU count should be positive"
    assert sys_info['memory_gb'] > 0, "Memory should be positive"
    assert len(sys_info['python_version']) > 0, "Python version should not be empty"

    # Test that values are actually queried (not hardcoded)
    actual_version = f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}"
    assert sys_info['python_version'] == actual_version, "Python version should match actual system"

    print("✅ System info function tests passed!")
    print(f"✅ Python: {sys_info['python_version']} on {sys_info['platform']}")

# Run the test
test_unit_system_info_basic()
# %% [markdown]
"""
## 🎯 MODULE SUMMARY: Setup Configuration


@@ -62,10 +62,6 @@ from tinytorch.core.layers import Dense, Conv2D
- **Foundation:** Every other module depends on Tensor
"""
# %% [markdown]
"""
## 🔧 DEVELOPMENT
"""
# %% [markdown]
"""
@@ -329,6 +325,44 @@ By implementing this Tensor class, you'll learn:
Let's implement our tensor foundation!
"""
# %% [markdown]
"""
### Before We Code: The 5 C's
```python
# CONCEPT: What is a Tensor?
# Tensors are N-dimensional arrays that carry data through neural networks.
# Think NumPy arrays with ML superpowers - same math, more capabilities.
# CODE STRUCTURE: What We're Building
class Tensor:
    def __init__(self, data): ...   # Create from any data type
    def __add__(self, other): ...   # Enable tensor + tensor
    def __mul__(self, other): ...   # Enable tensor * tensor
# Properties: .shape, .size, .dtype, .data
# CONNECTIONS: Real-World Equivalents
# torch.Tensor (PyTorch) - same concept, production optimized
# tf.Tensor (TensorFlow) - distributed computing focus
# np.ndarray (NumPy) - we wrap this with ML operations
# CONSTRAINTS: Key Implementation Requirements
# - Handle broadcasting (auto-shape matching for operations)
# - Support multiple data types (float32, int32, etc.)
# - Efficient memory usage (copy only when necessary)
# - Natural math notation (tensor + tensor should just work)
# CONTEXT: Why This Matters in ML Systems
# Every ML operation flows through tensors:
# - Neural networks: All computations operate on tensors
# - Training: Gradients flow through tensor operations
# - Hardware: GPUs optimized for tensor math
# - Production: Millions of tensor ops per second in real systems
```
**You're building the universal language of machine learning.**
"""
# %% nbgrader={"grade": false, "grade_id": "tensor-class", "locked": false, "schema_version": 3, "solution": true, "task": false}
#| export
class Tensor:
@@ -656,7 +690,7 @@ Let's test your tensor creation implementation right away! This gives you immedi
**This is a unit test** - it tests one specific function (tensor creation) in isolation.
"""
# %% nbgrader={"grade": true, "grade_id": "test-tensor-creation-immediate", "locked": true, "points": 5, "schema_version": 3, "solution": false, "task": false}
# %% nbgrader={"grade": true, "grade_id": "test_unit_tensor_creation_immediate", "locked": true, "points": 5, "schema_version": 3, "solution": false, "task": false}
# Test tensor creation immediately after implementation
print("🔬 Unit Test: Tensor Creation...")
@@ -698,7 +732,7 @@ Now let's test that your tensor properties work correctly. This tests the @prope
**This is a unit test** - it tests specific properties (shape, size, dtype, data) in isolation.
"""
# %% nbgrader={"grade": true, "grade_id": "test-tensor-properties-immediate", "locked": true, "points": 5, "schema_version": 3, "solution": false, "task": false}
# %% nbgrader={"grade": true, "grade_id": "test_unit_tensor_properties_immediate", "locked": true, "points": 5, "schema_version": 3, "solution": false, "task": false}
# Test tensor properties immediately after implementation
print("🔬 Unit Test: Tensor Properties...")
@@ -744,7 +778,7 @@ Let's test your tensor arithmetic operations. This tests the __add__, __mul__, _
**This is a unit test** - it tests specific arithmetic operations in isolation.
"""
# %% nbgrader={"grade": true, "grade_id": "test-tensor-arithmetic-immediate", "locked": true, "points": 5, "schema_version": 3, "solution": false, "task": false}
# %% nbgrader={"grade": true, "grade_id": "test_unit_tensor_arithmetic_immediate", "locked": true, "points": 5, "schema_version": 3, "solution": false, "task": false}
# Test tensor arithmetic immediately after implementation
print("🔬 Unit Test: Tensor Arithmetic...")
@@ -827,7 +861,7 @@ Congratulations! You've successfully implemented the core Tensor class for TinyT
This test validates your `Tensor` class constructor, ensuring it correctly handles scalars, vectors, matrices, and higher-dimensional arrays with proper shape detection.
"""
# %%
# %% nbgrader={"grade": true, "grade_id": "test_unit_tensor_creation", "locked": true, "points": 5, "schema_version": 3, "solution": false, "task": false}
def test_unit_tensor_creation():
"""Comprehensive test of tensor creation with all data types and shapes."""
print("🔬 Testing comprehensive tensor creation...")
@@ -855,7 +889,7 @@ test_unit_tensor_creation()
This test validates your tensor property methods (shape, size, dtype, data), ensuring they correctly reflect the tensor's dimensional structure and data characteristics.
"""
# %%
# %% nbgrader={"grade": true, "grade_id": "test_unit_tensor_properties", "locked": true, "points": 5, "schema_version": 3, "solution": false, "task": false}
def test_unit_tensor_properties():
"""Comprehensive test of tensor properties (shape, size, dtype, data access)."""
print("🔬 Testing comprehensive tensor properties...")
@@ -885,7 +919,7 @@ test_unit_tensor_properties()
This test validates your tensor arithmetic implementation (addition, multiplication, subtraction, division) and operator overloading, ensuring mathematical operations work correctly with proper broadcasting.
"""
# %%
# %% nbgrader={"grade": true, "grade_id": "test_unit_tensor_arithmetic", "locked": true, "points": 5, "schema_version": 3, "solution": false, "task": false}
def test_unit_tensor_arithmetic():
"""Comprehensive test of tensor arithmetic operations."""
print("🔬 Testing comprehensive tensor arithmetic...")
@@ -917,7 +951,7 @@ def test_unit_tensor_arithmetic():
# Run the test
test_unit_tensor_arithmetic()
# %%
# %% nbgrader={"grade": true, "grade_id": "test_module_tensor_numpy_integration", "locked": true, "points": 10, "schema_version": 3, "solution": false, "task": false}
def test_module_tensor_numpy_integration():
"""
Integration test for tensor operations with NumPy arrays.