TinyTorch/tinytorch/core/setup.py
Commit cdbdba0b35 by Vijay Janapa Reddi: Comprehensive TinyTorch framework evaluation and analysis
Assessment Results:
- 75% real implementation vs 25% educational scaffolding
- Working end-to-end training on CIFAR-10 dataset
- Comprehensive architecture coverage (MLPs, CNNs, Attention)
- Production-oriented features (MLOps, profiling, compression)
- Professional development workflow with CLI tools

Key Findings:
- Students build functional ML framework from scratch
- Real datasets and meaningful evaluation capabilities
- Progressive complexity through 16-module structure
- Systems engineering principles throughout
- Ready for serious ML systems education

Gaps Identified:
- GPU acceleration and distributed training
- Advanced optimizers and model serialization
- Some memory optimization opportunities

Recommendation: Excellent foundation for ML systems engineering education
2025-09-16 22:41:07 -04:00


# AUTOGENERATED! DO NOT EDIT! File to edit: ../../modules/source/01_setup/setup_dev.ipynb.

# %% auto 0
__all__ = ['personal_info', 'system_info']

# %% ../../modules/source/01_setup/setup_dev.ipynb 1
import sys
import platform
import psutil
from typing import Dict, Any

# %% ../../modules/source/01_setup/setup_dev.ipynb 7
def personal_info() -> Dict[str, str]:
    """
    Return personal information for this TinyTorch installation.

    This function configures your personal TinyTorch installation with your
    identity. It's the foundation of proper ML engineering practice: every
    system needs to know who built it and how to contact them.

    TODO: Implement personal information configuration.

    STEP-BY-STEP IMPLEMENTATION:
    1. Create a dictionary with your personal details
    2. Include all required keys: developer, email, institution, system_name, version
    3. Use your actual information (not placeholder text)
    4. Make system_name unique and descriptive
    5. Keep version as '1.0.0' for now

    EXAMPLE USAGE:
    ```python
    # Get your personal configuration
    info = personal_info()
    print(info['developer'])    # Expected: "Your Name" (not placeholder)
    print(info['email'])        # Expected: "you@domain.com" (valid email)
    print(info['system_name'])  # Expected: "YourName-Dev" (unique identifier)
    print(info)                 # Expected: complete dict with 5 fields
    # Output: {
    #     'developer': 'Your Name',
    #     'email': 'you@domain.com',
    #     'institution': 'Your Institution',
    #     'system_name': 'YourName-TinyTorch-Dev',
    #     'version': '1.0.0'
    # }
    ```

    IMPLEMENTATION HINTS:
    - Replace the example with your real information
    - Use a descriptive system_name (e.g., 'YourName-TinyTorch-Dev')
    - Keep the email format valid (contains @ and a domain)
    - Make sure all values are strings
    - Consider how this info will be used in debugging and collaboration

    LEARNING CONNECTIONS:
    - This is like the 'author' field in Git commits
    - Similar to maintainer info in Docker images
    - Parallels author info in Python packages
    - Foundation for professional ML development
    """
    ### BEGIN SOLUTION
    return {
        'developer': 'Student Name',
        'email': 'student@university.edu',
        'institution': 'University Name',
        'system_name': 'StudentName-TinyTorch-Dev',
        'version': '1.0.0'
    }
    ### END SOLUTION
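# %% Illustrative sketch (not part of the autogenerated module).
# The docstring above asks for five required keys, all-string values, and a
# valid-looking email. A hypothetical checker for those rules might look like
# this; the name `validate_personal_info` is our own, not TinyTorch API.
REQUIRED_KEYS = {'developer', 'email', 'institution', 'system_name', 'version'}

def validate_personal_info(info):
    """Return True if `info` satisfies the docstring's requirements."""
    if REQUIRED_KEYS - set(info):  # every required key must be present
        return False
    if not all(isinstance(v, str) for v in info.values()):
        return False
    # Email must contain '@' followed by a dotted domain.
    local, sep, domain = info['email'].partition('@')
    return bool(local) and sep == '@' and '.' in domain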

# %% ../../modules/source/01_setup/setup_dev.ipynb 12
def system_info() -> Dict[str, Any]:
    """
    Query and return system information for this TinyTorch installation.

    This function gathers crucial hardware and software information that
    affects ML performance, compatibility, and debugging. It's the foundation
    of hardware-aware ML systems.

    TODO: Implement system information queries.

    STEP-BY-STEP IMPLEMENTATION:
    1. Get Python version using sys.version_info
    2. Get platform using platform.system()
    3. Get architecture using platform.machine()
    4. Get CPU count using psutil.cpu_count()
    5. Get memory using psutil.virtual_memory().total
    6. Convert memory from bytes to GB (divide by 1024**3)
    7. Return all information in a dictionary

    EXAMPLE USAGE:
    ```python
    # Query system information
    sys_info = system_info()
    print(f"Python: {sys_info['python_version']}")  # Expected: "3.x.x"
    print(f"Platform: {sys_info['platform']}")      # Expected: "Darwin"/"Linux"/"Windows"
    print(f"CPUs: {sys_info['cpu_count']}")         # Expected: 4, 8, 16, etc.
    print(f"Memory: {sys_info['memory_gb']} GB")    # Expected: 8.0, 16.0, 32.0, etc.

    # Full output example:
    print(sys_info)
    # Expected: {
    #     'python_version': '3.9.7',
    #     'platform': 'Darwin',
    #     'architecture': 'arm64',
    #     'cpu_count': 8,
    #     'memory_gb': 16.0
    # }
    ```

    IMPLEMENTATION HINTS:
    - Use f-string formatting for the Python version: f"{major}.{minor}.{micro}"
    - Memory conversion: bytes / (1024**3) = GB
    - Round memory to 1 decimal place for readability
    - Make sure data types are correct (strings for text, int for cpu_count, float for memory_gb)

    LEARNING CONNECTIONS:
    - This is like `torch.cuda.is_available()` in PyTorch
    - Similar to system info in MLflow experiment tracking
    - Parallels hardware detection in TensorFlow
    - Foundation for performance optimization in ML systems

    PERFORMANCE IMPLICATIONS:
    - cpu_count affects parallel processing capabilities
    - memory_gb determines maximum model and batch sizes
    - platform affects file system and process management
    - architecture influences numerical precision and optimization
    """
    ### BEGIN SOLUTION
    # Get Python version
    version_info = sys.version_info
    python_version = f"{version_info.major}.{version_info.minor}.{version_info.micro}"

    # Get platform information
    platform_name = platform.system()
    architecture = platform.machine()

    # Get CPU information
    cpu_count = psutil.cpu_count()

    # Get memory information (convert bytes to GB)
    memory_bytes = psutil.virtual_memory().total
    memory_gb = round(memory_bytes / (1024**3), 1)

    return {
        'python_version': python_version,
        'platform': platform_name,
        'architecture': architecture,
        'cpu_count': cpu_count,
        'memory_gb': memory_gb
    }
    ### END SOLUTION
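# %% Illustrative sketch (not part of the autogenerated module).
# The hints above spell out the expected value types and the bytes-to-GB
# conversion. Below is a hypothetical check of both, using a hand-built dict
# rather than a live psutil query so it runs anywhere; `check_system_info`
# and `bytes_to_gb` are our own names, not TinyTorch API.
def check_system_info(info):
    """Return True if `info` has the types the docstring requires."""
    return (isinstance(info.get('python_version'), str)
            and isinstance(info.get('platform'), str)
            and isinstance(info.get('architecture'), str)
            and isinstance(info.get('cpu_count'), int)
            and isinstance(info.get('memory_gb'), float))

def bytes_to_gb(n_bytes):
    """Convert bytes to GB, rounded to 1 decimal place (as the hints describe)."""
    return round(n_bytes / (1024 ** 3), 1)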