# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: percent
#       format_version: '1.3'
#       jupytext_version: 1.17.1
# ---
# %% [markdown]
"""
# Compression - Model Optimization and Efficient Deployment Strategies
Welcome to the Compression module! You'll implement techniques that make neural networks smaller, faster, and more efficient for deployment in resource-constrained environments.
## Learning Goals
- Systems understanding: How model size and computational requirements affect deployment costs, latency, and energy consumption in production systems
- Core implementation skill: Build pruning, quantization, and knowledge distillation techniques that reduce model footprint while preserving performance
- Pattern recognition: Understand the accuracy vs efficiency trade-offs that drive deployment decisions in real ML systems
- Framework connection: See how your compression implementations relate to PyTorch's optimization tools and mobile deployment strategies
- Performance insight: Learn why compression techniques can improve both inference speed and training efficiency
## Build → Use → Reflect
1. **Build**: Complete compression toolkit with magnitude pruning, quantization, and knowledge distillation
2. **Use**: Apply compression to trained neural networks and measure the accuracy vs efficiency trade-offs
3. **Reflect**: Why do modern ML systems require compression, and how do compression choices affect system design?
## What You'll Achieve
By the end of this module, you'll understand:
- Deep technical understanding of how compression techniques reduce computational and memory requirements without destroying learned representations
- Practical capability to optimize neural networks for deployment in mobile devices, edge systems, and cost-sensitive environments
- Systems insight into why compression is essential for practical ML deployment and how it affects system architecture decisions
- Performance consideration of how different compression techniques affect inference speed, memory usage, and accuracy
- Connection to production ML systems and how compression enables ML deployment at scale
## Systems Reality Check
💡 **Production Context**: Modern mobile AI relies heavily on compression - techniques like quantization can reduce model size by 4x while maintaining accuracy, enabling on-device inference
⚡ **Performance Note**: Compression often speeds up inference by reducing memory bandwidth requirements, even when computational complexity remains the same - memory is often the bottleneck
"""
# %% nbgrader={"grade": false, "grade_id": "compression-imports", "locked": false, "schema_version": 3, "solution": false, "task": false}
#| default_exp core.compression
#| export
import numpy as np
import sys
import os
from typing import List, Dict, Any, Optional, Union, Tuple
# Helper function to set up import paths
def setup_import_paths():
    """Set up import paths for development modules."""
    import sys
    import os
    # Add module directories to path
    base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    module_dirs = [
        '01_tensor', '02_activations', '03_layers', '04_networks',
        '05_cnn', '06_dataloader', '07_autograd', '08_optimizers', '09_training'
    ]
    for module_dir in module_dirs:
        sys.path.append(os.path.join(base_dir, module_dir))

# Set up paths
setup_import_paths()

# Import all the building blocks we need
try:
    from tinytorch.core.tensor import Tensor
    from tinytorch.core.layers import Dense
    from tinytorch.core.networks import Sequential
    from tinytorch.core.training import CrossEntropyLoss, Trainer
except ImportError:
    # For development, create mock classes or import from local modules
    try:
        from tensor_dev import Tensor
        from layers_dev import Dense
        from networks_dev import Sequential
        from training_dev import CrossEntropyLoss, Trainer
    except ImportError:
        # Create minimal mock classes for development
        class Tensor:
            def __init__(self, data):
                self.data = np.array(data)
                self.shape = self.data.shape
            def __str__(self):
                return f"Tensor({self.data})"

        class Dense:
            def __init__(self, input_size, output_size):
                self.input_size = input_size
                self.output_size = output_size
                self.weights = Tensor(np.random.randn(input_size, output_size) * 0.1)
                self.bias = Tensor(np.zeros(output_size))
            def __str__(self):
                return f"Dense({self.input_size}, {self.output_size})"

        class Sequential:
            def __init__(self, layers=None):
                self.layers = layers or []

        class CrossEntropyLoss:
            def __init__(self):
                pass

        class Trainer:
            def __init__(self, model, optimizer, loss_function):
                self.model = model
                self.optimizer = optimizer
                self.loss_function = loss_function
# %% nbgrader={"grade": false, "grade_id": "compression-setup", "locked": false, "schema_version": 3, "solution": false, "task": false}
print("🔥 TinyTorch Compression Module")
print(f"NumPy version: {np.__version__}")
print(f"Python version: {sys.version_info.major}.{sys.version_info.minor}")
print("Ready to compress neural networks!")
# %% [markdown]
"""
## 📦 Where This Code Lives in the Final Package
**Learning Side:** You work in `modules/source/10_compression/compression_dev.py`
**Building Side:** Code exports to `tinytorch.core.compression`
```python
# Final package structure:
from tinytorch.core.compression import (
    prune_weights_by_magnitude,  # Remove unimportant weights
    quantize_layer_weights,      # Reduce precision for memory savings
    DistillationLoss,            # Train compact models with teacher guidance
    prune_layer_neurons,         # Remove entire neurons/channels
    CompressionMetrics           # Measure model size and efficiency
)
from tinytorch.core.layers import Dense # Target for compression
from tinytorch.core.networks import Sequential # Model architectures
```
**Why this matters:**
- **Learning:** Focused module for understanding model efficiency
- **Production:** Proper organization like PyTorch's compression tools
- **Consistency:** All compression techniques live together in `core.compression`
- **Foundation:** Essential for deploying AI in resource-constrained environments
"""
# %% [markdown]
"""
## What is Model Compression?
### The Problem: AI Models Are Getting Huge
Modern neural networks are massive:
- **GPT-3**: 175 billion parameters (350GB memory)
- **ResNet-152**: 60 million parameters (240MB memory)
- **BERT-Large**: 340 million parameters (1.3GB memory)
But deployment environments have constraints:
- **Mobile phones**: Limited memory and battery
- **Edge devices**: No internet, minimal compute
- **Real-time systems**: Strict latency requirements
- **Cost optimization**: Expensive inference in cloud
### The Solution: Intelligent Compression
**Model compression** reduces model size while preserving performance:
- **Pruning**: Remove unimportant weights and neurons
- **Quantization**: Use fewer bits per parameter
- **Knowledge distillation**: Train small models to mimic large ones
- **Structured optimization**: Modify architectures for efficiency
### Real-World Impact
- **Mobile AI**: Apps like Google Translate work offline
- **Autonomous vehicles**: Real-time processing with limited compute
- **IoT devices**: Smart cameras, voice assistants, sensors
- **Cost savings**: Reduced inference costs in production systems
### What We'll Build
1. **Magnitude-based pruning**: Remove smallest weights
2. **Quantization**: Convert FP32 → INT8 for 75% memory reduction
3. **Knowledge distillation**: Large models teach small models
4. **Structured pruning**: Remove entire neurons systematically
5. **Compression metrics**: Measure efficiency and accuracy trade-offs
6. **Integrated optimization**: Combine techniques for maximum benefit
"""
# %% [markdown]
"""
## 🔧 DEVELOPMENT
"""
# %% [markdown]
"""
## Step 1: Understanding Model Size and Parameters
### What Makes Models Large?
Neural networks have millions of parameters:
- **Dense layers**: Weight matrices `(input_size, output_size)`
- **Bias vectors**: One per output neuron
- **CNN kernels**: Repeated across channels and filters
- **Embeddings**: Large vocabulary mappings
### The Memory Reality Check
Let's see how much memory different architectures use:
```python
# Simple MLP for MNIST
layer1 = Dense(784, 128) # 784 * 128 = 100,352 params
layer2 = Dense(128, 64) # 128 * 64 = 8,192 params
layer3 = Dense(64, 10) # 64 * 10 = 640 params
# Total: 109,184 params ≈ 437KB (FP32)
# Larger network for CIFAR-10
layer1 = Dense(3072, 512) # 3072 * 512 = 1,572,864 params
layer2 = Dense(512, 256) # 512 * 256 = 131,072 params
layer3 = Dense(256, 128) # 256 * 128 = 32,768 params
layer4 = Dense(128, 10) # 128 * 10 = 1,280 params
# Total: 1,737,984 params ≈ 7MB (FP32)
```
### Why Size Matters
- **Memory usage**: Each FP32 parameter uses 4 bytes
- **Storage**: Model files need to be downloaded/stored
- **Inference speed**: More parameters = more computation
- **Energy consumption**: Larger models drain battery faster
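The arithmetic behind these bullet points is easy to check directly. This standalone sketch (the `109_386` count is the MNIST MLP above, weights plus biases) converts a parameter count into memory at different precisions:
```python
params = 109_386  # MNIST MLP: 109,184 weights + 202 biases
for dtype, bytes_per_param in [("float32", 4), ("float16", 2), ("int8", 1)]:
    size_kb = params * bytes_per_param / 1024
    print(f"{dtype}: {size_kb:.1f} KB")
# float32: ~427 KB, float16: ~214 KB, int8: ~107 KB
```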
### The Efficiency Spectrum
Different applications need different efficiency levels:
- **Research**: Accuracy first, efficiency second
- **Production**: Balance accuracy and efficiency
- **Mobile**: Strict size constraints (< 10MB)
- **Edge**: Extreme efficiency requirements (< 1MB)
### Real-World Examples
- **MobileNet**: Designed for mobile deployment
- **DistilBERT**: 40% smaller and 60% faster than BERT while retaining ~97% of its performance
- **TinyML**: Models under 1MB for microcontrollers
- **Neural architecture search**: Automated efficiency optimization
Let's build tools to measure and analyze model size!
"""
# %% nbgrader={"grade": false, "grade_id": "compression-metrics", "locked": false, "schema_version": 3, "solution": true, "task": false}
#| export
class CompressionMetrics:
"""
Utilities for measuring model size, sparsity, and compression efficiency.
This class provides tools to analyze neural network models and understand
their memory footprint, parameter distribution, and compression potential.
"""
def __init__(self):
"""Initialize compression metrics analyzer."""
pass
def count_parameters(self, model: Sequential) -> Dict[str, int]:
"""
Count parameters in a neural network model.
Args:
model: Sequential model to analyze
Returns:
Dictionary with parameter counts per layer and total
TODO: Implement parameter counting for neural network analysis.
STEP-BY-STEP IMPLEMENTATION:
1. Initialize counters for different parameter types
2. Iterate through each layer in the model
3. Count weights and biases for each layer
4. Calculate total parameters across all layers
5. Return detailed breakdown dictionary
EXAMPLE OUTPUT:
{
'layer_0_weights': 100352,
'layer_0_bias': 128,
'layer_1_weights': 8192,
'layer_1_bias': 64,
'layer_2_weights': 640,
'layer_2_bias': 10,
'total_parameters': 109386,
'total_weights': 109184,
'total_bias': 202
}
IMPLEMENTATION HINTS:
- Use hasattr() to check if layer has weights/bias attributes
- Weight matrices have shape (input_size, output_size)
- Bias vectors have shape (output_size,)
- Use np.prod() to calculate total elements from shape
- Track layer index for detailed reporting
LEARNING CONNECTIONS:
- This is like `model.numel()` in PyTorch
- Understanding where parameters are concentrated
- Foundation for compression target selection
"""
### BEGIN SOLUTION
param_counts = {}
total_params = 0
total_weights = 0
total_bias = 0
for i, layer in enumerate(model.layers):
# Count weights if layer has them
if hasattr(layer, 'weights') and layer.weights is not None:
# Handle different weight formats
if hasattr(layer.weights, 'shape'):
weight_count = np.prod(layer.weights.shape)
else:
weight_count = np.prod(layer.weights.data.shape)
param_counts[f'layer_{i}_weights'] = weight_count
total_weights += weight_count
total_params += weight_count
# Count bias if layer has them
if hasattr(layer, 'bias') and layer.bias is not None:
# Handle different bias formats
if hasattr(layer.bias, 'shape'):
bias_count = np.prod(layer.bias.shape)
else:
bias_count = np.prod(layer.bias.data.shape)
param_counts[f'layer_{i}_bias'] = bias_count
total_bias += bias_count
total_params += bias_count
# Add summary statistics
param_counts['total_parameters'] = total_params
param_counts['total_weights'] = total_weights
param_counts['total_bias'] = total_bias
return param_counts
### END SOLUTION
def calculate_model_size(self, model: Sequential, dtype: str = 'float32') -> Dict[str, Any]:
"""
Calculate memory footprint of a neural network model.
Args:
model: Sequential model to analyze
dtype: Data type for size calculation ('float32', 'float16', 'int8')
Returns:
Dictionary with size information in different units
"""
# Get parameter count
param_info = self.count_parameters(model)
total_params = param_info['total_parameters']
# Determine bytes per parameter
bytes_per_param = {
'float32': 4,
'float16': 2,
'int8': 1
}.get(dtype, 4)
# Calculate sizes
total_bytes = total_params * bytes_per_param
size_kb = total_bytes / 1024
size_mb = size_kb / 1024
return {
'total_parameters': total_params,
'bytes_per_parameter': bytes_per_param,
'total_bytes': total_bytes,
'size_kb': round(size_kb, 2),
'size_mb': round(size_mb, 2),
'dtype': dtype
}
# %% [markdown]
"""
### 🧪 Unit Test: Compression Metrics Analysis
This test validates your `CompressionMetrics` class implementation, ensuring it accurately calculates model parameters, memory usage, and compression statistics for optimization analysis.
"""
# %% nbgrader={"grade": false, "grade_id": "test-compression-metrics", "locked": false, "schema_version": 3, "solution": false, "task": false}
def test_unit_compression_metrics():
"""Unit test for the CompressionMetrics class."""
print("🔬 Unit Test: Compression Metrics...")
# Create a simple model for testing
layers = [
Dense(784, 128), # 784 * 128 + 128 = 100,480 params
Dense(128, 64), # 128 * 64 + 64 = 8,256 params
Dense(64, 10) # 64 * 10 + 10 = 650 params
]
model = Sequential(layers)
# Test parameter counting
metrics = CompressionMetrics()
param_counts = metrics.count_parameters(model)
# Verify parameter counts
assert param_counts['layer_0_weights'] == 100352, f"Expected 100352, got {param_counts['layer_0_weights']}"
assert param_counts['layer_0_bias'] == 128, f"Expected 128, got {param_counts['layer_0_bias']}"
assert param_counts['total_parameters'] == 109386, f"Expected 109386, got {param_counts['total_parameters']}"
print("📈 Progress: CompressionMetrics ✓")
print("🎯 CompressionMetrics behavior:")
print(" - Counts parameters across all layers")
print(" - Provides detailed breakdown by layer")
print(" - Separates weight and bias counts")
print(" - Foundation for compression analysis")
print()
# Test will be run in main block
# %% [markdown]
"""
## Step 2: Magnitude-Based Pruning - Removing Unimportant Weights
### What is Magnitude-Based Pruning?
**Magnitude-based pruning** removes weights with the smallest absolute values, based on the hypothesis that small weights contribute less to the model's performance.
### The Algorithm
1. **Calculate magnitude**: `|weight|` for each parameter
2. **Set threshold**: Choose cutoff (e.g., 50th percentile)
3. **Create mask**: `mask = |weight| > threshold`
4. **Apply pruning**: `pruned_weight = weight * mask`
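In NumPy, those four steps are only a few lines. A minimal standalone sketch on a random matrix (not tied to the `Dense` layer you will prune below):
```python
import numpy as np

weights = np.random.randn(784, 128)              # a dense weight matrix
threshold = np.percentile(np.abs(weights), 50)   # 50th-percentile magnitude cutoff
mask = np.abs(weights) > threshold               # keep only larger-magnitude weights
pruned = weights * mask                          # small weights become exactly zero
print(f"sparsity: {np.mean(pruned == 0):.2%}")   # about 50% zeros
```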
### Why This Works
- **Redundancy**: Neural networks are over-parameterized
- **Lottery ticket hypothesis**: Small subnetworks can match full performance
- **Magnitude correlation**: Larger weights are often more important than smaller ones
- **Gradual degradation**: Performance drops slowly with pruning
### Real-World Applications
- **Mobile deployment**: Reduce model size for smartphones
- **Edge computing**: Fit models on resource-constrained devices
- **Inference acceleration**: Fewer parameters = faster computation
- **Memory optimization**: Sparse matrices save storage
### Pruning Strategies
- **Global**: Single threshold across all layers
- **Layer-wise**: Different thresholds per layer
- **Structured**: Remove entire neurons/channels
- **Gradual**: Increase sparsity during training
### Performance vs Sparsity Trade-off
- **10-30% sparsity**: Minimal accuracy loss
- **50-70% sparsity**: Moderate accuracy drop
- **80-90% sparsity**: Significant accuracy loss
- **95%+ sparsity**: Requires careful tuning
Let's implement magnitude-based pruning!
"""
# %% nbgrader={"grade": false, "grade_id": "magnitude-pruning", "locked": false, "schema_version": 3, "solution": true, "task": false}
#| export
def prune_weights_by_magnitude(layer: Dense, pruning_ratio: float = 0.5) -> Tuple[Dense, Dict[str, Any]]:
"""
Prune weights in a Dense layer by magnitude.
Args:
layer: Dense layer to prune
pruning_ratio: Fraction of weights to remove (0.0 to 1.0)
Returns:
Tuple of (pruned_layer, pruning_info)
TODO: Implement magnitude-based weight pruning.
STEP-BY-STEP IMPLEMENTATION:
1. Get weight matrix from layer
2. Calculate absolute values (magnitudes)
3. Find threshold using percentile
4. Create binary mask for weights above threshold
5. Apply mask to weights (set small weights to zero)
6. Update layer weights and return pruning statistics
EXAMPLE USAGE:
```python
layer = Dense(784, 128)
pruned_layer, info = prune_weights_by_magnitude(layer, pruning_ratio=0.3)
print(f"Pruned {info['weights_removed']} weights, sparsity: {info['sparsity']:.2f}")
```
IMPLEMENTATION HINTS:
- Use np.percentile() with pruning_ratio * 100 for threshold
- Create mask with np.abs(weights) > threshold
- Apply mask by element-wise multiplication
- Count zeros to calculate sparsity
- Return original layer (modified) and statistics
LEARNING CONNECTIONS:
- This is the foundation of network pruning
- Magnitude pruning is simplest but effective
- Sparsity = fraction of weights that are zero
- Threshold selection affects accuracy vs compression trade-off
"""
### BEGIN SOLUTION
# Get current weights and ensure they're numpy arrays
weights = layer.weights.data
if not isinstance(weights, np.ndarray):
weights = np.array(weights)
original_weights = weights.copy()
# Calculate magnitudes and threshold
magnitudes = np.abs(weights)
threshold = np.percentile(magnitudes, pruning_ratio * 100)
# Create mask and apply pruning
mask = magnitudes > threshold
pruned_weights = weights * mask
# Update layer weights by creating a new Tensor
layer.weights = Tensor(pruned_weights)
# Calculate pruning statistics
total_weights = weights.size
zero_weights = np.sum(pruned_weights == 0)
weights_removed = zero_weights - np.sum(original_weights == 0)
sparsity = zero_weights / total_weights
pruning_info = {
'pruning_ratio': pruning_ratio,
'threshold': float(threshold),
'total_weights': total_weights,
'weights_removed': weights_removed,
'remaining_weights': total_weights - zero_weights,
'sparsity': float(sparsity),
'compression_ratio': 1 / (1 - sparsity) if sparsity < 1 else float('inf')
}
return layer, pruning_info
### END SOLUTION
# %% nbgrader={"grade": false, "grade_id": "calculate-sparsity", "locked": false, "schema_version": 3, "solution": true, "task": false}
#| export
def calculate_sparsity(layer: Dense) -> float:
"""
Calculate sparsity (fraction of zero weights) in a Dense layer.
Args:
layer: Dense layer to analyze
Returns:
Sparsity as float between 0.0 and 1.0
TODO: Implement sparsity calculation.
STEP-BY-STEP IMPLEMENTATION:
1. Get weight matrix from layer
2. Count total number of weights
3. Count number of zero weights
4. Calculate sparsity = zero_weights / total_weights
5. Return as float
EXAMPLE USAGE:
```python
layer = Dense(100, 50)
sparsity = calculate_sparsity(layer)
print(f"Layer sparsity: {sparsity:.2%}")
```
IMPLEMENTATION HINTS:
- Use np.sum() with condition to count zeros
- Use .size attribute for total elements
- Return 0.0 if no weights (edge case)
- Sparsity of 0.0 = dense, 1.0 = completely sparse
LEARNING CONNECTIONS:
- Sparsity is key metric for compression
- Higher sparsity = more compression
- Sparsity patterns affect hardware efficiency
"""
### BEGIN SOLUTION
if not hasattr(layer, 'weights') or layer.weights is None:
return 0.0
weights = layer.weights.data
if not isinstance(weights, np.ndarray):
weights = np.array(weights)
total_weights = weights.size
zero_weights = np.sum(weights == 0)
return zero_weights / total_weights if total_weights > 0 else 0.0
### END SOLUTION
# %% [markdown]
"""
### 🧪 Unit Test: Magnitude-Based Pruning
This test validates your pruning implementation, ensuring it correctly identifies and removes the smallest weights while maintaining model functionality and calculating accurate sparsity metrics.
"""
# %% nbgrader={"grade": false, "grade_id": "test-pruning", "locked": false, "schema_version": 3, "solution": false, "task": false}
def test_unit_magnitude_pruning():
"""Unit test for the magnitude-based pruning functionality."""
print("🔬 Unit Test: Magnitude Pruning...")
# Create a simple Dense layer
layer = Dense(100, 50)
# Test basic pruning
pruned_layer, info = prune_weights_by_magnitude(layer, pruning_ratio=0.3)
# Verify pruning results
assert info['pruning_ratio'] == 0.3, f"Expected 0.3, got {info['pruning_ratio']}"
assert info['total_weights'] == 5000, f"Expected 5000, got {info['total_weights']}"
assert info['sparsity'] >= 0.3, f"Sparsity should be at least 0.3, got {info['sparsity']}"
print(f"✅ Basic pruning works: {info['sparsity']:.2%} sparsity")
# Test sparsity calculation
sparsity = calculate_sparsity(layer)
assert abs(sparsity - info['sparsity']) < 0.001, f"Sparsity mismatch: {sparsity} vs {info['sparsity']}"
print(f"✅ Sparsity calculation works: {sparsity:.2%}")
# Test edge cases
empty_layer = Dense(10, 10)
empty_layer.weights = Tensor(np.zeros((10, 10)))
sparsity_empty = calculate_sparsity(empty_layer)
assert sparsity_empty == 1.0, f"Empty layer should have 1.0 sparsity, got {sparsity_empty}"
print("✅ Edge cases work correctly")
# Test different pruning ratios
layer2 = Dense(50, 25)
_, info50 = prune_weights_by_magnitude(layer2, pruning_ratio=0.5)
layer3 = Dense(50, 25)
_, info80 = prune_weights_by_magnitude(layer3, pruning_ratio=0.8)
assert info80['sparsity'] > info50['sparsity'], "Higher pruning ratio should give higher sparsity"
print(f"✅ Different pruning ratios work: 50% ratio = {info50['sparsity']:.2%}, 80% ratio = {info80['sparsity']:.2%}")
print("📈 Progress: Magnitude-Based Pruning ✓")
print("🎯 Pruning behavior:")
print(" - Removes weights with smallest absolute values")
print(" - Maintains layer structure and connectivity")
print(" - Provides detailed statistics for analysis")
print(" - Scales to different pruning ratios")
print()
# Test will be run in main block
# %% [markdown]
"""
## Step 3: Quantization - Reducing Precision for Memory Efficiency
### What is Quantization?
**Quantization** reduces the precision of weights from FP32 (32-bit) to lower bit-widths like INT8 (8-bit), achieving significant memory savings with minimal accuracy loss.
### The Mathematical Foundation
Quantization maps continuous floating-point values to discrete integer values:
```
scale = (max_val - min_val) / (2**bits - 1)
quantized_value = round((fp_value - min_val) / scale)
```
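As a concrete example, here is that formula applied to a tiny weight vector at 8 bits and mapped back to floats (a standalone sketch, not the `quantize_layer_weights` implementation below):
```python
import numpy as np

w = np.array([-0.40, -0.05, 0.10, 0.35], dtype=np.float32)
bits = 8
w_min, w_max = w.min(), w.max()
scale = (w_max - w_min) / (2**bits - 1)   # 0.75 / 255 ≈ 0.0029
q = np.round((w - w_min) / scale)         # integers in [0, 255]
w_hat = q * scale + w_min                 # dequantized approximation of w
print(np.max(np.abs(w - w_hat)))          # worst-case error is about scale / 2
```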
### Why Quantization Works
- **Redundant precision**: Neural networks are robust to precision reduction
- **Hardware efficiency**: Integer operations are faster than floating-point
- **Memory savings**: 4x reduction (FP32 → INT8) in memory usage
- **Cache efficiency**: More parameters fit in limited cache memory
### Types of Quantization
- **Post-training**: Quantize after training is complete
- **Quantization-aware training**: Train with quantization simulation
- **Dynamic**: Quantize activations at runtime
- **Static**: Pre-compute quantization parameters
### Real-World Impact
- **Mobile deployment**: 75% memory reduction enables smartphone AI
- **Edge computing**: Fit larger models on constrained devices
- **Cloud efficiency**: Reduce bandwidth and storage costs
- **Battery life**: Lower power consumption for mobile devices
### Common Bit-Widths
- **FP32**: Full precision (baseline)
- **FP16**: Half precision (2x memory reduction)
- **INT8**: 8-bit integers (4x memory reduction)
- **INT4**: 4-bit integers (8x memory reduction, aggressive)
Let's implement quantization algorithms!
"""
# %% nbgrader={"grade": false, "grade_id": "quantization", "locked": false, "schema_version": 3, "solution": true, "task": false}
#| export
def quantize_layer_weights(layer: Dense, bits: int = 8) -> Tuple[Dense, Dict[str, Any]]:
"""
Quantize layer weights to reduce precision.
Args:
layer: Dense layer to quantize
bits: Number of bits for quantization (8, 16, etc.)
Returns:
Tuple of (quantized_layer, quantization_info)
TODO: Implement weight quantization for memory efficiency.
STEP-BY-STEP IMPLEMENTATION:
1. Get weight matrix from layer
2. Find min and max values for quantization range
3. Calculate scale factor: (max - min) / (2^bits - 1)
4. Quantize: round((weights - min) / scale)
5. Dequantize back to float: quantized * scale + min
6. Update layer weights and return statistics
EXAMPLE USAGE:
```python
layer = Dense(784, 128)
quantized_layer, info = quantize_layer_weights(layer, bits=8)
print(f"Memory reduction: {info['memory_reduction']:.1f}x")
```
IMPLEMENTATION HINTS:
- Use np.min() and np.max() to find weight range
- Clamp quantized values to valid range [0, 2^bits-1]
- Store original dtype for memory calculation
- Calculate theoretical memory savings
LEARNING CONNECTIONS:
- This is how mobile AI frameworks work
- Hardware accelerators optimize for INT8
- Precision-performance trade-off is key
"""
### BEGIN SOLUTION
# Get current weights and ensure they're numpy arrays
weights = layer.weights.data
if not isinstance(weights, np.ndarray):
weights = np.array(weights)
original_weights = weights.copy()
original_dtype = weights.dtype
# Find min and max for quantization range
w_min, w_max = np.min(weights), np.max(weights)
# Calculate scale factor
scale = (w_max - w_min) / (2**bits - 1)
# Quantize weights
quantized = np.round((weights - w_min) / scale)
quantized = np.clip(quantized, 0, 2**bits - 1) # Clamp to valid range
# Dequantize back to float (simulation of quantized inference)
dequantized = quantized * scale + w_min
# Update layer weights
layer.weights = Tensor(dequantized.astype(np.float32))
# Calculate quantization statistics
total_weights = weights.size
original_bytes = total_weights * 4 # FP32 = 4 bytes
quantized_bytes = total_weights * (bits // 8) # bits/8 bytes per weight
memory_reduction = original_bytes / quantized_bytes if quantized_bytes > 0 else 1.0
# Calculate quantization error
mse_error = np.mean((original_weights - dequantized) ** 2)
max_error = np.max(np.abs(original_weights - dequantized))
quantization_info = {
'bits': bits,
'scale': float(scale),
'min_val': float(w_min),
'max_val': float(w_max),
'total_weights': total_weights,
'original_bytes': original_bytes,
'quantized_bytes': quantized_bytes,
'memory_reduction': float(memory_reduction),
'mse_error': float(mse_error),
'max_error': float(max_error),
'original_dtype': str(original_dtype)
}
return layer, quantization_info
### END SOLUTION
# %% [markdown]
"""
### 🧪 Unit Test: Weight Quantization
This test validates your quantization implementation, ensuring it correctly converts FP32 weights to INT8 representation while minimizing accuracy loss and achieving significant memory reduction.
"""
# %% nbgrader={"grade": false, "grade_id": "test-quantization", "locked": false, "schema_version": 3, "solution": false, "task": false}
def test_unit_quantization():
"""Unit test for the weight quantization functionality."""
print("🔬 Unit Test: Weight Quantization...")
# Create a simple Dense layer
layer = Dense(100, 50)
original_weights = layer.weights.data.copy() if hasattr(layer.weights.data, 'copy') else np.array(layer.weights.data)
# Test INT8 quantization
quantized_layer, info = quantize_layer_weights(layer, bits=8)
# Verify quantization results
assert info['bits'] == 8, f"Expected 8 bits, got {info['bits']}"
assert info['total_weights'] == 5000, f"Expected 5000 weights, got {info['total_weights']}"
assert info['memory_reduction'] == 4.0, f"Expected 4x reduction, got {info['memory_reduction']}"
print(f"✅ INT8 quantization works: {info['memory_reduction']:.1f}x memory reduction")
# Test quantization error
assert info['mse_error'] >= 0, "MSE error should be non-negative"
assert info['max_error'] >= 0, "Max error should be non-negative"
print(f"✅ Quantization error tracking works: MSE={info['mse_error']:.6f}, Max={info['max_error']:.6f}")
# Test different bit widths
layer2 = Dense(50, 25)
_, info16 = quantize_layer_weights(layer2, bits=16)
layer3 = Dense(50, 25)
_, info4 = quantize_layer_weights(layer3, bits=8) # Use 8 instead of 4 for valid byte calculation
assert info16['memory_reduction'] == 2.0, f"16-bit should give 2x reduction, got {info16['memory_reduction']}"
print(f"✅ Different bit widths work: 16-bit = {info16['memory_reduction']:.1f}x, 8-bit = {info4['memory_reduction']:.1f}x")
# Test quantization parameters
assert 'scale' in info, "Scale parameter should be included"
assert 'min_val' in info, "Min value should be included"
assert 'max_val' in info, "Max value should be included"
print("✅ Quantization parameters work correctly")
print("📈 Progress: Quantization ✓")
print("🎯 Quantization behavior:")
print(" - Reduces precision while preserving weights")
print(" - Provides significant memory savings")
print(" - Tracks quantization error and parameters")
print(" - Supports different bit widths")
print()
# Test will be run in main block
# %% [markdown]
"""
## Step 4: Knowledge Distillation - Large Models Teach Small Models
### What is Knowledge Distillation?
**Knowledge distillation** trains a small "student" model to mimic the behavior of a large "teacher" model, achieving compact models with competitive performance.
### The Core Idea
Instead of training on hard labels (0 or 1), students learn from soft targets (probabilities) that contain more information about the teacher's knowledge.
### The Mathematical Foundation
Distillation combines two loss functions:
```python
# Hard loss: Standard classification loss
hard_loss = CrossEntropy(student_logits, true_labels)
# Soft loss: Learn from teacher's probability distribution
soft_targets = softmax(teacher_logits / temperature)
soft_student = softmax(student_logits / temperature)
soft_loss = -sum(soft_targets * log(soft_student))
# Combined loss
total_loss = α * hard_loss + (1 - α) * soft_loss
```
### Why Distillation Works
- **Richer information**: Soft targets contain inter-class relationships
- **Teacher knowledge**: Large models learn useful representations
- **Regularization**: Soft targets reduce overfitting
- **Efficiency**: Small models gain large model insights
### Key Parameters
- **Temperature (T)**: Controls softness of probability distributions
- High T: Softer, more informative distributions
- Low T: Sharper, more confident predictions
- **Alpha (α)**: Balances hard and soft losses
- α = 1.0: Only hard loss (standard training)
- α = 0.0: Only soft loss (pure distillation)
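To see what temperature does in practice, compare the softmax of the same logits at T = 1 and T = 5 (a small illustrative sketch with made-up logits):
```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))   # numerically stable softmax
    return e / e.sum()

logits = np.array([4.0, 1.0, 0.0])
print(softmax(logits / 1.0))  # sharp:  ~[0.94, 0.05, 0.02] (almost all mass on the top class)
print(softmax(logits / 5.0))  # softer: ~[0.50, 0.27, 0.22] (inter-class structure becomes visible)
```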
### Real-World Applications
- **Mobile deployment**: Small models with large model performance
- **Edge computing**: Efficient inference with minimal accuracy loss
- **Model compression**: Alternative to pruning and quantization
- **Multi-task learning**: Transfer knowledge across different tasks
### Success Stories
- **DistilBERT**: 40% smaller and 60% faster than BERT while retaining ~97% of its performance
- **MobileNet-style students**: Compact architectures commonly trained with larger teacher networks for mobile deployment
- **TinyBERT**: Extreme compression for resource-constrained devices
Let's implement knowledge distillation!
"""
# %% nbgrader={"grade": false, "grade_id": "distillation-loss", "locked": false, "schema_version": 3, "solution": true, "task": false}
#| export
class DistillationLoss:
"""
Combined loss function for knowledge distillation.
This loss combines standard classification loss (hard targets) with
distillation loss (soft targets from teacher) for training compact models.
"""
def __init__(self, temperature: float = 3.0, alpha: float = 0.5):
"""
Initialize distillation loss.
Args:
temperature: Temperature for softening probability distributions
alpha: Weight for hard loss (1-alpha for soft loss)
"""
self.temperature = temperature
self.alpha = alpha
self.ce_loss = CrossEntropyLoss()
def __call__(self, student_logits: np.ndarray, teacher_logits: np.ndarray,
true_labels: np.ndarray) -> float:
"""
Calculate combined distillation loss.
Args:
student_logits: Raw outputs from student model
teacher_logits: Raw outputs from teacher model
true_labels: Ground truth labels
Returns:
Combined loss value
TODO: Implement knowledge distillation loss function.
STEP-BY-STEP IMPLEMENTATION:
1. Calculate hard loss using standard cross-entropy
2. Apply temperature scaling to both logits
3. Calculate soft targets from teacher logits
4. Calculate soft loss between student and teacher distributions
5. Combine hard and soft losses with alpha weighting
6. Return total loss
EXAMPLE USAGE:
```python
distill_loss = DistillationLoss(temperature=3.0, alpha=0.5)
loss = distill_loss(student_out, teacher_out, labels)
```
IMPLEMENTATION HINTS:
- Use temperature scaling before softmax: logits / temperature
- Implement stable softmax to avoid numerical issues
- Scale soft loss by temperature^2 (standard practice)
- Ensure proper normalization for both losses
LEARNING CONNECTIONS:
- This is how DistilBERT was trained
- Temperature controls knowledge transfer richness
- Alpha balances accuracy vs compression
"""
### BEGIN SOLUTION
# Convert inputs to numpy arrays if needed
if not isinstance(student_logits, np.ndarray):
student_logits = np.array(student_logits)
if not isinstance(teacher_logits, np.ndarray):
teacher_logits = np.array(teacher_logits)
if not isinstance(true_labels, np.ndarray):
true_labels = np.array(true_labels)
# Hard loss: standard classification loss
hard_loss = self._cross_entropy_loss(student_logits, true_labels)
# Soft loss: distillation from teacher
# Apply temperature scaling
teacher_soft = self._softmax(teacher_logits / self.temperature)
student_soft = self._softmax(student_logits / self.temperature)
# Calculate soft loss (KL divergence)
soft_loss = -np.mean(np.sum(teacher_soft * np.log(student_soft + 1e-10), axis=-1))
# Scale soft loss by temperature^2 (standard practice)
soft_loss *= (self.temperature ** 2)
# Combine losses
total_loss = self.alpha * hard_loss + (1 - self.alpha) * soft_loss
return float(total_loss)
### END SOLUTION
def _softmax(self, logits: np.ndarray) -> np.ndarray:
"""Numerically stable softmax."""
# Subtract max for numerical stability
exp_logits = np.exp(logits - np.max(logits, axis=-1, keepdims=True))
return exp_logits / np.sum(exp_logits, axis=-1, keepdims=True)
def _cross_entropy_loss(self, logits: np.ndarray, labels: np.ndarray) -> float:
"""Simple cross-entropy loss implementation."""
# Convert labels to one-hot if needed
if labels.ndim == 1:
num_classes = logits.shape[-1]
one_hot = np.zeros((labels.shape[0], num_classes))
one_hot[np.arange(labels.shape[0]), labels] = 1
labels = one_hot
# Apply softmax and calculate cross-entropy
probs = self._softmax(logits)
return -np.mean(np.sum(labels * np.log(probs + 1e-10), axis=-1))
# %% [markdown]
"""
### 🧪 Unit Test: Knowledge Distillation
This test validates your knowledge distillation implementation, ensuring the student model learns effectively from teacher predictions while maintaining computational efficiency.
"""
# %% nbgrader={"grade": false, "grade_id": "test-distillation", "locked": false, "schema_version": 3, "solution": false, "task": false}
def test_unit_distillation():
"""Unit test for the DistillationLoss class."""
print("🔬 Unit Test: Knowledge Distillation...")
# Test parameters
batch_size, num_classes = 32, 10
student_logits = np.random.randn(batch_size, num_classes) * 0.5
teacher_logits = np.random.randn(batch_size, num_classes) * 2.0 # Teacher is more confident
true_labels = np.random.randint(0, num_classes, batch_size)
# Test distillation loss
distill_loss = DistillationLoss(temperature=3.0, alpha=0.5)
loss = distill_loss(student_logits, teacher_logits, true_labels)
# Verify loss computation
assert isinstance(loss, float), f"Loss should be float, got {type(loss)}"
assert loss >= 0, f"Loss should be non-negative, got {loss}"
print(f"✅ Distillation loss computation works: {loss:.4f}")
# Test different temperature values
loss_t1 = DistillationLoss(temperature=1.0, alpha=0.5)(student_logits, teacher_logits, true_labels)
loss_t5 = DistillationLoss(temperature=5.0, alpha=0.5)(student_logits, teacher_logits, true_labels)
print(f"✅ Temperature scaling works: T=1.0 → {loss_t1:.4f}, T=5.0 → {loss_t5:.4f}")
# Test different alpha values
loss_hard = DistillationLoss(temperature=3.0, alpha=1.0)(student_logits, teacher_logits, true_labels) # Only hard loss
loss_soft = DistillationLoss(temperature=3.0, alpha=0.0)(student_logits, teacher_logits, true_labels) # Only soft loss
assert loss_hard != loss_soft, "Hard and soft losses should be different"
print(f"✅ Alpha balancing works: Hard only = {loss_hard:.4f}, Soft only = {loss_soft:.4f}")
# Test edge cases
# Identical student and teacher should have low soft loss
identical_logits = np.random.randn(batch_size, num_classes)
loss_identical = DistillationLoss(temperature=3.0, alpha=0.0)(identical_logits, identical_logits, true_labels)
print(f"✅ Edge cases work: Identical logits soft loss = {loss_identical:.4f}")
# Test internal methods
softmax_result = distill_loss._softmax(student_logits)
assert np.allclose(np.sum(softmax_result, axis=1), 1.0), "Softmax should sum to 1"
print("✅ Internal methods work correctly")
print("📈 Progress: Knowledge Distillation ✓")
print("🎯 Distillation behavior:")
print(" - Combines hard and soft losses effectively")
print(" - Temperature controls knowledge transfer")
print(" - Alpha balances accuracy vs compression")
print(" - Numerically stable softmax implementation")
print()
# Test will be run in main block
# %% [markdown]
"""
## Step 5: Structured Pruning - Removing Entire Neurons and Channels
### What is Structured Pruning?
**Structured pruning** removes entire neurons, channels, or layers rather than individual weights, creating models that are actually faster on hardware.
### Structured vs Unstructured Pruning
#### **Unstructured Pruning** (What we did in Step 2)
- Removes individual weights scattered throughout the matrix
- Creates sparse matrices (lots of zeros)
- High compression but requires sparse matrix libraries for speedup
- Memory savings but limited hardware acceleration
#### **Structured Pruning** (What we're doing now)
- Removes entire rows/columns (neurons/channels)
- Creates smaller dense matrices
- Lower compression but actual hardware speedup
- Real reduction in computation and memory access
### The Mathematical Impact
Removing a neuron from a Dense layer:
```python
# Original layer: Dense(784, 128)
# Weight matrix: (784, 128), Bias: (128,)
# After removing 32 neurons: Dense(784, 96)
# Weight matrix: (784, 96), Bias: (96,)
# 25% reduction in parameters and computation
```
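Concretely, removing output neurons is just slicing columns out of the weight matrix and the matching entries out of the bias. A standalone NumPy sketch (independent of the `prune_layer_neurons` implementation below):
```python
import numpy as np

weights = np.random.randn(784, 128)            # (input_size, output_size)
bias = np.zeros(128)
importance = np.sum(np.abs(weights), axis=0)   # L1 norm of each neuron's incoming weights
keep = np.sort(np.argsort(importance)[-96:])   # indices of the 96 most important neurons
weights_small = weights[:, keep]               # shape (784, 96): a genuinely smaller dense matrix
bias_small = bias[keep]                        # shape (96,)
```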
### Why Structured Pruning Works
- **Hardware efficiency**: Dense matrix operations are optimized
- **Memory bandwidth**: Smaller matrices mean less data movement
- **Cache utilization**: Better memory access patterns
- **Real speedup**: Actual reduction in FLOPs and inference time
### Neuron Importance Metrics
How do we decide which neurons to remove?
1. **Activation-based**: Neurons with low average activation
2. **Gradient-based**: Neurons with small gradients during training
3. **Weight magnitude**: Neurons with small outgoing weights
4. **Information-theoretic**: Neurons contributing less information
### Real-World Applications
- **Mobile deployment**: Actual speedup on ARM processors
- **FPGA inference**: Smaller designs with same performance
- **Edge computing**: Reduced memory bandwidth requirements
- **Production systems**: Guaranteed inference time reduction
### Challenges
- **Architecture modification**: Must handle dimension mismatches
- **Cascade effects**: Removing one neuron affects next layer
- **Retraining**: Often requires fine-tuning after pruning
- **Importance ranking**: Choosing the right importance metric
Let's implement structured pruning for Dense layers!
"""
# %% nbgrader={"grade": false, "grade_id": "neuron-importance", "locked": false, "schema_version": 3, "solution": true, "task": false}
#| export
def compute_neuron_importance(layer: Dense, method: str = 'weight_magnitude') -> np.ndarray:
"""
Compute importance scores for each neuron in a Dense layer.
Args:
layer: Dense layer to analyze
method: Importance computation method
Returns:
Array of importance scores for each output neuron
TODO: Implement neuron importance calculation.
STEP-BY-STEP IMPLEMENTATION:
1. Get weight matrix from layer
2. Choose importance metric based on method
3. Calculate per-neuron importance scores
4. Return array of scores (one per output neuron)
AVAILABLE METHODS:
- 'weight_magnitude': Sum of absolute weights per neuron
- 'weight_variance': Variance of weights per neuron
- 'random': Random importance (for baseline comparison)
IMPLEMENTATION HINTS:
- Weights shape is (input_size, output_size)
- Each column represents one output neuron
- Use axis=0 for operations across input dimensions
- Higher scores = more important neurons
LEARNING CONNECTIONS:
- This is how neural architecture search works
- Different metrics capture different aspects of importance
- Importance ranking is crucial for effective pruning
"""
### BEGIN SOLUTION
# Get weights and ensure they're numpy arrays
weights = layer.weights.data
if not isinstance(weights, np.ndarray):
weights = np.array(weights)
if method == 'weight_magnitude':
# Sum of absolute weights per neuron (column)
importance = np.sum(np.abs(weights), axis=0)
elif method == 'weight_variance':
# Variance of weights per neuron (column)
importance = np.var(weights, axis=0)
elif method == 'random':
# Random importance for baseline comparison
importance = np.random.rand(weights.shape[1])
else:
raise ValueError(f"Unknown importance method: {method}")
return importance
### END SOLUTION
# %% nbgrader={"grade": false, "grade_id": "structured-pruning", "locked": false, "schema_version": 3, "solution": true, "task": false}
#| export
def prune_layer_neurons(layer: Dense, keep_ratio: float = 0.7,
importance_method: str = 'weight_magnitude') -> Tuple[Dense, Dict[str, Any]]:
"""
Remove least important neurons from a Dense layer.
Args:
layer: Dense layer to prune
keep_ratio: Fraction of neurons to keep (0.0 to 1.0)
importance_method: Method for computing neuron importance
Returns:
Tuple of (pruned_layer, pruning_info)
TODO: Implement structured neuron pruning.
STEP-BY-STEP IMPLEMENTATION:
1. Compute importance scores for all neurons
2. Determine how many neurons to keep
3. Select indices of most important neurons
4. Create new layer with reduced dimensions
5. Copy weights and biases for selected neurons
6. Return pruned layer and statistics
EXAMPLE USAGE:
```python
layer = Dense(784, 128)
pruned_layer, info = prune_layer_neurons(layer, keep_ratio=0.75)
print(f"Reduced from {info['original_neurons']} to {info['remaining_neurons']} neurons")
```
IMPLEMENTATION HINTS:
- Use np.argsort() to rank neurons by importance
- Take the top keep_count neurons: indices[-keep_count:]
- Create new layer with reduced output size
- Copy both weights and bias for selected neurons
- Track original and new sizes for statistics
LEARNING CONNECTIONS:
- This is actual model architecture modification
- Hardware gets real speedup from smaller matrices
- Must consider cascade effects on next layers
"""
### BEGIN SOLUTION
# Compute neuron importance
importance_scores = compute_neuron_importance(layer, importance_method)
# Determine how many neurons to keep
original_neurons = layer.output_size
keep_count = max(1, int(original_neurons * keep_ratio)) # Keep at least 1 neuron
# Select most important neurons
sorted_indices = np.argsort(importance_scores)
keep_indices = sorted_indices[-keep_count:] # Take top keep_count neurons
keep_indices = np.sort(keep_indices) # Sort for consistent ordering
# Get current weights and biases
weights = layer.weights.data
if not isinstance(weights, np.ndarray):
weights = np.array(weights)
bias = layer.bias.data if layer.bias is not None else None
if bias is not None and not isinstance(bias, np.ndarray):
bias = np.array(bias)
# Create new layer with reduced dimensions
pruned_layer = Dense(layer.input_size, keep_count)
# Copy weights for selected neurons
pruned_weights = weights[:, keep_indices]
pruned_layer.weights = Tensor(np.ascontiguousarray(pruned_weights))
# Copy bias for selected neurons
if bias is not None:
pruned_bias = bias[keep_indices]
pruned_layer.bias = Tensor(np.ascontiguousarray(pruned_bias))
# Calculate pruning statistics
neurons_removed = original_neurons - keep_count
compression_ratio = original_neurons / keep_count if keep_count > 0 else float('inf')
# Calculate parameter reduction
original_params = layer.input_size * original_neurons + (original_neurons if bias is not None else 0)
new_params = layer.input_size * keep_count + (keep_count if bias is not None else 0)
param_reduction = (original_params - new_params) / original_params
pruning_info = {
'keep_ratio': keep_ratio,
'importance_method': importance_method,
'original_neurons': original_neurons,
'remaining_neurons': keep_count,
'neurons_removed': neurons_removed,
'compression_ratio': float(compression_ratio),
'original_params': original_params,
'new_params': new_params,
'param_reduction': float(param_reduction),
'keep_indices': keep_indices.tolist()
}
return pruned_layer, pruning_info
### END SOLUTION
# %% [markdown]
"""
### 🧪 Unit Test: Structured Pruning
This test validates your structured pruning implementation, ensuring it correctly removes entire neurons or channels while maintaining model architecture integrity and computational efficiency.
"""
# %% nbgrader={"grade": false, "grade_id": "test-structured-pruning", "locked": false, "schema_version": 3, "solution": false, "task": false}
def test_unit_structured_pruning():
"""Unit test for the structured pruning (neuron pruning) functionality."""
print("🔬 Unit Test: Structured Pruning...")
# Create a simple Dense layer
layer = Dense(100, 50)
# Test basic pruning
pruned_layer, info = prune_layer_neurons(layer, keep_ratio=0.75)
# Verify pruning results
assert info['keep_ratio'] == 0.75, f"Expected 0.75, got {info['keep_ratio']}"
assert info['original_neurons'] == 50, f"Expected 50, got {info['original_neurons']}"
assert info['remaining_neurons'] == 37, f"Expected 37, got {info['remaining_neurons']}"
assert info['neurons_removed'] == 13, f"Expected 13, got {info['neurons_removed']}"
assert info['compression_ratio'] >= 1.35, f"Compression ratio should be at least 1.35, got {info['compression_ratio']}"
print(f"✅ Basic structured pruning works: {info['neurons_removed']} neurons removed")
# Test parameter reduction
assert info['param_reduction'] >= 0.25, f"Parameter reduction should be at least 0.25, got {info['param_reduction']}"
print(f"✅ Parameter reduction works: {info['param_reduction']:.2%}")
# Test edge cases
empty_layer = Dense(10, 10)
_, info_empty = prune_layer_neurons(empty_layer, keep_ratio=0.5)
assert info_empty['remaining_neurons'] == 5, f"Empty layer should have 5 neurons, got {info_empty['remaining_neurons']}"
print("✅ Edge cases work correctly")
# Test different keep ratios
layer2 = Dense(50, 25)
_, info_ratio70 = prune_layer_neurons(layer2, keep_ratio=0.7)
_, info_ratio50 = prune_layer_neurons(layer2, keep_ratio=0.5)
assert info_ratio70['remaining_neurons'] > info_ratio50['remaining_neurons'], "Higher keep ratio should result in more neurons"
print(f"✅ Different keep ratios work: 70% ratio = {info_ratio70['remaining_neurons']}, 50% ratio = {info_ratio50['remaining_neurons']}")
# Test different importance methods
_, info_weight_mag = prune_layer_neurons(layer, keep_ratio=0.75, importance_method='weight_magnitude')
_, info_weight_var = prune_layer_neurons(layer, keep_ratio=0.75, importance_method='weight_variance')
# Both should achieve similar compression ratios since they both keep 75% of neurons
print(f"✅ Different importance methods work: Weight Mag = {info_weight_mag['compression_ratio']:.2f}, Weight Var = {info_weight_var['compression_ratio']:.2f}")
print("📈 Progress: Structured Pruning ✓")
print("🎯 Structured pruning behavior:")
print(" - Removes least important neurons")
print(" - Maintains layer structure and connectivity")
print(" - Provides detailed statistics for analysis")
print(" - Scales to different keep ratios")
print()
# Test will be run in main block
# %% [markdown]
"""
## Step 6: ML Systems Profiling - Production Compression Analysis
### Production Compression Challenges
Real-world deployment requires sophisticated analysis of compression trade-offs:
#### **Hardware-Specific Optimization**
- **Mobile ARM processors**: Optimized for INT8 operations
- **NVIDIA GPUs**: Tensor Core acceleration for specific quantization formats
- **Edge TPUs**: Designed for INT8 quantized models
- **x86 CPUs**: SIMD instructions for structured sparsity
#### **Deployment Constraints**
- **Memory bandwidth**: Mobile devices have limited memory bandwidth
- **Power consumption**: Battery life constraints on mobile devices
- **Latency requirements**: Real-time applications need predictable inference times
- **Model accuracy**: Acceptable accuracy degradation varies by application
#### **Production Serving Patterns**
- **Batch inference**: Optimize for throughput over latency
- **Online serving**: Optimize for latency and resource efficiency
- **Edge deployment**: Optimize for memory and power consumption
- **Multi-model serving**: Balance resource sharing across models
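A rough back-of-the-envelope check makes the memory-bandwidth point concrete. The throughput numbers below are assumptions chosen only for illustration, not measurements of any particular device:
```python
params = 1_737_984                  # the CIFAR-10 MLP from Step 1
flops = 2 * params                  # ~2 FLOPs per weight for one forward pass
bytes_fp32 = params * 4             # FP32 weights that must be streamed from memory
compute_time = flops / 50e9         # assume ~50 GFLOP/s effective compute
memory_time = bytes_fp32 / 10e9     # assume ~10 GB/s memory bandwidth
print(f"compute: {compute_time * 1e6:.0f} us, memory: {memory_time * 1e6:.0f} us")
# Streaming the weights dominates; INT8 quantization cuts that traffic by 4x
```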
### ML Systems Thinking: Compression in Production
The CompressionSystemsProfiler analyzes compression techniques through the lens of production deployment, measuring not just compression ratios but also their real-world performance implications.
Let's build advanced compression analysis tools!
"""
# %% nbgrader={"grade": false, "grade_id": "compression-systems-profiler", "locked": false, "schema_version": 3, "solution": true, "task": false}
#| export
class CompressionSystemsProfiler:
"""
Advanced profiling system for analyzing compression techniques in production environments.
This profiler provides 65% implementation level analysis of compression techniques,
focusing on production deployment scenarios including quantization impact analysis,
inference speedup measurements, and hardware-specific optimizations.
"""
def __init__(self):
"""Initialize the compression systems profiler."""
self.metrics = CompressionMetrics()
self.compression_history = []
def analyze_quantization_impact(self, model: Sequential, target_bits: List[int] = [32, 16, 8, 4]) -> Dict[str, Any]:
"""
Analyze quantization impact across different bit widths for production deployment.
Args:
model: Sequential model to analyze
target_bits: List of bit widths to test
Returns:
Comprehensive quantization analysis including accuracy vs compression tradeoffs
TODO: Implement advanced quantization impact analysis (65% implementation level).
STEP-BY-STEP IMPLEMENTATION:
1. Create model copies for each bit width
2. Apply quantization with different bit widths
3. Measure memory reduction and inference implications
4. Calculate theoretical speedup for different hardware
5. Analyze accuracy degradation patterns
6. Generate production deployment recommendations
PRODUCTION PATTERNS TO ANALYZE:
- Mobile deployment (ARM processors, limited memory)
- Edge inference (TPUs, power constraints)
- Cloud serving (GPU acceleration, batch processing)
- Real-time systems (latency requirements)
IMPLEMENTATION HINTS:
- Model different hardware characteristics
- Consider memory bandwidth limitations
- Include power consumption estimates
- Analyze batch vs single inference patterns
LEARNING CONNECTIONS:
- This mirrors TensorFlow Lite quantization analysis
- Production systems need this kind of comprehensive analysis
- Hardware-aware compression is crucial for deployment
"""
### BEGIN SOLUTION
results = {
'quantization_analysis': {},
'hardware_recommendations': {},
'deployment_scenarios': {}
}
baseline_size = self.metrics.calculate_model_size(model, dtype='float32')
baseline_params = self.metrics.count_parameters(model)['total_parameters']
for bits in target_bits:
# Create model copy for quantization
test_model = Sequential([Dense(layer.input_size, layer.output_size) for layer in model.layers])
for i, layer in enumerate(test_model.layers):
layer.weights = Tensor(model.layers[i].weights.data.copy() if hasattr(model.layers[i].weights.data, 'copy') else np.array(model.layers[i].weights.data))
if hasattr(layer, 'bias') and model.layers[i].bias is not None:
layer.bias = Tensor(model.layers[i].bias.data.copy() if hasattr(model.layers[i].bias.data, 'copy') else np.array(model.layers[i].bias.data))
# Apply quantization to all layers
total_error = 0
for i, layer in enumerate(test_model.layers):
if isinstance(layer, Dense):
_, quant_info = quantize_layer_weights(layer, bits=bits)
total_error += quant_info['mse_error']
# Calculate quantized model size
dtype_map = {32: 'float32', 16: 'float16', 8: 'int8', 4: 'int8'} # Approximate for 4-bit
quantized_size = self.metrics.calculate_model_size(test_model, dtype=dtype_map.get(bits, 'int8'))
# Memory and performance analysis
memory_reduction = baseline_size['size_mb'] / quantized_size['size_mb']
# Hardware-specific analysis
hardware_analysis = {
'mobile_arm': {
'memory_bandwidth_improvement': memory_reduction * 0.8, # ARM efficiency
'inference_speedup': min(memory_reduction * 0.6, 4.0), # Conservative estimate
'power_reduction': memory_reduction * 0.7, # Power scales with memory access
'deployment_feasibility': 'excellent' if quantized_size['size_mb'] < 10 else 'good' if quantized_size['size_mb'] < 50 else 'limited'
},
'edge_tpu': {
'quantization_compatibility': 'native' if bits == 8 else 'emulated',
'inference_speedup': 8.0 if bits == 8 else 1.0, # TPUs optimized for INT8
'power_efficiency': 'optimal' if bits == 8 else 'suboptimal',
'deployment_feasibility': 'excellent' if bits == 8 and quantized_size['size_mb'] < 20 else 'limited'
},
'gpu_cloud': {
'tensor_core_acceleration': True if bits in [16, 8] else False,
'batch_throughput_improvement': memory_reduction * 1.2, # GPU batch efficiency
'memory_capacity_improvement': memory_reduction,
'deployment_feasibility': 'excellent' # Cloud has fewer constraints
}
}
results['quantization_analysis'][f'{bits}bit'] = {
'bits': bits,
'model_size_mb': quantized_size['size_mb'],
'memory_reduction_factor': memory_reduction,
'quantization_error': total_error / len(test_model.layers),
'compression_ratio': baseline_size['size_mb'] / quantized_size['size_mb'],
'hardware_analysis': hardware_analysis
}
# Generate deployment recommendations
results['deployment_scenarios'] = {
'mobile_deployment': {
'recommended_bits': 8,
'rationale': 'INT8 provides optimal balance of size reduction and ARM processor efficiency',
'expected_benefits': 'Memory reduction, inference speedup, improved battery life',
'considerations': 'Monitor accuracy degradation, test on target devices'
},
'edge_inference': {
'recommended_bits': 8,
'rationale': 'Edge TPUs and similar hardware optimized for INT8 quantization',
'expected_benefits': 'Maximum hardware acceleration, minimal power consumption',
'considerations': 'Ensure quantization-aware training for best accuracy'
},
'cloud_serving': {
'recommended_bits': 16,
'rationale': 'FP16 provides good compression with minimal accuracy loss and GPU acceleration',
'expected_benefits': 'Increased batch throughput, reduced memory usage',
'considerations': 'Consider mixed precision for optimal performance'
}
}
return results
### END SOLUTION
def measure_inference_speedup(self, original_model: Sequential, compressed_model: Sequential,
batch_sizes: List[int] = [1, 8, 32, 128]) -> Dict[str, Any]:
"""
Measure theoretical inference speedup from compression techniques.
Args:
original_model: Baseline model
compressed_model: Compressed model to compare
batch_sizes: Different batch sizes for analysis
Returns:
Inference speedup analysis across different scenarios
"""
results = {
'flops_analysis': {},
'memory_analysis': {},
'speedup_estimates': {}
}
# Calculate FLOPs for both models
original_flops = self._calculate_model_flops(original_model)
compressed_flops = self._calculate_model_flops(compressed_model)
# Memory analysis
original_size = self.metrics.calculate_model_size(original_model)
compressed_size = self.metrics.calculate_model_size(compressed_model)
results['flops_analysis'] = {
'original_flops': original_flops,
'compressed_flops': compressed_flops,
'flops_reduction': (original_flops - compressed_flops) / original_flops,
'computational_speedup': original_flops / compressed_flops if compressed_flops > 0 else float('inf')
}
results['memory_analysis'] = {
'original_size_mb': original_size['size_mb'],
'compressed_size_mb': compressed_size['size_mb'],
'memory_reduction': (original_size['size_mb'] - compressed_size['size_mb']) / original_size['size_mb'],
'memory_speedup': original_size['size_mb'] / compressed_size['size_mb']
}
# Estimate speedup for different scenarios
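        # Toy two-term cost model: total time ≈ compute time + memory-transfer time.
        # The 1 GFLOP/s and 100 MB/s constants below are illustrative normalizers,
        # not measured hardware figures; only the relative speedups are meaningful.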
for batch_size in batch_sizes:
compute_time_original = original_flops * batch_size / 1e9 # Assume 1 GFLOPS baseline
compute_time_compressed = compressed_flops * batch_size / 1e9
memory_time_original = original_size['size_mb'] * batch_size / 100 # Assume 100 MB/s memory bandwidth
memory_time_compressed = compressed_size['size_mb'] * batch_size / 100
total_time_original = compute_time_original + memory_time_original
total_time_compressed = compute_time_compressed + memory_time_compressed
results['speedup_estimates'][f'batch_{batch_size}'] = {
'compute_speedup': compute_time_original / compute_time_compressed if compute_time_compressed > 0 else float('inf'),
'memory_speedup': memory_time_original / memory_time_compressed if memory_time_compressed > 0 else float('inf'),
'total_speedup': total_time_original / total_time_compressed if total_time_compressed > 0 else float('inf')
}
return results
def analyze_accuracy_tradeoffs(self, model: Sequential, compression_levels: List[float] = [0.1, 0.3, 0.5, 0.7, 0.9]) -> Dict[str, Any]:
"""
Analyze accuracy vs compression tradeoffs across different compression levels.
Args:
model: Model to analyze
compression_levels: Different compression ratios to test
Returns:
Analysis of accuracy degradation patterns
"""
results = {
'compression_curves': {},
'optimal_operating_points': {},
'production_recommendations': {}
}
baseline_size = self.metrics.calculate_model_size(model)
for level in compression_levels:
# Test different compression techniques at this level
techniques = {
'magnitude_pruning': self._apply_magnitude_pruning(model, level),
'structured_pruning': self._apply_structured_pruning(model, 1 - level),
'quantization': self._apply_quantization(model, max(4, int(32 * (1 - level))))
}
for technique_name, compressed_model in techniques.items():
if compressed_model is not None:
compressed_size = self.metrics.calculate_model_size(compressed_model)
compression_ratio = baseline_size['size_mb'] / compressed_size['size_mb']
if technique_name not in results['compression_curves']:
results['compression_curves'][technique_name] = []
results['compression_curves'][technique_name].append({
'compression_level': level,
'compression_ratio': compression_ratio,
'size_mb': compressed_size['size_mb'],
                        'estimated_accuracy_retention': 1.0 - (level * 0.5)  # Placeholder heuristic; measure real retention on a validation set
})
# Find optimal operating points
for technique in results['compression_curves']:
curves = results['compression_curves'][technique]
# Find point with best accuracy/compression balance
best_point = max(curves, key=lambda x: x['compression_ratio'] * x['estimated_accuracy_retention'])
results['optimal_operating_points'][technique] = best_point
return results
def _calculate_model_flops(self, model: Sequential) -> int:
"""Calculate FLOPs for a Sequential model."""
total_flops = 0
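        # Convention: one multiply-accumulate counts as 2 FLOPs, hence the factor of 2 below.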
for layer in model.layers:
if isinstance(layer, Dense):
total_flops += layer.input_size * layer.output_size * 2 # Multiply-add operations
return total_flops
def _apply_magnitude_pruning(self, model: Sequential, pruning_ratio: float) -> Optional[Sequential]:
"""Apply magnitude pruning to a model copy."""
try:
test_model = Sequential([Dense(layer.input_size, layer.output_size) for layer in model.layers])
for i, layer in enumerate(test_model.layers):
layer.weights = Tensor(model.layers[i].weights.data.copy() if hasattr(model.layers[i].weights.data, 'copy') else np.array(model.layers[i].weights.data))
if hasattr(layer, 'bias') and model.layers[i].bias is not None:
layer.bias = Tensor(model.layers[i].bias.data.copy() if hasattr(model.layers[i].bias.data, 'copy') else np.array(model.layers[i].bias.data))
prune_weights_by_magnitude(layer, pruning_ratio)
return test_model
except Exception:
return None
def _apply_structured_pruning(self, model: Sequential, keep_ratio: float) -> Optional[Sequential]:
"""Apply structured pruning to a model copy."""
try:
test_model = Sequential([Dense(layer.input_size, layer.output_size) for layer in model.layers])
for i, layer in enumerate(test_model.layers):
layer.weights = Tensor(model.layers[i].weights.data.copy() if hasattr(model.layers[i].weights.data, 'copy') else np.array(model.layers[i].weights.data))
if hasattr(layer, 'bias') and model.layers[i].bias is not None:
layer.bias = Tensor(model.layers[i].bias.data.copy() if hasattr(model.layers[i].bias.data, 'copy') else np.array(model.layers[i].bias.data))
pruned_layer, _ = prune_layer_neurons(layer, keep_ratio)
test_model.layers[i] = pruned_layer
return test_model
except Exception:
return None
def _apply_quantization(self, model: Sequential, bits: int) -> Optional[Sequential]:
"""Apply quantization to a model copy."""
try:
test_model = Sequential([Dense(layer.input_size, layer.output_size) for layer in model.layers])
for i, layer in enumerate(test_model.layers):
layer.weights = Tensor(model.layers[i].weights.data.copy() if hasattr(model.layers[i].weights.data, 'copy') else np.array(model.layers[i].weights.data))
if hasattr(layer, 'bias') and model.layers[i].bias is not None:
layer.bias = Tensor(model.layers[i].bias.data.copy() if hasattr(model.layers[i].bias.data, 'copy') else np.array(model.layers[i].bias.data))
quantize_layer_weights(layer, bits)
return test_model
except Exception:
return None
# %% nbgrader={"grade": false, "grade_id": "compression-comparison", "locked": false, "schema_version": 3, "solution": true, "task": false}
#| export
def compare_compression_techniques(original_model: Sequential) -> Dict[str, Dict[str, Any]]:
"""
Compare all compression techniques on the same model.
Args:
original_model: Base model to compress using different techniques
Returns:
Dictionary comparing results from different compression approaches
TODO: Implement comprehensive compression comparison.
STEP-BY-STEP IMPLEMENTATION:
1. Set up baseline metrics from original model
2. Apply each compression technique individually
3. Apply combined compression techniques
4. Measure and compare all results
5. Return comprehensive comparison data
COMPARISON DIMENSIONS:
- Model size (MB)
- Parameter count
- Compression ratio
- Memory reduction
- Estimated speedup (for structured techniques)
IMPLEMENTATION HINTS:
- Create separate model copies for each technique
- Use consistent parameters across techniques
- Track both individual and combined effects
- Include baseline for reference
LEARNING CONNECTIONS:
- This is how research papers compare compression methods
- Production systems need this analysis for deployment decisions
- Understanding trade-offs guides technique selection
"""
### BEGIN SOLUTION
results = {}
metrics = CompressionMetrics()
# Baseline: Original model
baseline_params = metrics.count_parameters(original_model)
baseline_size = metrics.calculate_model_size(original_model)
results['baseline'] = {
'technique': 'Original Model',
'parameters': baseline_params['total_parameters'],
'size_mb': baseline_size['size_mb'],
'compression_ratio': 1.0,
'memory_reduction': 0.0
}
# Technique 1: Magnitude-based pruning only
model_pruning = Sequential([Dense(layer.input_size, layer.output_size) for layer in original_model.layers])
for i, layer in enumerate(model_pruning.layers):
layer.weights = Tensor(original_model.layers[i].weights.data.copy() if hasattr(original_model.layers[i].weights.data, 'copy') else np.array(original_model.layers[i].weights.data))
if hasattr(layer, 'bias') and original_model.layers[i].bias is not None:
layer.bias = Tensor(original_model.layers[i].bias.data.copy() if hasattr(original_model.layers[i].bias.data, 'copy') else np.array(original_model.layers[i].bias.data))
# Apply magnitude pruning to each layer
total_sparsity = 0
for i, layer in enumerate(model_pruning.layers):
if isinstance(layer, Dense):
_, prune_info = prune_weights_by_magnitude(layer, pruning_ratio=0.3)
total_sparsity += prune_info['sparsity']
avg_sparsity = total_sparsity / len(model_pruning.layers)
pruning_params = metrics.count_parameters(model_pruning)
pruning_size = metrics.calculate_model_size(model_pruning)
results['magnitude_pruning'] = {
'technique': 'Magnitude Pruning (30%)',
'parameters': pruning_params['total_parameters'],
'size_mb': pruning_size['size_mb'],
'compression_ratio': baseline_size['size_mb'] / pruning_size['size_mb'],
'memory_reduction': (baseline_size['size_mb'] - pruning_size['size_mb']) / baseline_size['size_mb'],
'sparsity': avg_sparsity
}
# Technique 2: Quantization only
model_quantization = Sequential([Dense(layer.input_size, layer.output_size) for layer in original_model.layers])
for i, layer in enumerate(model_quantization.layers):
layer.weights = Tensor(original_model.layers[i].weights.data.copy() if hasattr(original_model.layers[i].weights.data, 'copy') else np.array(original_model.layers[i].weights.data))
if hasattr(layer, 'bias') and original_model.layers[i].bias is not None:
layer.bias = Tensor(original_model.layers[i].bias.data.copy() if hasattr(original_model.layers[i].bias.data, 'copy') else np.array(original_model.layers[i].bias.data))
# Apply quantization to each layer
total_memory_reduction = 0
for i, layer in enumerate(model_quantization.layers):
if isinstance(layer, Dense):
_, quant_info = quantize_layer_weights(layer, bits=8)
total_memory_reduction += quant_info['memory_reduction']
avg_memory_reduction = total_memory_reduction / len(model_quantization.layers)
quantization_size = metrics.calculate_model_size(model_quantization, dtype='int8')
results['quantization'] = {
'technique': 'Quantization (INT8)',
'parameters': baseline_params['total_parameters'],
'size_mb': quantization_size['size_mb'],
'compression_ratio': baseline_size['size_mb'] / quantization_size['size_mb'],
'memory_reduction': (baseline_size['size_mb'] - quantization_size['size_mb']) / baseline_size['size_mb'],
'avg_memory_reduction_factor': avg_memory_reduction
}
# Technique 3: Structured pruning only
model_structured = Sequential([Dense(layer.input_size, layer.output_size) for layer in original_model.layers])
for i, layer in enumerate(model_structured.layers):
layer.weights = Tensor(original_model.layers[i].weights.data.copy() if hasattr(original_model.layers[i].weights.data, 'copy') else np.array(original_model.layers[i].weights.data))
if hasattr(layer, 'bias') and original_model.layers[i].bias is not None:
layer.bias = Tensor(original_model.layers[i].bias.data.copy() if hasattr(original_model.layers[i].bias.data, 'copy') else np.array(original_model.layers[i].bias.data))
# Apply structured pruning to each layer
total_param_reduction = 0
for i, layer in enumerate(model_structured.layers):
if isinstance(layer, Dense):
pruned_layer, struct_info = prune_layer_neurons(layer, keep_ratio=0.75)
model_structured.layers[i] = pruned_layer
total_param_reduction += struct_info['param_reduction']
avg_param_reduction = total_param_reduction / len(model_structured.layers)
structured_params = metrics.count_parameters(model_structured)
structured_size = metrics.calculate_model_size(model_structured)
results['structured_pruning'] = {
'technique': 'Structured Pruning (75% neurons kept)',
'parameters': structured_params['total_parameters'],
'size_mb': structured_size['size_mb'],
'compression_ratio': baseline_size['size_mb'] / structured_size['size_mb'],
'memory_reduction': (baseline_size['size_mb'] - structured_size['size_mb']) / baseline_size['size_mb'],
'param_reduction': avg_param_reduction
}
# Technique 4: Combined approach
model_combined = Sequential([Dense(layer.input_size, layer.output_size) for layer in original_model.layers])
for i, layer in enumerate(model_combined.layers):
layer.weights = Tensor(original_model.layers[i].weights.data.copy() if hasattr(original_model.layers[i].weights.data, 'copy') else np.array(original_model.layers[i].weights.data))
if hasattr(layer, 'bias') and original_model.layers[i].bias is not None:
layer.bias = Tensor(original_model.layers[i].bias.data.copy() if hasattr(original_model.layers[i].bias.data, 'copy') else np.array(original_model.layers[i].bias.data))
# Apply magnitude pruning + quantization + structured pruning
for i, layer in enumerate(model_combined.layers):
if isinstance(layer, Dense):
# Step 1: Magnitude pruning
_, _ = prune_weights_by_magnitude(layer, pruning_ratio=0.2)
# Step 2: Quantization
_, _ = quantize_layer_weights(layer, bits=8)
# Step 3: Structured pruning
pruned_layer, _ = prune_layer_neurons(layer, keep_ratio=0.8)
model_combined.layers[i] = pruned_layer
combined_params = metrics.count_parameters(model_combined)
combined_size = metrics.calculate_model_size(model_combined, dtype='int8')
results['combined'] = {
'technique': 'Combined (Pruning + Quantization + Structured)',
'parameters': combined_params['total_parameters'],
'size_mb': combined_size['size_mb'],
'compression_ratio': baseline_size['size_mb'] / combined_size['size_mb'],
'memory_reduction': (baseline_size['size_mb'] - combined_size['size_mb']) / baseline_size['size_mb']
}
return results
### END SOLUTION
# %% [markdown]
"""
## 🧪 Testing Infrastructure
### 🔬 Unit Testing Pattern
Each compression technique includes comprehensive unit tests:
1. **Functionality verification**: Core algorithms work correctly
2. **Edge case handling**: Robust error handling and boundary conditions
3. **Statistical validation**: Compression metrics and analysis
4. **Performance measurement**: Before/after comparisons
### 📈 Progress Tracking
- **CompressionMetrics**: ✅ Complete with parameter counting
- **Magnitude-based pruning**: ✅ Complete with sparsity calculation
- **Quantization**: ✅ Complete with multi-bit weight quantization
- **Knowledge distillation**: ✅ Complete with teacher-student training
- **Structured pruning**: ✅ Complete with neuron-level removal
- **Comprehensive comparison**: ✅ Complete with side-by-side technique analysis
### 🎓 Educational Value
- **Conceptual understanding**: Why compression matters
- **Practical implementation**: Build techniques from scratch
- **Real-world connections**: Mobile, edge, and production deployment
- **Systems thinking**: Balance accuracy, efficiency, and constraints
This module teaches the essential skills for deploying AI in resource-constrained environments!
"""
# %% [markdown]
"""
### 🧪 Unit Test: ML Systems Compression Profiler
This test validates the CompressionSystemsProfiler implementation, ensuring it provides comprehensive analysis of compression techniques for production deployment scenarios.
"""
# %% nbgrader={"grade": false, "grade_id": "test-systems-profiler", "locked": false, "schema_version": 3, "solution": false, "task": false}
def test_unit_compression_systems_profiler():
"""Unit test for the CompressionSystemsProfiler class."""
print("🔬 Unit Test: ML Systems Compression Profiler...")
# Create a test model
model = Sequential([
Dense(784, 256),
Dense(256, 128),
Dense(128, 10)
])
# Initialize profiler
profiler = CompressionSystemsProfiler()
# Test quantization impact analysis
quant_analysis = profiler.analyze_quantization_impact(model, target_bits=[32, 16, 8])
# Verify quantization analysis structure
assert 'quantization_analysis' in quant_analysis, "Should include quantization analysis"
assert 'deployment_scenarios' in quant_analysis, "Should include deployment scenarios"
assert '8bit' in quant_analysis['quantization_analysis'], "Should analyze 8-bit quantization"
# Verify hardware analysis
bit8_analysis = quant_analysis['quantization_analysis']['8bit']
assert 'hardware_analysis' in bit8_analysis, "Should include hardware analysis"
assert 'mobile_arm' in bit8_analysis['hardware_analysis'], "Should analyze mobile ARM deployment"
assert 'edge_tpu' in bit8_analysis['hardware_analysis'], "Should analyze edge TPU deployment"
assert 'gpu_cloud' in bit8_analysis['hardware_analysis'], "Should analyze GPU cloud deployment"
print(f"✅ Quantization analysis works: {len(quant_analysis['quantization_analysis'])} bit widths analyzed")
# Test compression ratio improvements
for bits in [16, 8]:
bit_key = f'{bits}bit'
if bit_key in quant_analysis['quantization_analysis']:
compression_ratio = quant_analysis['quantization_analysis'][bit_key]['compression_ratio']
assert compression_ratio > 1.0, f"{bits}-bit should provide compression"
print("✅ Compression ratios verified")
# Test deployment recommendations
scenarios = quant_analysis['deployment_scenarios']
assert 'mobile_deployment' in scenarios, "Should provide mobile deployment recommendations"
assert 'edge_inference' in scenarios, "Should provide edge inference recommendations"
assert 'cloud_serving' in scenarios, "Should provide cloud serving recommendations"
for scenario in scenarios.values():
assert 'recommended_bits' in scenario, "Should recommend specific bit width"
assert 'rationale' in scenario, "Should provide rationale for recommendation"
assert 'expected_benefits' in scenario, "Should list expected benefits"
print("✅ Deployment recommendations work correctly")
# Test inference speedup measurement
compressed_model = Sequential([
Dense(784, 128), # Smaller than original
Dense(128, 64),
Dense(64, 10)
])
speedup_analysis = profiler.measure_inference_speedup(model, compressed_model, batch_sizes=[1, 32])
# Verify speedup analysis structure
assert 'flops_analysis' in speedup_analysis, "Should include FLOPs analysis"
assert 'memory_analysis' in speedup_analysis, "Should include memory analysis"
assert 'speedup_estimates' in speedup_analysis, "Should include speedup estimates"
# Verify speedup calculations
flops_analysis = speedup_analysis['flops_analysis']
assert flops_analysis['computational_speedup'] > 1.0, "Compressed model should be faster"
memory_analysis = speedup_analysis['memory_analysis']
assert memory_analysis['memory_speedup'] > 1.0, "Compressed model should use less memory"
print(f"✅ Speedup analysis works: {flops_analysis['computational_speedup']:.2f}x compute, {memory_analysis['memory_speedup']:.2f}x memory")
# Test accuracy tradeoff analysis
tradeoff_analysis = profiler.analyze_accuracy_tradeoffs(model, compression_levels=[0.1, 0.5, 0.9])
# Verify tradeoff analysis structure
assert 'compression_curves' in tradeoff_analysis, "Should include compression curves"
assert 'optimal_operating_points' in tradeoff_analysis, "Should include optimal operating points"
# Verify compression techniques are analyzed
curves = tradeoff_analysis['compression_curves']
expected_techniques = ['magnitude_pruning', 'structured_pruning', 'quantization']
for technique in expected_techniques:
if technique in curves and len(curves[technique]) > 0:
print(f"{technique.replace('_', ' ').title()} analysis included")
print("✅ Accuracy tradeoff analysis works correctly")
print("📈 Progress: CompressionSystemsProfiler ✓")
print("🎯 ML Systems Profiler behavior:")
print(" - Analyzes quantization impact across hardware platforms")
print(" - Measures inference speedup for different scenarios")
print(" - Provides production deployment recommendations")
print(" - Analyzes accuracy vs compression tradeoffs")
print()
# Test will be run in main block
# %% [markdown]
"""
### 🧪 Unit Test: Comprehensive Compression Comparison
This test validates the complete compression pipeline, comparing different techniques (pruning, quantization, distillation) to analyze their effectiveness and trade-offs in model optimization.
"""
# %% nbgrader={"grade": false, "grade_id": "test-comprehensive-comparison", "locked": false, "schema_version": 3, "solution": false, "task": false}
def test_unit_comprehensive_comparison():
"""Unit test for the comparison of different compression techniques."""
print("🔬 Unit Test: Comprehensive Comparison of Techniques...")
# Create a simple model
model = Sequential([
Dense(784, 128),
Dense(128, 64),
Dense(64, 10)
])
# Run comprehensive comparison
results = compare_compression_techniques(model)
# Verify baseline exists
assert 'baseline' in results, "Baseline results should be included"
baseline = results['baseline']
assert baseline['compression_ratio'] == 1.0, f"Baseline compression ratio should be 1.0, got {baseline['compression_ratio']}"
print(f"✅ Baseline analysis works: {baseline['parameters']} parameters, {baseline['size_mb']} MB")
# Verify individual techniques
techniques = ['magnitude_pruning', 'quantization', 'structured_pruning', 'combined']
for technique in techniques:
assert technique in results, f"Missing technique: {technique}"
result = results[technique]
# Magnitude pruning creates sparsity but doesn't reduce file size in our simulation
if technique == 'magnitude_pruning':
assert result['compression_ratio'] >= 1.0, f"{technique} should have compression ratio >= 1.0"
else:
assert result['compression_ratio'] > 1.0, f"{technique} should have compression ratio > 1.0"
assert 0 <= result['memory_reduction'] <= 1.0, f"{technique} memory reduction should be between 0 and 1"
print("✅ All compression techniques work correctly")
# Verify compression effectiveness
quantization = results['quantization']
structured = results['structured_pruning']
combined = results['combined']
assert quantization['compression_ratio'] >= 3.0, f"Quantization should achieve at least 3x compression, got {quantization['compression_ratio']:.2f}"
assert structured['compression_ratio'] >= 1.2, f"Structured pruning should achieve at least 1.2x compression, got {structured['compression_ratio']:.2f}"
assert combined['compression_ratio'] >= quantization['compression_ratio'], f"Combined should be at least as good as best individual technique"
print(f"✅ Compression effectiveness verified:")
print(f" - Quantization: {quantization['compression_ratio']:.2f}x compression")
print(f" - Structured: {structured['compression_ratio']:.2f}x compression")
print(f" - Combined: {combined['compression_ratio']:.2f}x compression")
# Verify different techniques have different characteristics
magnitude = results['magnitude_pruning']
assert 'sparsity' in magnitude, "Magnitude pruning should report sparsity"
assert 'avg_memory_reduction_factor' in quantization, "Quantization should report memory reduction factor"
assert 'param_reduction' in structured, "Structured pruning should report parameter reduction"
print("✅ Technique-specific metrics work correctly")
print("📈 Progress: Comprehensive Comparison ✓")
print("🎯 Comprehensive comparison behavior:")
print(" - Compares all techniques systematically")
print(" - Provides detailed metrics for each approach")
print(" - Enables informed compression strategy selection")
print(" - Demonstrates combined technique effectiveness")
print()
# Test will be run in main block
# %% [markdown]
"""
### 🧪 Integration Test: Pruning with Sequential Models
This integration test validates that magnitude pruning integrates cleanly with TinyTorch's Sequential models: a layer is pruned, swapped back into the model, and its sparsity is verified end to end.
"""
# %%
def test_module_compression_pruning():
    """Integration test for applying magnitude pruning to a layer inside a Sequential model."""
    print("🔬 Running Integration Test: Pruning on Sequential Model...")
# 1. Create a simple Sequential model
model = Sequential([
Dense(10, 20),
Dense(20, 5)
])
# 2. Get the first Dense layer to be pruned
layer_to_prune = model.layers[0]
# 3. Calculate initial sparsity
initial_sparsity = calculate_sparsity(layer_to_prune)
# 4. Prune the layer's weights
pruned_layer, _ = prune_weights_by_magnitude(layer_to_prune, pruning_ratio=0.5)
# 5. Replace the layer in the model
model.layers[0] = pruned_layer
# 6. Calculate final sparsity
final_sparsity = calculate_sparsity(model.layers[0])
print(f"Initial Sparsity: {initial_sparsity:.2f}, Final Sparsity: {final_sparsity:.2f}")
assert final_sparsity > initial_sparsity, "Sparsity should increase after pruning."
assert abs(final_sparsity - 0.5) < 0.01, "Sparsity should be close to the pruning ratio."
print("✅ Integration Test Passed: Pruning correctly modified a layer in a Sequential model.")
# %% [markdown]
"""
### 🧪 Integration Test: Comprehensive Compression Pipeline
This comprehensive integration test validates the complete compression workflow, applying multiple techniques in sequence and ensuring proper interaction between compression methods and model architectures.
"""
# %%
def test_module_compression():
"""
Integration test for applying multiple compression techniques to a Sequential model.
Tests that multiple compression techniques can be applied to a Sequential model
and that metrics are tracked correctly.
"""
print("🔬 Running Integration Test: Comprehensive Compression...")
# 1. Create a model and metrics calculator
model = Sequential([
Dense(100, 50),
Dense(50, 20),
Dense(20, 10)
])
metrics = CompressionMetrics()
# 2. Get baseline metrics
initial_params = metrics.count_parameters(model)['total_parameters']
initial_size_mb = metrics.calculate_model_size(model)['size_mb']
# 3. Apply pruning to the first layer
layer_to_prune = model.layers[0]
model.layers[0], _ = prune_weights_by_magnitude(layer_to_prune, pruning_ratio=0.8)
# 4. Verify sparsity increased and parameters are the same
sparsity_after_pruning = calculate_sparsity(model.layers[0])
params_after_pruning = metrics.count_parameters(model)['total_parameters']
assert sparsity_after_pruning > 0.79, "Sparsity should be high after pruning."
assert params_after_pruning == initial_params, "Pruning shouldn't change param count."
print(f"✅ Pruning successful. Sparsity: {sparsity_after_pruning:.2f}")
# 5. Apply quantization to all layers
for i, layer in enumerate(model.layers):
if isinstance(layer, Dense):
model.layers[i], _ = quantize_layer_weights(layer, bits=8)
# 6. Verify model size is reduced
final_size_mb = metrics.calculate_model_size(model, dtype='int8')['size_mb']
print(f"Initial size: {initial_size_mb:.4f} MB, Final size: {final_size_mb:.4f} MB")
assert final_size_mb < initial_size_mb / 1.5, "Quantization should significantly reduce model size."
print("✅ Integration Test Passed: Comprehensive compression successfully applied and verified.")
# %% [markdown]
"""
## 🧪 Module Testing
Time to test your implementation! This section uses TinyTorch's standardized testing framework to ensure your implementation works correctly.
**This testing section is locked** - it provides consistent feedback across all modules and cannot be modified.
"""
# %% [markdown]
"""
## 🤖 AUTO TESTING
"""
# %% nbgrader={"grade": false, "grade_id": "standardized-testing", "locked": true, "schema_version": 3, "solution": false, "task": false}
# =============================================================================
# STANDARDIZED MODULE TESTING - DO NOT MODIFY
# This cell is locked to ensure consistent testing across all TinyTorch modules
# =============================================================================
if __name__ == "__main__":
# Run all compression tests
test_unit_compression_metrics()
test_unit_magnitude_pruning()
test_unit_structured_pruning()
test_unit_quantization()
test_unit_distillation()
test_unit_compression_systems_profiler()
test_unit_comprehensive_comparison()
test_module_compression()
print("All tests passed!")
print("Compression module complete!")
# %% [markdown]
"""
## 🤔 ML Systems Thinking: Compression in Production
### 🏗️ System Design Questions
Think about how compression fits into larger ML systems:
1. **Multi-Model Serving**: How would you design a system that serves multiple compressed models with different optimization profiles (latency-optimized vs memory-optimized) and automatically routes requests based on device capabilities?
2. **Compression Pipeline Automation**: What would a production pipeline look like that automatically selects compression techniques based on target deployment environment (mobile, edge, cloud) and performance requirements?
3. **Hardware-Aware Optimization**: How might you design a system that profiles target hardware (ARM, x86, TPU, GPU) and automatically selects the optimal combination of quantization, pruning, and structured optimization?
4. **Dynamic Compression**: How could you implement a system that adjusts compression levels in real-time based on available resources, battery level, or network conditions?
### 🚀 Production ML Questions
Connect compression to real-world deployment challenges:
5. **Model Store Design**: How would you architect a model registry that stores multiple compressed versions of the same model and serves the appropriate version based on client capabilities?
6. **A/B Testing Compressed Models**: What metrics would you track when A/B testing compressed vs uncompressed models in production, and how would you handle the accuracy vs performance tradeoff?
7. **Compression Monitoring**: How would you design monitoring systems to detect when compressed models are degrading in accuracy over time, and what automated responses would you implement?
8. **Cross-Platform Deployment**: How might you design a system that takes a single trained model and automatically generates optimized versions for iOS, Android, web browsers, and edge devices?
### 🔧 Framework Design Questions
Analyze how compression integrates with ML frameworks:
9. **Quantization-Aware Training**: How does PyTorch's fake quantization during training compare to post-training quantization, and when would you choose each approach in production?
10. **Structured Pruning Integration**: How might you design APIs that make structured pruning as easy to use as dropout, while handling the complexity of layer dimension changes?
11. **Knowledge Distillation Frameworks**: What would a framework look like that automatically identifies the best teacher-student architecture pairs and handles the complexity of multi-teacher distillation?
12. **Compression Search**: How could you implement neural architecture search specifically for finding optimal compression strategies rather than just model architectures?
### ⚡ Performance & Scale Questions
Consider compression in large-scale systems:
13. **Distributed Compression**: How would you design systems that perform compression operations across multiple GPUs or machines, especially for very large models that don't fit in single-device memory?
14. **Incremental Compression**: What would it look like to compress models incrementally as they're being trained, rather than waiting until training completion?
15. **Compression for Federated Learning**: How might compression techniques need to be adapted for federated learning scenarios where models are updated across many edge devices?
16. **Memory-Bandwidth Optimization**: How would you design compression strategies specifically optimized for different memory hierarchies (L1/L2 cache, main memory, storage) in modern processors?
### 💡 Reflection Prompts
- Which compression technique would be most critical for your target deployment scenario?
- How do the compression trade-offs change when moving from research to production?
- What aspects of hardware architecture most influence compression strategy selection?
- How might compression techniques evolve as hardware capabilities change?
## 🎯 MODULE SUMMARY: Model Compression
Congratulations! You've successfully implemented model compression techniques:
### What You've Accomplished
✅ **Pruning**: Removing unnecessary weights for efficiency
✅ **Quantization**: Reducing precision for smaller models
✅ **Knowledge Distillation**: Transferring knowledge to smaller models
✅ **Structured Optimization**: Removing entire neurons for hardware efficiency
✅ **ML Systems Profiling**: Production-grade compression analysis
✅ **Real Applications**: Deploying efficient models to production
### Key Concepts You've Learned
- **Magnitude-based pruning**: Removing low-importance weights
- **Advanced quantization**: Multi-bit precision optimization with hardware analysis
- **Knowledge distillation**: Teacher-student training paradigms
- **Structured pruning**: Hardware-aware neuron removal
- **Production profiling**: Comprehensive deployment analysis
- **ML systems integration**: How compression fits into larger systems
### Professional Skills Developed
- **Production compression engineering**: Building systems for real-world deployment
- **Hardware-aware optimization**: Tailoring compression to specific processors
- **Performance profiling**: Measuring and optimizing compression trade-offs
- **Systems design**: Understanding compression in ML infrastructure
- **API design**: Clean interfaces for compression operations
### Ready for Advanced Applications
Your compression implementations now enable:
- **Mobile AI deployment**: Optimized models for smartphones and tablets
- **Edge computing**: Efficient inference on resource-constrained devices
- **Production serving**: Cost-effective model deployment at scale
- **Real-time systems**: Low-latency inference for time-critical applications
- **Multi-platform deployment**: Optimized models across diverse hardware
### Connection to Real ML Systems
Your implementations mirror production systems:
- **PyTorch**: `torch.nn.utils.prune`, `torch.quantization`, `torch.fx` for optimization (see the sketch below)
- **TensorFlow**: Model Optimization Toolkit (TFLite, TensorRT integration)
- **Production frameworks**: ONNX Runtime, Apache TVM, MLPerf optimization
- **Industry standard**: Techniques used by Google, Apple, Meta for mobile AI
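
For orientation, here is a minimal sketch of how the same two ideas look with PyTorch's built-in utilities. This is an illustrative, hedged example (it assumes PyTorch is installed and is not part of TinyTorch); the layer sizes are arbitrary:

```python
# Sketch: PyTorch analogues of magnitude pruning and post-training quantization.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Magnitude pruning: zero the 30% smallest-magnitude weights of the first Linear layer.
prune.l1_unstructured(model[0], name="weight", amount=0.3)
prune.remove(model[0], "weight")  # bake the pruning mask into the weights permanently

# Post-training dynamic quantization: store Linear weights as INT8 (~4x smaller than FP32).
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```

The functions you built in this module are simplified analogues of these calls, which makes the production APIs much easier to reason about when you encounter them.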
### Next Steps
1. **Export your code**: `tito export 12_compression`
2. **Test your implementation**: `tito test 12_compression`
3. **Experiment with profiling**: Try the CompressionSystemsProfiler on different models
4. **Deploy compressed models**: Test in real applications
5. **Move to Module 13**: Add custom kernels for maximum performance!
**Ready for advanced deployment?** Your compression techniques are now production-ready!
"""