mirror of
https://github.com/MLSysBook/TinyTorch.git
synced 2026-05-08 13:08:45 -05:00
Major Accomplishments:
• Rebuilt all 20 modules with comprehensive explanations before each function
• Fixed explanatory placement: detailed explanations before implementations, brief descriptions before tests
• Enhanced all modules with ASCII diagrams for visual learning
• Comprehensive individual module testing and validation
• Created milestone directory structure with working examples
• Fixed critical Module 01 indentation error (methods were outside the Tensor class)

Module Status:
✅ Modules 01-07: Fully working (Tensor → Training pipeline)
✅ Milestone 1: Perceptron - ACHIEVED (95% accuracy on 2D data)
✅ Milestone 2: MLP - ACHIEVED (complete training with autograd)
⚠️ Modules 08-20: Mixed results (import dependencies need fixes)

Educational Impact:
• Students can now learn the complete ML pipeline from tensors to training
• Clear progression: basic operations → neural networks → optimization
• Explanatory sections provide proper context before implementation
• Working milestones demonstrate practical ML capabilities

Next Steps:
• Fix import dependencies in advanced modules (9, 11, 12, 17-20)
• Debug timeout issues in modules 14, 15
• First 7 modules provide a solid foundation for immediate educational use
(https://claude.ai/code)
2235 lines
100 KiB
Plaintext
{
"cells": [
{
"cell_type": "markdown",
"id": "822c53e7",
"metadata": {
"cell_marker": "\"\"\""
},
"source": [
"# Compression - Neural Network Pruning for Edge Deployment\n",
"\n",
"Welcome to the Compression module! You'll implement pruning techniques that remove 70% of neural network parameters while maintaining accuracy, enabling deployment on resource-constrained edge devices.\n",
"\n",
"## Connection from Quantization (Module 17)\n",
"In Module 17, you learned quantization - reducing precision from FP32 to INT8. But even quantized models can be too large for edge devices! Compression attacks the problem differently: instead of making numbers smaller, we **remove numbers entirely** through strategic pruning.\n",
"\n",
"## Learning Goals\n",
"- Systems understanding: How neural network redundancy enables massive parameter reduction without accuracy loss\n",
"- Core implementation skill: Build magnitude-based pruning systems that identify and remove unimportant weights\n",
"- Pattern recognition: Understand when structured vs unstructured pruning optimizes for different hardware constraints\n",
"- Framework connection: See how your implementation mirrors production sparse inference systems\n",
"- Performance insight: Learn why 70% sparsity often provides optimal accuracy vs size tradeoffs\n",
"\n",
"## Build → Profile → Optimize\n",
"1. **Build**: Magnitude-based pruners that remove small weights, discover massive redundancy in neural networks\n",
"2. **Profile**: Measure model size reduction, accuracy impact, and sparse computation efficiency\n",
"3. **Optimize**: Implement structured pruning for hardware-friendly sparsity patterns\n",
"\n",
"## What You'll Achieve\n",
"By the end of this module, you'll have:\n",
"- Deep technical understanding of how neural networks contain massive redundancy that can be exploited for compression\n",
"- Practical capability to prune real CNNs and MLPs while maintaining 95%+ of original accuracy\n",
"- Systems insight into why pruning enables deployment scenarios impossible with dense models\n",
"- Performance consideration of when sparse computation provides real speedups vs theoretical ones\n",
"- Connection to production systems where pruning enables edge AI applications\n",
"\n",
"## Systems Reality Check\n",
"💡 **Production Context**: Apple's Neural Engine, Google's Edge TPU, and mobile inference frameworks heavily rely on sparsity for efficient computation\n",
"⚡ **Performance Note**: 70% sparsity provides 3-5x model compression with <2% accuracy loss, but speedup depends on hardware sparse computation support"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5f1bc48b",
"metadata": {
"nbgrader": {
"grade": false,
"grade_id": "compression-imports",
"locked": false,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"#| default_exp optimization.prune\n",
"\n",
"#| export\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import sys\n",
"from typing import Tuple, Optional, Dict, Any, List\n",
"from dataclasses import dataclass"
]
},
{
"cell_type": "markdown",
"id": "df5e40f2",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"## Part 1: Understanding Neural Network Redundancy\n",
"\n",
"Before implementing pruning, let's understand the fundamental insight: **neural networks are massively over-parameterized**. Most weights contribute little to the final output and can be removed without significant accuracy loss.\n",
"\n",
"### The Redundancy Discovery\n",
"- **Research insight**: Networks often have 80-90% redundant parameters\n",
"- **Lottery Ticket Hypothesis**: Sparse subnetworks can match dense network performance\n",
"- **Practical reality**: 70% sparsity typically loses <2% accuracy\n",
"- **Systems opportunity**: Massive compression enables edge deployment"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2a11964c",
"metadata": {
"lines_to_next_cell": 1,
"nbgrader": {
"grade": false,
"grade_id": "redundancy-analysis",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"#| export\n",
"def analyze_weight_redundancy(weights: np.ndarray, title: str = \"Weight Analysis\"):\n",
"    \"\"\"\n",
"    Analyze weight distributions to understand pruning opportunities.\n",
"\n",
"    This function reveals the natural sparsity and redundancy patterns\n",
"    in neural network weights that make pruning effective.\n",
"    \"\"\"\n",
"    # Flatten weights for analysis\n",
"    w_flat = weights.flatten()\n",
"    w_abs = np.abs(w_flat)\n",
"\n",
"    print(f\"📊 {title}\")\n",
"    print(\"=\" * 50)\n",
"    print(f\"Total parameters: {len(w_flat):,}\")\n",
"    print(f\"Mean absolute weight: {w_abs.mean():.6f}\")\n",
"    print(f\"Weight standard deviation: {w_abs.std():.6f}\")\n",
"\n",
"    # Analyze weight distribution percentiles\n",
"    percentiles = [50, 70, 80, 90, 95, 99]\n",
"    print(\"\\nWeight Magnitude Percentiles:\")\n",
"    for p in percentiles:\n",
"        val = np.percentile(w_abs, p)\n",
"        smaller_count = np.sum(w_abs <= val)\n",
"        print(f\"  {p:2d}%: {val:.6f} ({smaller_count:,} weights ≤ this value)\")\n",
"\n",
"    # Show natural sparsity (near-zero weights)\n",
"    zero_threshold = w_abs.mean() * 0.1  # 10% of mean as \"near-zero\"\n",
"    near_zero_count = np.sum(w_abs <= zero_threshold)\n",
"    natural_sparsity = near_zero_count / len(w_flat) * 100\n",
"\n",
"    print(\"\\nNatural Sparsity Analysis:\")\n",
"    print(f\"  Threshold (10% of mean): {zero_threshold:.6f}\")\n",
"    print(f\"  Near-zero weights: {near_zero_count:,} ({natural_sparsity:.1f}%)\")\n",
"    print(\"  Already sparse without pruning!\")\n",
"\n",
"    return {\n",
"        'total_params': len(w_flat),\n",
"        'mean_abs': w_abs.mean(),\n",
"        'std': w_abs.std(),\n",
"        'natural_sparsity': natural_sparsity,\n",
"        'percentiles': {p: np.percentile(w_abs, p) for p in percentiles}\n",
"    }"
]
},
{
"cell_type": "markdown",
"id": "8f7df3ed",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"### Test: Weight Redundancy Analysis\n",
"\n",
"Let's verify our redundancy analysis works on realistic neural network weights."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b153cb7d",
"metadata": {
"nbgrader": {
"grade": true,
"grade_id": "test-redundancy-analysis",
"locked": false,
"points": 5,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"def test_redundancy_analysis():\n",
"    \"\"\"Test weight redundancy analysis on sample networks.\"\"\"\n",
"    print(\"Testing weight redundancy analysis...\")\n",
"\n",
"    # Create realistic CNN weights with natural sparsity\n",
"    np.random.seed(42)\n",
"    conv_weights = np.random.normal(0, 0.02, (64, 32, 3, 3))  # Conv layer\n",
"    fc_weights = np.random.normal(0, 0.01, (1000, 512))  # FC layer\n",
"\n",
"    # Analyze both layer types\n",
"    conv_stats = analyze_weight_redundancy(conv_weights, \"Conv2D Layer Weights\")\n",
"    fc_stats = analyze_weight_redundancy(fc_weights, \"Dense Layer Weights\")\n",
"\n",
"    # Verify analysis produces reasonable results\n",
"    assert conv_stats['total_params'] == 64*32*3*3, \"Conv param count mismatch\"\n",
"    assert fc_stats['total_params'] == 1000*512, \"FC param count mismatch\"\n",
"    assert conv_stats['natural_sparsity'] > 0, \"Should detect some natural sparsity\"\n",
"    assert fc_stats['natural_sparsity'] > 0, \"Should detect some natural sparsity\"\n",
"\n",
"    print(\"✅ Weight redundancy analysis test passed!\")\n",
"\n",
"test_redundancy_analysis()"
]
},
{
"cell_type": "markdown",
"id": "92721059",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"## Part 2: Magnitude-Based Pruning - The Foundation\n",
"\n",
"The simplest and most effective pruning technique: **remove the smallest weights**. The intuition is that small weights contribute little to the network's computation, so removing them should have minimal impact on accuracy.\n",
"\n",
"### Magnitude Pruning Algorithm\n",
"1. **Calculate importance**: Use absolute weight magnitude as importance metric\n",
"2. **Rank weights**: Sort all weights by absolute value\n",
"3. **Set threshold**: Choose magnitude threshold for desired sparsity level\n",
"4. **Create mask**: Zero out weights below threshold\n",
"5. **Apply mask**: Element-wise multiplication to enforce sparsity"
]
},
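The five steps above can be sketched end-to-end in a few lines of NumPy. This is a standalone sketch with toy values, independent of the class-based implementation in this module:

```python
import numpy as np

np.random.seed(0)
w = np.random.normal(0, 0.1, (4, 4))   # toy weight matrix
sparsity = 0.5                          # remove 50% of weights

# Steps 1-3: importance = |w|; the sparsity-percentile of |w| is the threshold
threshold = np.percentile(np.abs(w), sparsity * 100)

# Step 4: binary mask (1 = keep, 0 = prune)
mask = (np.abs(w) >= threshold).astype(np.float32)

# Step 5: enforce sparsity by element-wise multiplication
w_pruned = w * mask

print(f"actual sparsity: {np.mean(mask == 0):.1%}")
```

Note that with ties or interpolated percentiles the achieved sparsity can differ slightly from the target, which is why the tests below only check it approximately.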
{
"cell_type": "code",
"execution_count": null,
"id": "850f7f52",
"metadata": {
"lines_to_next_cell": 1,
"nbgrader": {
"grade": false,
"grade_id": "magnitude-pruning",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"#| export\n",
"class MagnitudePruner:\n",
"    \"\"\"\n",
"    Magnitude-based pruning for neural network compression.\n",
"\n",
"    This class implements the core pruning algorithm used in production\n",
"    systems: remove weights with smallest absolute values.\n",
"    \"\"\"\n",
"\n",
"    def __init__(self):\n",
"        # BEGIN SOLUTION\n",
"        self.pruning_masks = {}\n",
"        self.original_weights = {}\n",
"        self.pruning_stats = {}\n",
"        # END SOLUTION\n",
"\n",
"    def calculate_threshold(self, weights: np.ndarray, sparsity: float) -> float:\n",
"        \"\"\"\n",
"        Calculate magnitude threshold for desired sparsity level.\n",
"\n",
"        Args:\n",
"            weights: Network weights to analyze\n",
"            sparsity: Fraction of weights to remove (0.0 to 1.0)\n",
"\n",
"        Returns:\n",
"            threshold: Magnitude below which weights should be pruned\n",
"        \"\"\"\n",
"        # BEGIN SOLUTION\n",
"        # Flatten weights and get absolute values\n",
"        w_flat = weights.flatten()\n",
"        w_abs = np.abs(w_flat)\n",
"\n",
"        # Calculate percentile threshold\n",
"        # sparsity=0.7 means remove 70% of weights (keep top 30%)\n",
"        percentile = sparsity * 100\n",
"        threshold = np.percentile(w_abs, percentile)\n",
"\n",
"        return threshold\n",
"        # END SOLUTION\n",
"\n",
"    def create_mask(self, weights: np.ndarray, threshold: float) -> np.ndarray:\n",
"        \"\"\"\n",
"        Create binary mask for pruning weights below threshold.\n",
"\n",
"        Args:\n",
"            weights: Original weights\n",
"            threshold: Magnitude threshold for pruning\n",
"\n",
"        Returns:\n",
"            mask: Binary mask (1=keep, 0=prune)\n",
"        \"\"\"\n",
"        # BEGIN SOLUTION\n",
"        # Create mask: keep weights with absolute value >= threshold\n",
"        mask = (np.abs(weights) >= threshold).astype(np.float32)\n",
"        return mask\n",
"        # END SOLUTION\n",
"\n",
"    def prune(self, weights: np.ndarray, sparsity: float = 0.7) -> Tuple[np.ndarray, np.ndarray, Dict]:\n",
"        \"\"\"\n",
"        Prune network weights using magnitude-based pruning.\n",
"\n",
"        Args:\n",
"            weights: Original dense weights\n",
"            sparsity: Fraction of weights to prune (default: 70%)\n",
"\n",
"        Returns:\n",
"            pruned_weights: Weights with small values set to zero\n",
"            mask: Binary pruning mask\n",
"            stats: Pruning statistics\n",
"        \"\"\"\n",
"        # BEGIN SOLUTION\n",
"        # Record original shape and size for statistics\n",
"        original_shape = weights.shape\n",
"        original_size = weights.size\n",
"\n",
"        # Calculate threshold for desired sparsity\n",
"        threshold = self.calculate_threshold(weights, sparsity)\n",
"\n",
"        # Create pruning mask\n",
"        mask = self.create_mask(weights, threshold)\n",
"\n",
"        # Apply pruning\n",
"        pruned_weights = weights * mask\n",
"\n",
"        # Calculate statistics\n",
"        actual_sparsity = np.sum(mask == 0) / mask.size\n",
"        remaining_params = np.sum(mask == 1)\n",
"        compression_ratio = original_size / remaining_params if remaining_params > 0 else float('inf')\n",
"\n",
"        stats = {\n",
"            'target_sparsity': sparsity,\n",
"            'actual_sparsity': actual_sparsity,\n",
"            'threshold': threshold,\n",
"            'original_params': original_size,\n",
"            'remaining_params': int(remaining_params),\n",
"            'pruned_params': int(original_size - remaining_params),\n",
"            'compression_ratio': compression_ratio\n",
"        }\n",
"\n",
"        return pruned_weights, mask, stats\n",
"        # END SOLUTION\n",
"\n",
"    def measure_accuracy_impact(self, original_weights: np.ndarray, pruned_weights: np.ndarray) -> Dict:\n",
"        \"\"\"\n",
"        Measure the impact of pruning on weight statistics.\n",
"\n",
"        This gives us a proxy for accuracy impact before running full evaluation.\n",
"        \"\"\"\n",
"        # BEGIN SOLUTION\n",
"        # Calculate difference statistics\n",
"        weight_diff = np.abs(original_weights - pruned_weights)\n",
"\n",
"        # Normalize by original weight magnitude for relative comparison\n",
"        original_abs = np.abs(original_weights)\n",
"        relative_error = weight_diff / (original_abs + 1e-8)  # Avoid division by zero\n",
"\n",
"        return {\n",
"            'mean_absolute_error': weight_diff.mean(),\n",
"            'max_absolute_error': weight_diff.max(),\n",
"            'mean_relative_error': relative_error.mean(),\n",
"            'weight_norm_preservation': np.linalg.norm(pruned_weights) / np.linalg.norm(original_weights)\n",
"        }\n",
"        # END SOLUTION"
]
},
{
"cell_type": "markdown",
"id": "824d7184",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"### Test: Magnitude-Based Pruning Implementation\n",
"\n",
"Let's verify our magnitude pruning works correctly."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "94fe2b37",
"metadata": {
"nbgrader": {
"grade": true,
"grade_id": "test-magnitude-pruning",
"locked": false,
"points": 15,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"def test_magnitude_pruning():\n",
"    \"\"\"Test magnitude-based pruning implementation.\"\"\"\n",
"    print(\"Testing magnitude-based pruning...\")\n",
"\n",
"    pruner = MagnitudePruner()\n",
"\n",
"    # Test case 1: Simple weights with known distribution\n",
"    weights = np.array([\n",
"        [0.5, 0.1, 0.8],\n",
"        [0.05, 0.9, 0.2],\n",
"        [0.3, 0.02, 0.7]\n",
"    ])\n",
"\n",
"    # Test 50% sparsity (9 weights, so roughly 4-5 should be kept)\n",
"    pruned, mask, stats = pruner.prune(weights, sparsity=0.5)\n",
"\n",
"    print(\"Original weights:\")\n",
"    print(weights)\n",
"    print(\"Pruning mask:\")\n",
"    print(mask)\n",
"    print(\"Pruned weights:\")\n",
"    print(pruned)\n",
"    print(f\"Statistics: {stats}\")\n",
"\n",
"    # Verify sparsity is approximately correct\n",
"    actual_sparsity = stats['actual_sparsity']\n",
"    assert 0.4 <= actual_sparsity <= 0.6, f\"Sparsity should be ~50%, got {actual_sparsity:.1%}\"\n",
"\n",
"    # Verify mask is binary\n",
"    assert np.all((mask == 0) | (mask == 1)), \"Mask should be binary\"\n",
"\n",
"    # Verify pruned weights match mask\n",
"    expected_pruned = weights * mask\n",
"    np.testing.assert_array_equal(pruned, expected_pruned, \"Pruned weights should match mask application\")\n",
"\n",
"    # Test case 2: High sparsity pruning\n",
"    large_weights = np.random.normal(0, 0.1, (100, 50))\n",
"    pruned_large, mask_large, stats_large = pruner.prune(large_weights, sparsity=0.8)\n",
"\n",
"    assert 0.75 <= stats_large['actual_sparsity'] <= 0.85, \"High sparsity should be approximately correct\"\n",
"    assert stats_large['compression_ratio'] >= 4.0, \"80% sparsity should give ~5x compression\"\n",
"\n",
"    # Test accuracy impact measurement\n",
"    accuracy_impact = pruner.measure_accuracy_impact(large_weights, pruned_large)\n",
"    assert 'mean_relative_error' in accuracy_impact, \"Should measure relative error\"\n",
"    assert accuracy_impact['weight_norm_preservation'] > 0, \"Should preserve some weight norm\"\n",
"\n",
"    print(\"✅ Magnitude-based pruning test passed!\")\n",
"\n",
"test_magnitude_pruning()"
]
},
{
"cell_type": "markdown",
"id": "d362f652",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"## Part 3: Structured vs Unstructured Pruning\n",
"\n",
"So far we've implemented **unstructured pruning** - removing individual weights anywhere. But this creates irregular sparsity patterns that are hard for hardware to accelerate. **Structured pruning** removes entire channels, filters, or blocks - creating regular patterns that map well to hardware.\n",
"\n",
"### Structured Pruning Benefits:\n",
"- **Hardware friendly**: Regular patterns enable efficient sparse computation\n",
"- **Memory layout**: Removes entire rows/columns, reducing memory footprint\n",
"- **Inference speed**: Actually accelerates computation (vs theoretical speedup)\n",
"- **Simple implementation**: No special sparse kernels needed"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1f8b15a4",
"metadata": {
"lines_to_next_cell": 1,
"nbgrader": {
"grade": false,
"grade_id": "structured-pruning",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"#| export\n",
"def prune_conv_filters(conv_weights: np.ndarray, sparsity: float = 0.5) -> Tuple[np.ndarray, List[int], Dict]:\n",
"    \"\"\"\n",
"    Structured pruning for convolutional layers - remove entire filters.\n",
"\n",
"    Args:\n",
"        conv_weights: Conv weights shaped (out_channels, in_channels, H, W)\n",
"        sparsity: Fraction of filters to remove\n",
"\n",
"    Returns:\n",
"        pruned_weights: Weights with filters removed\n",
"        kept_filters: Indices of filters that were kept\n",
"        stats: Pruning statistics\n",
"    \"\"\"\n",
"    # BEGIN SOLUTION\n",
"    # Calculate importance score for each output filter\n",
"    # Use L2 norm of entire filter as importance measure\n",
"    out_channels = conv_weights.shape[0]\n",
"    filter_norms = []\n",
"\n",
"    for i in range(out_channels):\n",
"        filter_weights = conv_weights[i]  # Shape: (in_channels, H, W)\n",
"        l2_norm = np.linalg.norm(filter_weights)\n",
"        filter_norms.append(l2_norm)\n",
"\n",
"    filter_norms = np.array(filter_norms)\n",
"\n",
"    # Determine how many filters to keep\n",
"    num_filters_to_keep = int(out_channels * (1 - sparsity))\n",
"    num_filters_to_keep = max(1, num_filters_to_keep)  # Keep at least 1 filter\n",
"\n",
"    # Find indices of top filters to keep\n",
"    top_filter_indices = np.argsort(filter_norms)[-num_filters_to_keep:]\n",
"    top_filter_indices.sort()  # Keep original ordering\n",
"\n",
"    # Create pruned weights by selecting only top filters\n",
"    pruned_weights = conv_weights[top_filter_indices]\n",
"\n",
"    # Calculate statistics\n",
"    actual_sparsity = 1 - (num_filters_to_keep / out_channels)\n",
"\n",
"    stats = {\n",
"        'original_filters': out_channels,\n",
"        'remaining_filters': num_filters_to_keep,\n",
"        'pruned_filters': out_channels - num_filters_to_keep,\n",
"        'target_sparsity': sparsity,\n",
"        'actual_sparsity': actual_sparsity,\n",
"        'compression_ratio': out_channels / num_filters_to_keep,\n",
"        'filter_norms': filter_norms,\n",
"        'kept_filter_indices': top_filter_indices.tolist()\n",
"    }\n",
"\n",
"    return pruned_weights, top_filter_indices.tolist(), stats\n",
"    # END SOLUTION\n",
"\n",
"def compare_structured_vs_unstructured(conv_weights: np.ndarray, sparsity: float = 0.5):\n",
"    \"\"\"\n",
"    Compare structured vs unstructured pruning on the same layer.\n",
"    \"\"\"\n",
"    print(\"🔬 Structured vs Unstructured Pruning Comparison\")\n",
"    print(\"=\" * 60)\n",
"\n",
"    # Unstructured pruning\n",
"    pruner = MagnitudePruner()\n",
"    unstructured_pruned, unstructured_mask, unstructured_stats = pruner.prune(conv_weights, sparsity)\n",
"\n",
"    # Structured pruning\n",
"    structured_pruned, kept_filters, structured_stats = prune_conv_filters(conv_weights, sparsity)\n",
"\n",
"    print(\"Unstructured Pruning:\")\n",
"    print(f\"  Original shape: {conv_weights.shape}\")\n",
"    print(f\"  Pruned shape: {unstructured_pruned.shape} (same)\")\n",
"    print(f\"  Sparsity: {unstructured_stats['actual_sparsity']:.1%}\")\n",
"    print(f\"  Compression: {unstructured_stats['compression_ratio']:.1f}x\")\n",
"    print(f\"  Zero elements: {np.sum(unstructured_pruned == 0):,}\")\n",
"\n",
"    print(\"\\nStructured Pruning:\")\n",
"    print(f\"  Original shape: {conv_weights.shape}\")\n",
"    print(f\"  Pruned shape: {structured_pruned.shape}\")\n",
"    print(f\"  Sparsity: {structured_stats['actual_sparsity']:.1%}\")\n",
"    print(f\"  Compression: {structured_stats['compression_ratio']:.1f}x\")\n",
"    print(f\"  Filters removed: {structured_stats['pruned_filters']}\")\n",
"\n",
"    print(\"\\n💡 Key Differences:\")\n",
"    print(\"  • Unstructured: Irregular sparsity, requires sparse kernels\")\n",
"    print(\"  • Structured: Regular reduction, standard dense computation\")\n",
"    print(\"  • Hardware: Structured pruning provides actual speedup\")\n",
"    print(\"  • Memory: Structured pruning reduces memory footprint\")\n",
"\n",
"    return {\n",
"        'unstructured': (unstructured_pruned, unstructured_stats),\n",
"        'structured': (structured_pruned, structured_stats)\n",
"    }"
]
},
{
"cell_type": "markdown",
"id": "15339fed",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"### Test: Structured Pruning Implementation\n",
"\n",
"Let's verify structured pruning works correctly and compare it with unstructured pruning."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d9952bab",
"metadata": {
"nbgrader": {
"grade": true,
"grade_id": "test-structured-pruning",
"locked": false,
"points": 15,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"def test_structured_pruning():\n",
"    \"\"\"Test structured pruning implementation.\"\"\"\n",
"    print(\"Testing structured pruning...\")\n",
"\n",
"    # Create sample conv weights: (out_channels, in_channels, H, W)\n",
"    np.random.seed(42)\n",
"    conv_weights = np.random.normal(0, 0.1, (8, 4, 3, 3))\n",
"\n",
"    # Test structured pruning\n",
"    pruned_weights, kept_filters, stats = prune_conv_filters(conv_weights, sparsity=0.5)\n",
"\n",
"    print(f\"Original shape: {conv_weights.shape}\")\n",
"    print(f\"Pruned shape: {pruned_weights.shape}\")\n",
"    print(f\"Kept filters: {kept_filters}\")\n",
"    print(f\"Stats: {stats}\")\n",
"\n",
"    # Verify output shape is correct\n",
"    expected_filters = int(8 * (1 - 0.5))  # 50% sparsity = keep 50% of filters\n",
"    assert pruned_weights.shape[0] == expected_filters, f\"Should keep {expected_filters} filters\"\n",
"    assert pruned_weights.shape[1:] == conv_weights.shape[1:], \"Other dimensions should match\"\n",
"\n",
"    # Verify kept filters are the strongest ones\n",
"    filter_norms = [np.linalg.norm(conv_weights[i]) for i in range(8)]\n",
"    top_indices = np.argsort(filter_norms)[-expected_filters:]\n",
"    top_indices.sort()\n",
"    assert list(top_indices) == list(kept_filters), \"Kept filters should be the highest-norm ones\"\n",
"\n",
"    for i, kept_idx in enumerate(kept_filters):\n",
"        # Verify the pruned weight matches original filter\n",
"        np.testing.assert_array_equal(\n",
"            pruned_weights[i],\n",
"            conv_weights[kept_idx],\n",
"            f\"Filter {i} should match original filter {kept_idx}\"\n",
"        )\n",
"\n",
"    # Test comparison function\n",
"    comparison = compare_structured_vs_unstructured(conv_weights, 0.5)\n",
"\n",
"    # Verify both methods produce different results\n",
"    unstructured_result = comparison['unstructured'][0]\n",
"    structured_result = comparison['structured'][0]\n",
"\n",
"    assert unstructured_result.shape == conv_weights.shape, \"Unstructured keeps same shape\"\n",
"    assert structured_result.shape[0] < conv_weights.shape[0], \"Structured reduces filters\"\n",
"\n",
"    print(\"✅ Structured pruning test passed!\")\n",
"\n",
"test_structured_pruning()"
]
},
{
"cell_type": "markdown",
"id": "7bb0d7d8",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"## Part 4: Sparse Neural Networks - Efficient Computation\n",
"\n",
"Pruning creates sparse networks, but how do we compute with them efficiently? We need sparse linear layers that skip computation for zero weights.\n",
"\n",
"### Sparse Computation Challenges:\n",
"- **Memory layout**: How to store only non-zero weights efficiently\n",
"- **Computation patterns**: Skip multiply-add operations for zero weights\n",
"- **Hardware support**: Most hardware isn't optimized for arbitrary sparsity\n",
"- **Software optimization**: Need specialized sparse kernels for speedup"
]
},
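For reference, production sparse layers typically store only the non-zero weights in a compressed format such as CSR. A minimal sketch using `scipy.sparse` (an assumption — SciPy is not among this module's imports) shows that the compressed representation reproduces the dense result while storing far fewer values:

```python
import numpy as np
from scipy.sparse import csr_matrix

np.random.seed(0)
w = np.random.normal(0, 0.1, (8, 16))             # dense weight matrix
w[np.abs(w) < np.percentile(np.abs(w), 70)] = 0.0  # magnitude-prune ~70%

w_csr = csr_matrix(w)                # CSR stores only the non-zero entries
x = np.random.normal(0, 1, (4, 16))  # batch of inputs

dense_out = x @ w.T                  # dense matmul (reference)
sparse_out = (w_csr @ x.T).T         # CSR matmul skips zero weights

print(f"stored values: {w_csr.nnz} of {w.size}")
assert np.allclose(dense_out, sparse_out)
```

This is the storage idea behind the "Memory layout" bullet above; whether it is faster than dense matmul depends on the sparsity level and the kernel, which the class below explores.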
{
"cell_type": "code",
"execution_count": null,
"id": "3cc82880",
"metadata": {
"lines_to_next_cell": 1,
"nbgrader": {
"grade": false,
"grade_id": "sparse-computation",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"#| export\n",
"class SparseLinear:\n",
"    \"\"\"\n",
"    Sparse linear layer that efficiently computes with pruned weights.\n",
"\n",
"    This demonstrates how to build sparse computation systems\n",
"    that actually achieve speedup from sparsity.\n",
"    \"\"\"\n",
"\n",
"    def __init__(self, in_features: int, out_features: int):\n",
"        # BEGIN SOLUTION\n",
"        self.in_features = in_features\n",
"        self.out_features = out_features\n",
"\n",
"        # Dense weights (will be pruned)\n",
"        self.dense_weights = None\n",
"        self.bias = None\n",
"\n",
"        # Sparse representation\n",
"        self.sparse_weights = None\n",
"        self.mask = None\n",
"        self.sparsity = 0.0\n",
"\n",
"        # Performance tracking\n",
"        self.dense_ops = 0\n",
"        self.sparse_ops = 0\n",
"        # END SOLUTION\n",
"\n",
"    def load_dense_weights(self, weights: np.ndarray, bias: Optional[np.ndarray] = None):\n",
"        \"\"\"Load dense weights before pruning.\"\"\"\n",
"        # BEGIN SOLUTION\n",
"        assert weights.shape == (self.out_features, self.in_features), \\\n",
"            f\"Weight shape mismatch: expected {(self.out_features, self.in_features)}, got {weights.shape}\"\n",
"        self.dense_weights = weights.copy()\n",
"        self.bias = bias.copy() if bias is not None else np.zeros(self.out_features)\n",
"        # END SOLUTION\n",
"\n",
"    def prune_weights(self, sparsity: float = 0.7):\n",
"        \"\"\"Prune weights using magnitude-based pruning.\"\"\"\n",
"        # BEGIN SOLUTION\n",
"        if self.dense_weights is None:\n",
"            raise ValueError(\"Must load dense weights before pruning\")\n",
"\n",
"        # Use magnitude pruner\n",
"        pruner = MagnitudePruner()\n",
"        self.sparse_weights, self.mask, stats = pruner.prune(self.dense_weights, sparsity)\n",
"        self.sparsity = stats['actual_sparsity']\n",
"\n",
"        print(f\"✂️ Pruned {self.sparsity:.1%} of weights\")\n",
"        print(f\"  Compression: {stats['compression_ratio']:.1f}x\")\n",
"        # END SOLUTION\n",
"\n",
"    def forward_dense(self, x: np.ndarray) -> np.ndarray:\n",
"        \"\"\"Forward pass using dense weights (reference).\"\"\"\n",
"        # BEGIN SOLUTION\n",
"        if self.dense_weights is None:\n",
"            raise ValueError(\"Dense weights not loaded\")\n",
"\n",
"        # Count operations\n",
"        self.dense_ops = self.in_features * self.out_features\n",
"\n",
"        # Standard matrix multiply: y = x @ W^T + b\n",
"        output = np.dot(x, self.dense_weights.T) + self.bias\n",
"        return output\n",
"        # END SOLUTION\n",
"\n",
"    def forward_sparse_naive(self, x: np.ndarray) -> np.ndarray:\n",
"        \"\"\"Forward pass using sparse weights (naive implementation).\"\"\"\n",
"        # BEGIN SOLUTION\n",
"        if self.sparse_weights is None:\n",
"            raise ValueError(\"Weights not pruned yet\")\n",
"\n",
"        # Count actual operations (skip zero weights)\n",
"        self.sparse_ops = int(np.sum(self.mask))\n",
"\n",
"        # Naive sparse computation: still do full matrix multiply\n",
"        # (Real sparse implementations would use CSR/CSC formats)\n",
"        output = np.dot(x, self.sparse_weights.T) + self.bias\n",
"        return output\n",
"        # END SOLUTION\n",
"\n",
"    def forward_sparse_optimized(self, x: np.ndarray) -> np.ndarray:\n",
"        \"\"\"Forward pass using optimized sparse computation.\"\"\"\n",
"        # BEGIN SOLUTION\n",
"        if self.sparse_weights is None:\n",
"            raise ValueError(\"Weights not pruned yet\")\n",
"\n",
"        # Find non-zero weights\n",
"        nonzero_indices = np.nonzero(self.sparse_weights)\n",
"\n",
"        # Count actual operations\n",
"        self.sparse_ops = len(nonzero_indices[0])\n",
"\n",
"        # Optimized sparse computation (simulated)\n",
"        # In practice, this would use specialized sparse matrix libraries\n",
"        output = np.zeros((x.shape[0], self.out_features))\n",
"\n",
"        # Only compute for non-zero weights\n",
"        for i in range(len(nonzero_indices[0])):\n",
"            row = nonzero_indices[0][i]\n",
"            col = nonzero_indices[1][i]\n",
"            weight = self.sparse_weights[row, col]\n",
"\n",
"            # Accumulate: output[batch, row] += input[batch, col] * weight\n",
"            output[:, row] += x[:, col] * weight\n",
"\n",
"        # Add bias\n",
"        output += self.bias\n",
"\n",
"        return output\n",
"        # END SOLUTION\n",
"\n",
"    def benchmark_speedup(self, batch_size: int = 32, iterations: int = 100) -> Dict:\n",
"        \"\"\"Benchmark sparse vs dense computation speedup.\"\"\"\n",
"        # BEGIN SOLUTION\n",
"        import time\n",
"\n",
"        # Create test input\n",
"        x = np.random.normal(0, 1, (batch_size, self.in_features))\n",
"\n",
"        # Benchmark dense forward pass\n",
"        start_time = time.time()\n",
"        for _ in range(iterations):\n",
"            _ = self.forward_dense(x)\n",
"        dense_time = time.time() - start_time\n",
"\n",
"        # Benchmark sparse forward pass\n",
"        start_time = time.time()\n",
"        for _ in range(iterations):\n",
"            _ = self.forward_sparse_naive(x)\n",
"        sparse_time = time.time() - start_time\n",
|
||
" \n",
|
||
" # Calculate speedup metrics\n",
|
||
" theoretical_speedup = self.dense_ops / self.sparse_ops if self.sparse_ops > 0 else 1\n",
|
||
" actual_speedup = dense_time / sparse_time if sparse_time > 0 else 1\n",
|
||
" \n",
|
||
" return {\n",
|
||
" 'dense_time_ms': dense_time * 1000,\n",
|
||
" 'sparse_time_ms': sparse_time * 1000,\n",
|
||
" 'dense_ops': self.dense_ops,\n",
|
||
" 'sparse_ops': self.sparse_ops,\n",
|
||
" 'theoretical_speedup': theoretical_speedup,\n",
|
||
" 'actual_speedup': actual_speedup,\n",
|
||
" 'sparsity': self.sparsity,\n",
|
||
" 'efficiency': actual_speedup / theoretical_speedup\n",
|
||
" }\n",
|
||
" # END SOLUTION"
|
||
]
|
||
},
|
||
{
"cell_type": "markdown",
"id": "0ffe0018",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"### Test: Sparse Neural Network Implementation\n",
"\n",
"Let's verify our sparse neural network works correctly and measure performance."
]
},
|
||
{
"cell_type": "code",
"execution_count": null,
"id": "8d118ef4",
"metadata": {
"nbgrader": {
"grade": true,
"grade_id": "test-sparse-neural-network",
"locked": false,
"points": 15,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"def test_sparse_neural_network():\n",
"    \"\"\"Test sparse neural network implementation.\"\"\"\n",
"    print(\"Testing sparse neural network...\")\n",
"    \n",
"    # Create sparse linear layer\n",
"    sparse_layer = SparseLinear(256, 128)\n",
"    \n",
"    # Load random weights\n",
"    np.random.seed(42)\n",
"    weights = np.random.normal(0, 0.1, (128, 256))\n",
"    bias = np.random.normal(0, 0.01, 128)\n",
"    sparse_layer.load_dense_weights(weights, bias)\n",
"    \n",
"    # Prune weights\n",
"    sparse_layer.prune_weights(sparsity=0.8)  # 80% sparsity\n",
"    \n",
"    # Test forward passes\n",
"    x = np.random.normal(0, 1, (4, 256))  # Batch of 4\n",
"    \n",
"    # Compare outputs\n",
"    output_dense = sparse_layer.forward_dense(x)\n",
"    output_sparse_naive = sparse_layer.forward_sparse_naive(x)\n",
"    output_sparse_opt = sparse_layer.forward_sparse_optimized(x)\n",
"    \n",
"    print(f\"Output shapes:\")\n",
"    print(f\"  Dense: {output_dense.shape}\")\n",
"    print(f\"  Sparse naive: {output_sparse_naive.shape}\")\n",
"    print(f\"  Sparse optimized: {output_sparse_opt.shape}\")\n",
"    \n",
"    # Verify outputs have correct shape\n",
"    expected_shape = (4, 128)\n",
"    assert output_dense.shape == expected_shape, \"Dense output shape incorrect\"\n",
"    assert output_sparse_naive.shape == expected_shape, \"Sparse naive output shape incorrect\"\n",
"    assert output_sparse_opt.shape == expected_shape, \"Sparse optimized output shape incorrect\"\n",
"    \n",
"    # Verify sparse outputs match expected computation\n",
"    # Sparse naive should match dense computation on pruned weights\n",
"    np.testing.assert_allclose(\n",
"        output_sparse_naive, output_sparse_opt, rtol=1e-5,\n",
"        err_msg=\"Sparse naive and optimized should produce same results\"\n",
"    )\n",
"    \n",
"    # The outputs shouldn't be identical (due to pruning) but should be reasonably close\n",
"    relative_error = np.mean(np.abs(output_dense - output_sparse_naive)) / np.mean(np.abs(output_dense))\n",
"    print(f\"Relative error from pruning: {relative_error:.3%}\")\n",
"    # With 80% sparsity, relative error can be substantial but model should still function\n",
"    assert relative_error < 1.0, \"Error from pruning shouldn't completely destroy the model\"\n",
"    \n",
"    # Benchmark performance\n",
"    benchmark = sparse_layer.benchmark_speedup(batch_size=32, iterations=50)\n",
"    \n",
"    print(f\"\\nPerformance Benchmark:\")\n",
"    print(f\"  Sparsity: {benchmark['sparsity']:.1%}\")\n",
"    print(f\"  Dense ops: {benchmark['dense_ops']:,}\")\n",
"    print(f\"  Sparse ops: {benchmark['sparse_ops']:,}\")\n",
"    print(f\"  Theoretical speedup: {benchmark['theoretical_speedup']:.1f}x\")\n",
"    print(f\"  Actual speedup: {benchmark['actual_speedup']:.1f}x\")\n",
"    print(f\"  Efficiency: {benchmark['efficiency']:.1%}\")\n",
"    \n",
"    # Verify operation counting\n",
"    expected_dense_ops = 256 * 128\n",
"    assert benchmark['dense_ops'] == expected_dense_ops, \"Dense op count incorrect\"\n",
"    assert benchmark['sparse_ops'] < benchmark['dense_ops'], \"Sparse should use fewer ops\"\n",
"    \n",
"    print(\"✅ Sparse neural network test passed!\")\n",
"\n",
"test_sparse_neural_network()"
]
},
|
||
{
"cell_type": "markdown",
"id": "e3714629",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"## Part 5: Model Compression Pipeline - End-to-End Pruning\n",
"\n",
"Now let's build a complete model compression pipeline that can prune entire neural networks layer by layer, maintaining the overall architecture while reducing parameters.\n",
"\n",
"### Production Compression Pipeline:\n",
"1. **Model analysis**: Identify pruneable layers and sensitivity\n",
"2. **Layer-wise pruning**: Apply different sparsity levels per layer\n",
"3. **Accuracy validation**: Ensure pruning doesn't degrade performance\n",
"4. **Performance benchmarking**: Measure actual compression benefits\n",
"5. **Export for deployment**: Package compressed model for inference"
]
},
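{
"cell_type": "markdown",
"id": "pipeline-toy-sketch",
"metadata": {},
"source": [
"The per-layer loop at the heart of steps 1-2 can be sketched in a few lines of plain NumPy. This is a toy illustration with a hypothetical two-layer model and a hypothetical `magnitude_prune` helper, independent of the `ModelCompressor` built below:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def magnitude_prune(w, sparsity):\n",
"    # Keep only the largest-|w| fraction of weights; zero the rest\n",
"    t = np.quantile(np.abs(w), sparsity)\n",
"    return w * (np.abs(w) > t)\n",
"\n",
"np.random.seed(0)\n",
"model = {'fc1': np.random.randn(64, 128), 'fc2': np.random.randn(10, 64)}\n",
"pruned = {name: magnitude_prune(w, 0.8) for name, w in model.items()}\n",
"for name, w in pruned.items():\n",
"    print(name, 'sparsity:', np.mean(w == 0))\n",
"```"
]
},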
|
||
{
"cell_type": "code",
"execution_count": null,
"id": "4dd53ba3",
"metadata": {
"lines_to_next_cell": 1,
"nbgrader": {
"grade": false,
"grade_id": "compression-pipeline",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"#| export\n",
"class ModelCompressor:\n",
"    \"\"\"\n",
"    Complete model compression pipeline for neural networks.\n",
"    \n",
"    This class implements production-ready compression workflows\n",
"    that can handle complex models with mixed layer types.\n",
"    \"\"\"\n",
"    \n",
"    def __init__(self):\n",
"        # BEGIN SOLUTION\n",
"        self.original_model = {}\n",
"        self.compressed_model = {}\n",
"        self.compression_stats = {}\n",
"        self.layer_sensitivities = {}\n",
"        # END SOLUTION\n",
"    \n",
"    def analyze_model_for_compression(self, model_weights: Dict[str, np.ndarray]) -> Dict[str, Any]:\n",
"        \"\"\"\n",
"        Analyze model structure to determine compression strategy.\n",
"        \n",
"        Args:\n",
"            model_weights: Dictionary mapping layer names to weight arrays\n",
"        \n",
"        Returns:\n",
"            analysis: Compression analysis and recommendations\n",
"        \"\"\"\n",
"        # BEGIN SOLUTION\n",
"        analysis = {\n",
"            'layers': {},\n",
"            'total_params': 0,\n",
"            'compressible_params': 0,\n",
"            'recommendations': {}\n",
"        }\n",
"        \n",
"        print(\"🔍 Model Compression Analysis\")\n",
"        print(\"=\" * 50)\n",
"        print(\"Layer | Type | Parameters | Natural Sparsity | Recommendation\")\n",
"        print(\"-\" * 70)\n",
"        \n",
"        for layer_name, weights in model_weights.items():\n",
"            layer_analysis = analyze_weight_redundancy(weights, f\"Layer {layer_name}\")\n",
"            \n",
"            # Determine layer type from shape\n",
"            if len(weights.shape) == 4:  # Conv layer: (out, in, H, W)\n",
"                layer_type = \"Conv2D\"\n",
"                recommended_sparsity = 0.6  # Conservative for conv layers\n",
"            elif len(weights.shape) == 2:  # Dense layer: (out, in)\n",
"                layer_type = \"Dense\"\n",
"                recommended_sparsity = 0.8  # Aggressive for dense layers\n",
"            else:\n",
"                layer_type = \"Other\"\n",
"                recommended_sparsity = 0.5  # Safe default\n",
"            \n",
"            analysis['layers'][layer_name] = {\n",
"                'type': layer_type,\n",
"                'shape': weights.shape,\n",
"                'parameters': weights.size,\n",
"                'natural_sparsity': layer_analysis['natural_sparsity'],\n",
"                'recommended_sparsity': recommended_sparsity\n",
"            }\n",
"            \n",
"            analysis['total_params'] += weights.size\n",
"            if layer_type in ['Conv2D', 'Dense']:\n",
"                analysis['compressible_params'] += weights.size\n",
"            \n",
"            print(f\"{layer_name:12} | {layer_type:7} | {weights.size:10,} | \"\n",
"                  f\"{layer_analysis['natural_sparsity']:12.1f}% | {recommended_sparsity:.0%}\")\n",
"        \n",
"        # Calculate overall compression potential\n",
"        compression_potential = analysis['compressible_params'] / analysis['total_params']\n",
"        \n",
"        print(f\"\\n📊 Model Summary:\")\n",
"        print(f\"  Total parameters: {analysis['total_params']:,}\")\n",
"        print(f\"  Compressible parameters: {analysis['compressible_params']:,}\")\n",
"        print(f\"  Compression potential: {compression_potential:.1%}\")\n",
"        \n",
"        analysis['compression_potential'] = compression_potential\n",
"        return analysis\n",
"        # END SOLUTION\n",
"    \n",
"    def compress_model(self, model_weights: Dict[str, np.ndarray],\n",
"                       layer_sparsities: Optional[Dict[str, float]] = None) -> Dict[str, Any]:\n",
"        \"\"\"\n",
"        Compress entire model using layer-wise pruning.\n",
"        \n",
"        Args:\n",
"            model_weights: Dictionary mapping layer names to weights\n",
"            layer_sparsities: Optional per-layer sparsity targets\n",
"        \n",
"        Returns:\n",
"            compressed_model: Compressed weights and statistics\n",
"        \"\"\"\n",
"        # BEGIN SOLUTION\n",
"        if layer_sparsities is None:\n",
"            # Use default sparsities based on layer analysis\n",
"            analysis = self.analyze_model_for_compression(model_weights)\n",
"            layer_sparsities = {\n",
"                name: info['recommended_sparsity']\n",
"                for name, info in analysis['layers'].items()\n",
"            }\n",
"        \n",
"        print(f\"\\n⚙️ Compressing Model Layers\")\n",
"        print(\"=\" * 50)\n",
"        \n",
"        compressed_weights = {}\n",
"        total_original_params = 0\n",
"        total_remaining_params = 0\n",
"        \n",
"        for layer_name, weights in model_weights.items():\n",
"            sparsity = layer_sparsities.get(layer_name, 0.7)  # Default 70%\n",
"            \n",
"            print(f\"\\n🔧 Compressing {layer_name} (target: {sparsity:.0%} sparsity)...\")\n",
"            \n",
"            # Apply magnitude-based pruning\n",
"            pruner = MagnitudePruner()\n",
"            pruned_weights, mask, stats = pruner.prune(weights, sparsity)\n",
"            \n",
"            compressed_weights[layer_name] = {\n",
"                'weights': pruned_weights,\n",
"                'mask': mask,\n",
"                'original_shape': weights.shape,\n",
"                'stats': stats\n",
"            }\n",
"            \n",
"            total_original_params += stats['original_params']\n",
"            total_remaining_params += stats['remaining_params']\n",
"            \n",
"            print(f\"  Sparsity achieved: {stats['actual_sparsity']:.1%}\")\n",
"            print(f\"  Compression: {stats['compression_ratio']:.1f}x\")\n",
"        \n",
"        # Calculate overall compression\n",
"        overall_compression = total_original_params / total_remaining_params if total_remaining_params > 0 else 1\n",
"        overall_sparsity = 1 - (total_remaining_params / total_original_params)\n",
"        \n",
"        self.compressed_model = compressed_weights\n",
"        self.compression_stats = {\n",
"            'total_original_params': total_original_params,\n",
"            'total_remaining_params': total_remaining_params,\n",
"            'overall_sparsity': overall_sparsity,\n",
"            'overall_compression': overall_compression,\n",
"            'layer_sparsities': layer_sparsities\n",
"        }\n",
"        \n",
"        print(f\"\\n✅ Model Compression Complete!\")\n",
"        print(f\"  Original parameters: {total_original_params:,}\")\n",
"        print(f\"  Remaining parameters: {total_remaining_params:,}\")\n",
"        print(f\"  Overall sparsity: {overall_sparsity:.1%}\")\n",
"        print(f\"  Overall compression: {overall_compression:.1f}x\")\n",
"        \n",
"        return compressed_weights\n",
"        # END SOLUTION\n",
"    \n",
"    def validate_compression_quality(self, original_weights: Dict[str, np.ndarray],\n",
"                                     compressed_model: Dict[str, Any]) -> Dict[str, Any]:\n",
"        \"\"\"\n",
"        Validate that compression doesn't degrade the model too much.\n",
"        \n",
"        This is a simplified validation - in practice you'd run full model evaluation.\n",
"        \"\"\"\n",
"        # BEGIN SOLUTION\n",
"        validation_results = {\n",
"            'layer_quality': {},\n",
"            'overall_quality': {},\n",
"            'quality_score': 0.0\n",
"        }\n",
"        \n",
"        print(f\"\\n✅ Validating Compression Quality\")\n",
"        print(\"=\" * 50)\n",
"        print(\"Layer | Weight Error | Norm Preservation | Quality\")\n",
"        print(\"-\" * 55)\n",
"        \n",
"        layer_scores = []\n",
"        \n",
"        for layer_name in original_weights.keys():\n",
"            original = original_weights[layer_name]\n",
"            compressed_info = compressed_model[layer_name]\n",
"            compressed = compressed_info['weights']\n",
"            \n",
"            # Calculate quality metrics\n",
"            weight_diff = np.abs(original - compressed)\n",
"            mean_error = weight_diff.mean()\n",
"            max_error = weight_diff.max()\n",
"            \n",
"            # Norm preservation\n",
"            orig_norm = np.linalg.norm(original)\n",
"            comp_norm = np.linalg.norm(compressed)\n",
"            norm_preservation = comp_norm / orig_norm if orig_norm > 0 else 1.0\n",
"            \n",
"            # Simple quality score (higher is better)\n",
"            # Penalize high error, reward norm preservation\n",
"            quality_score = norm_preservation * (1 - mean_error / (np.abs(original).mean() + 1e-8))\n",
"            quality_score = max(0, min(1, quality_score))  # Clamp to [0, 1]\n",
"            \n",
"            validation_results['layer_quality'][layer_name] = {\n",
"                'mean_error': mean_error,\n",
"                'max_error': max_error,\n",
"                'norm_preservation': norm_preservation,\n",
"                'quality_score': quality_score\n",
"            }\n",
"            \n",
"            layer_scores.append(quality_score)\n",
"            \n",
"            print(f\"{layer_name:12} | {mean_error:.6f} | {norm_preservation:13.3f} | {quality_score:.3f}\")\n",
"        \n",
"        # Overall quality\n",
"        overall_quality_score = np.mean(layer_scores)\n",
"        validation_results['overall_quality'] = {\n",
"            'mean_quality_score': overall_quality_score,\n",
"            'quality_std': np.std(layer_scores),\n",
"            'min_quality': np.min(layer_scores),\n",
"            'max_quality': np.max(layer_scores)\n",
"        }\n",
"        validation_results['quality_score'] = overall_quality_score\n",
"        \n",
"        print(f\"\\n🎯 Overall Quality Score: {overall_quality_score:.3f}\")\n",
"        if overall_quality_score > 0.8:\n",
"            print(\"  ✅ Excellent compression quality!\")\n",
"        elif overall_quality_score > 0.6:\n",
"            print(\"  ⚠️ Acceptable compression quality\")\n",
"        else:\n",
"            print(\"  ❌ Poor compression quality - consider lower sparsity\")\n",
"        \n",
"        return validation_results\n",
"        # END SOLUTION"
]
},
|
||
{
"cell_type": "markdown",
"id": "3f625377",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"### Test: Model Compression Pipeline\n",
"\n",
"Let's verify our complete compression pipeline works on a multi-layer model."
]
},
|
||
{
"cell_type": "code",
"execution_count": null,
"id": "61b92386",
"metadata": {
"nbgrader": {
"grade": true,
"grade_id": "test-compression-pipeline",
"locked": false,
"points": 20,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"def test_compression_pipeline():\n",
"    \"\"\"Test complete model compression pipeline.\"\"\"\n",
"    print(\"Testing model compression pipeline...\")\n",
"    \n",
"    # Create sample multi-layer model\n",
"    np.random.seed(42)\n",
"    model_weights = {\n",
"        'conv1': np.random.normal(0, 0.02, (32, 3, 3, 3)),   # Conv: 32 filters, 3 input channels\n",
"        'conv2': np.random.normal(0, 0.02, (64, 32, 3, 3)),  # Conv: 64 filters, 32 input channels\n",
"        'fc1': np.random.normal(0, 0.01, (512, 1024)),       # Dense: 1024 → 512\n",
"        'fc2': np.random.normal(0, 0.01, (10, 512)),         # Dense: 512 → 10 (output layer)\n",
"    }\n",
"    \n",
"    # Create compressor\n",
"    compressor = ModelCompressor()\n",
"    \n",
"    # Step 1: Analyze model\n",
"    analysis = compressor.analyze_model_for_compression(model_weights)\n",
"    \n",
"    assert analysis['total_params'] > 0, \"Should count total parameters\"\n",
"    assert len(analysis['layers']) == 4, \"Should analyze all 4 layers\"\n",
"    assert 'conv1' in analysis['layers'], \"Should analyze conv1\"\n",
"    assert 'fc1' in analysis['layers'], \"Should analyze fc1\"\n",
"    \n",
"    # Verify layer type detection\n",
"    assert analysis['layers']['conv1']['type'] == 'Conv2D', \"Should detect conv layers\"\n",
"    assert analysis['layers']['fc1']['type'] == 'Dense', \"Should detect dense layers\"\n",
"    \n",
"    # Step 2: Compress model with custom sparsities\n",
"    custom_sparsities = {\n",
"        'conv1': 0.5,  # Conservative for first conv layer\n",
"        'conv2': 0.6,  # Moderate for second conv layer\n",
"        'fc1': 0.8,    # Aggressive for large dense layer\n",
"        'fc2': 0.3     # Conservative for output layer\n",
"    }\n",
"    \n",
"    compressed_model = compressor.compress_model(model_weights, custom_sparsities)\n",
"    \n",
"    # Verify compression results\n",
"    assert len(compressed_model) == 4, \"Should compress all layers\"\n",
"    for layer_name in model_weights.keys():\n",
"        assert layer_name in compressed_model, f\"Missing compressed {layer_name}\"\n",
"        compressed_info = compressed_model[layer_name]\n",
"        assert 'weights' in compressed_info, \"Should have compressed weights\"\n",
"        assert 'mask' in compressed_info, \"Should have pruning mask\"\n",
"        assert 'stats' in compressed_info, \"Should have compression stats\"\n",
"    \n",
"    # Verify compression statistics\n",
"    stats = compressor.compression_stats\n",
"    assert stats['overall_compression'] > 2.0, \"Should achieve significant compression\"\n",
"    assert 0.5 <= stats['overall_sparsity'] <= 0.8, \"Overall sparsity should be reasonable\"\n",
"    \n",
"    # Step 3: Validate compression quality\n",
"    validation = compressor.validate_compression_quality(model_weights, compressed_model)\n",
"    \n",
"    assert 'layer_quality' in validation, \"Should validate each layer\"\n",
"    assert 'overall_quality' in validation, \"Should have overall quality metrics\"\n",
"    assert 0 <= validation['quality_score'] <= 1, \"Quality score should be normalized\"\n",
"    \n",
"    # Each layer should have quality metrics\n",
"    for layer_name in model_weights.keys():\n",
"        assert layer_name in validation['layer_quality'], f\"Missing quality for {layer_name}\"\n",
"        layer_quality = validation['layer_quality'][layer_name]\n",
"        assert 'norm_preservation' in layer_quality, \"Should measure norm preservation\"\n",
"        assert layer_quality['norm_preservation'] > 0, \"Norm preservation should be positive\"\n",
"    \n",
"    # Test that compressed weights are actually sparse\n",
"    for layer_name, compressed_info in compressed_model.items():\n",
"        compressed_weights = compressed_info['weights']\n",
"        sparsity = np.sum(compressed_weights == 0) / compressed_weights.size\n",
"        expected_sparsity = custom_sparsities[layer_name]\n",
"        \n",
"        # Allow some tolerance in sparsity\n",
"        assert abs(sparsity - expected_sparsity) < 0.1, f\"{layer_name} sparsity mismatch\"\n",
"    \n",
"    print(\"✅ Model compression pipeline test passed!\")\n",
"\n",
"test_compression_pipeline()"
]
},
|
||
{
"cell_type": "markdown",
"id": "3a61f4c6",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"## Part 6: Systems Analysis - Memory, Performance, and Deployment Impact\n",
"\n",
"Let's analyze compression from a systems engineering perspective, measuring the real-world impact on memory usage, inference speed, and deployment scenarios.\n",
"\n",
"### ML Systems Analysis: Why Pruning Enables Edge AI\n",
"\n",
"**Memory Complexity**: O(N × sparsity) storage reduction, where N = original parameters\n",
"**Computational Complexity**: Theoretical O(N × sparsity) speedup; actual speedup depends on hardware\n",
"**Cache Efficiency**: Smaller models fit in cache, reducing memory bandwidth bottlenecks\n",
"**Energy Efficiency**: Fewer operations = lower power consumption for mobile devices\n",
"**Deployment Enablement**: Makes models fit where they couldn't before"
]
},
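{
"cell_type": "markdown",
"id": "sparsity-storage-sketch",
"metadata": {},
"source": [
"As a quick numeric check of the O(N × sparsity) storage claim, the snippet below prunes a random matrix by magnitude and compares dense storage against a CSR-style estimate. It is a self-contained sketch with a hypothetical 1024×1024 layer, separate from the graded code in this module:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"np.random.seed(0)\n",
"W = np.random.normal(0, 0.1, (1024, 1024))  # hypothetical dense layer\n",
"sparsity = 0.7\n",
"\n",
"# Magnitude pruning: zero the smallest 70% of weights by absolute value\n",
"threshold = np.quantile(np.abs(W), sparsity)\n",
"mask = np.abs(W) > threshold\n",
"remaining = int(mask.sum())  # ~30% of W.size\n",
"\n",
"dense_mb = W.nbytes / 1e6\n",
"# CSR-style estimate: 8-byte value + 4-byte column index per nonzero, plus row pointers\n",
"sparse_mb = (remaining * 12 + (W.shape[0] + 1) * 4) / 1e6\n",
"print(f'dense {dense_mb:.1f} MB -> sparse ~{sparse_mb:.1f} MB')\n",
"```\n",
"\n",
"Note the sparse estimate is not 0.3× the dense size: index overhead means real sparse formats only pay off at sufficiently high sparsity."
]
},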
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "1afc2887",
|
||
"metadata": {
|
||
"lines_to_next_cell": 1,
|
||
"nbgrader": {
|
||
"grade": false,
|
||
"grade_id": "compression-systems-analysis",
|
||
"locked": false,
|
||
"schema_version": 3,
|
||
"solution": true,
|
||
"task": false
|
||
}
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"#| export\n",
|
||
"def profile_compression_memory():\n",
|
||
" \"\"\"\n",
|
||
" Profile memory usage patterns during model compression.\n",
|
||
" \n",
|
||
" This function demonstrates how compression affects memory footprint\n",
|
||
" and enables deployment on resource-constrained devices.\n",
|
||
" \"\"\"\n",
|
||
" import tracemalloc\n",
|
||
" \n",
|
||
" print(\"🔬 Memory Profiling: Model Compression\")\n",
|
||
" print(\"=\" * 50)\n",
|
||
" \n",
|
||
" # Start memory tracking\n",
|
||
" tracemalloc.start()\n",
|
||
" \n",
|
||
" # Create large model (simulating real CNN)\n",
|
||
" print(\"Creating large model weights...\")\n",
|
||
" model_weights = {\n",
|
||
" 'conv1': np.random.normal(0, 0.02, (128, 64, 3, 3)), # ~0.3M parameters\n",
|
||
" 'conv2': np.random.normal(0, 0.02, (256, 128, 3, 3)), # ~1.2M parameters \n",
|
||
" 'fc1': np.random.normal(0, 0.01, (1024, 4096)), # ~4.2M parameters\n",
|
||
" 'fc2': np.random.normal(0, 0.01, (10, 1024)), # ~10K parameters\n",
|
||
" }\n",
|
||
" \n",
|
||
" snapshot1 = tracemalloc.take_snapshot()\n",
|
||
" current, peak = tracemalloc.get_traced_memory()\n",
|
||
" print(f\"After model creation: {current / 1024 / 1024:.1f} MB current, {peak / 1024 / 1024:.1f} MB peak\")\n",
|
||
" \n",
|
||
" # Calculate original model size\n",
|
||
" original_params = sum(w.size for w in model_weights.values())\n",
|
||
" original_size_mb = sum(w.nbytes for w in model_weights.values()) / (1024 * 1024)\n",
|
||
" \n",
|
||
" print(f\"Original model: {original_params:,} parameters, {original_size_mb:.1f} MB\")\n",
|
||
" \n",
|
||
" # Compress model\n",
|
||
" print(\"\\nCompressing model...\")\n",
|
||
" compressor = ModelCompressor()\n",
|
||
" compressed_model = compressor.compress_model(model_weights)\n",
|
||
" \n",
|
||
" snapshot2 = tracemalloc.take_snapshot()\n",
|
||
" current, peak = tracemalloc.get_traced_memory()\n",
|
||
" print(f\"After compression: {current / 1024 / 1024:.1f} MB current, {peak / 1024 / 1024:.1f} MB peak\")\n",
|
||
" \n",
|
||
" # Calculate compressed model size\n",
|
||
" compressed_params = sum(\n",
|
||
" np.sum(info['weights'] != 0) \n",
|
||
" for info in compressed_model.values()\n",
|
||
" )\n",
|
||
" \n",
|
||
" # Estimate compressed storage (could use sparse formats)\n",
|
||
" compressed_size_mb = original_size_mb * (compressed_params / original_params)\n",
|
||
" \n",
|
||
" print(f\"\\n💾 Storage Analysis:\")\n",
|
||
" print(f\" Original: {original_params:,} parameters ({original_size_mb:.1f} MB)\")\n",
|
||
" print(f\" Compressed: {compressed_params:,} parameters ({compressed_size_mb:.1f} MB)\")\n",
|
||
" print(f\" Compression ratio: {original_params / compressed_params:.1f}x\")\n",
|
||
" print(f\" Size reduction: {original_size_mb / compressed_size_mb:.1f}x\")\n",
|
||
" print(f\" Storage savings: {original_size_mb - compressed_size_mb:.1f} MB\")\n",
|
||
" \n",
|
||
" tracemalloc.stop()\n",
|
||
" \n",
|
||
" return {\n",
|
||
" 'original_params': original_params,\n",
|
||
" 'compressed_params': compressed_params,\n",
|
||
" 'original_size_mb': original_size_mb,\n",
|
||
" 'compressed_size_mb': compressed_size_mb,\n",
|
||
" 'compression_ratio': original_params / compressed_params,\n",
|
||
" 'size_reduction': original_size_mb / compressed_size_mb\n",
|
||
" }\n",
|
||
"\n",
|
||
"def analyze_deployment_scenarios():\n",
|
||
" \"\"\"Analyze how compression enables different deployment scenarios.\"\"\"\n",
|
||
" print(\"\\n🚀 Compression Deployment Impact Analysis\")\n",
|
||
" print(\"=\" * 60)\n",
|
||
" \n",
|
||
" # Define deployment constraints\n",
|
||
" scenarios = [\n",
|
||
" {\n",
|
||
" 'name': 'Mobile Phone',\n",
|
||
" 'memory_limit_mb': 100,\n",
|
||
" 'compute_limit_gflops': 10,\n",
|
||
" 'power_sensitive': True,\n",
|
||
" 'description': 'On-device inference for camera apps'\n",
|
||
" },\n",
|
||
" {\n",
|
||
" 'name': 'IoT Device',\n",
|
||
" 'memory_limit_mb': 20,\n",
|
||
" 'compute_limit_gflops': 1,\n",
|
||
" 'power_sensitive': True,\n",
|
||
" 'description': 'Smart sensor with microcontroller'\n",
|
||
" },\n",
|
||
" {\n",
|
||
" 'name': 'Edge Server',\n",
|
||
" 'memory_limit_mb': 1000,\n",
|
||
" 'compute_limit_gflops': 100,\n",
|
||
" 'power_sensitive': False,\n",
|
||
" 'description': 'Local inference server for privacy'\n",
|
||
" },\n",
|
||
" {\n",
|
||
" 'name': 'Wearable',\n",
|
||
" 'memory_limit_mb': 10,\n",
|
||
" 'compute_limit_gflops': 0.5,\n",
|
||
" 'power_sensitive': True,\n",
|
||
" 'description': 'Smartwatch health monitoring'\n",
|
||
" }\n",
|
||
" ]\n",
|
||
" \n",
|
||
" # Model sizes at different compression levels\n",
|
||
" model_configs = [\n",
|
||
" {'name': 'Dense Model', 'size_mb': 200, 'gflops': 50, 'accuracy': 95.0},\n",
|
||
" {'name': '50% Sparse', 'size_mb': 100, 'gflops': 25, 'accuracy': 94.5},\n",
|
||
" {'name': '70% Sparse', 'size_mb': 60, 'gflops': 15, 'accuracy': 93.8},\n",
|
||
" {'name': '90% Sparse', 'size_mb': 20, 'gflops': 5, 'accuracy': 91.2},\n",
|
||
" ]\n",
|
||
" \n",
|
||
" print(\"Scenario | Memory | Compute | Dense | 50% | 70% | 90% | Best Option\")\n",
|
||
" print(\"-\" * 80)\n",
|
||
" \n",
|
||
" for scenario in scenarios:\n",
|
||
" name = scenario['name']\n",
|
||
" mem_limit = scenario['memory_limit_mb']\n",
|
||
" compute_limit = scenario['compute_limit_gflops']\n",
|
||
" \n",
|
||
" # Check which model configurations fit\n",
|
||
" viable_models = []\n",
|
||
" for config in model_configs:\n",
|
||
" fits_memory = config['size_mb'] <= mem_limit\n",
|
||
" fits_compute = config['gflops'] <= compute_limit\n",
|
||
" \n",
|
||
" if fits_memory and fits_compute:\n",
|
||
" viable_models.append(config['name'])\n",
|
||
" \n",
|
||
" # Determine best option\n",
|
||
" if not viable_models:\n",
|
||
" best_option = \"None fit!\"\n",
|
||
" else:\n",
|
||
" # Choose highest accuracy among viable options\n",
|
||
" viable_configs = [c for c in model_configs if c['name'] in viable_models]\n",
|
||
" best_config = max(viable_configs, key=lambda x: x['accuracy'])\n",
|
||
" best_option = f\"{best_config['name']} ({best_config['accuracy']:.1f}%)\"\n",
|
||
" \n",
|
||
" # Show fit status for each compression level\n",
" fit_status = []\n",
" for config in model_configs:\n",
" fits_mem = config['size_mb'] <= mem_limit\n",
" fits_comp = config['gflops'] <= compute_limit\n",
" if fits_mem and fits_comp:\n",
" status = \"✅\"\n",
" elif fits_mem:\n",
" status = \"⚡\" # Memory OK, compute too high\n",
" elif fits_comp:\n",
" status = \"💾\" # Compute OK, memory too high\n",
" else:\n",
" status = \"❌\"\n",
" fit_status.append(status)\n",
" \n",
" print(f\"{name:14} | {mem_limit:4d}MB | {compute_limit:5.1f}G | \"\n",
" f\"{fit_status[0]:3} | {fit_status[1]:3} | {fit_status[2]:3} | {fit_status[3]:3} | {best_option}\")\n",
" \n",
" print(f\"\\n💡 Key Insights:\")\n",
" print(f\" • Compression often determines deployment feasibility\")\n",
" print(f\" • Edge devices require 70-90% sparsity for deployment\")\n",
" print(f\" • Mobile devices can use moderate compression (50-70%)\")\n",
" print(f\" • Power constraints favor sparse models (fewer operations)\")\n",
" print(f\" • Memory limits are often more restrictive than compute limits\")\n",
"\n",
"def benchmark_sparse_inference_speedup():\n",
" \"\"\"Benchmark actual vs theoretical speedup from sparsity.\"\"\"\n",
" print(\"\\n⚡ Sparse Inference Speedup Analysis\")\n",
" print(\"=\" * 50)\n",
" \n",
" import time\n",
" \n",
" # Test different model sizes and sparsity levels\n",
" configs = [\n",
" {'size': (256, 512), 'sparsity': 0.5},\n",
" {'size': (512, 1024), 'sparsity': 0.7},\n",
" {'size': (1024, 2048), 'sparsity': 0.8},\n",
" {'size': (2048, 4096), 'sparsity': 0.9},\n",
" ]\n",
" \n",
" print(\"Model Size | Sparsity | Theoretical | Actual | Efficiency | Notes\")\n",
" print(\"-\" * 70)\n",
" \n",
" for config in configs:\n",
" size = config['size']\n",
" sparsity = config['sparsity']\n",
" \n",
" # Create sparse layer\n",
" sparse_layer = SparseLinear(size[0], size[1])\n",
" \n",
" # Load and prune weights\n",
" weights = np.random.normal(0, 0.1, (size[1], size[0]))\n",
" sparse_layer.load_dense_weights(weights)\n",
" sparse_layer.prune_weights(sparsity)\n",
" \n",
" # Benchmark\n",
" benchmark = sparse_layer.benchmark_speedup(batch_size=16, iterations=100)\n",
" \n",
" theoretical = benchmark['theoretical_speedup']\n",
" actual = benchmark['actual_speedup'] \n",
" efficiency = benchmark['efficiency']\n",
" \n",
" # Determine bottleneck\n",
" if efficiency > 0.8:\n",
" notes = \"CPU bound\"\n",
" elif efficiency > 0.5:\n",
" notes = \"Memory bound\"\n",
" else:\n",
" notes = \"Framework overhead\"\n",
" \n",
" print(f\"{size[0]}x{size[1]:4} | {sparsity:6.0%} | {theoretical:9.1f}x | \"\n",
" f\"{actual:5.1f}x | {efficiency:8.1%} | {notes}\")\n",
" \n",
" print(f\"\\n🎯 Speedup Reality Check:\")\n",
" print(f\" • Theoretical speedup assumes perfect sparse hardware\")\n",
" print(f\" • Actual speedup limited by memory bandwidth and overhead\")\n",
" print(f\" • High sparsity (>80%) shows diminishing returns\") \n",
" print(f\" • Production sparse hardware (GPUs, TPUs) achieve better efficiency\")"
]
},
{
"cell_type": "markdown",
"id": "a528a133",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"### Test: Systems Analysis Implementation\n",
"\n",
"Let's verify our systems analysis provides valuable performance insights."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "95340fc7",
"metadata": {
"nbgrader": {
"grade": true,
"grade_id": "test-systems-analysis",
"locked": false,
"points": 10,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"def test_systems_analysis():\n",
" \"\"\"Test systems analysis and profiling functions.\"\"\"\n",
" print(\"Testing systems analysis...\")\n",
" \n",
" # Test memory profiling\n",
" memory_results = profile_compression_memory()\n",
" assert memory_results['compression_ratio'] > 2.0, \"Should show significant compression\"\n",
" assert memory_results['original_size_mb'] > memory_results['compressed_size_mb'], \"Should reduce size\"\n",
" \n",
" # Test deployment analysis\n",
" analyze_deployment_scenarios()\n",
" \n",
" # Test speedup benchmarking\n",
" benchmark_sparse_inference_speedup()\n",
" \n",
" # All functions should run without errors\n",
" print(\"✅ Systems analysis test passed!\")\n",
"\n",
"test_systems_analysis()"
]
},
{
"cell_type": "markdown",
"id": "f9419421",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"## Part 7: Production Context - Real-World Pruning Systems\n",
"\n",
"Let's explore how pruning is used in production ML systems and connect our implementation to real frameworks and deployment platforms.\n",
"\n",
"### Production Pruning Systems:\n",
"1. **PyTorch Pruning**: `torch.nn.utils.prune` for magnitude and structured pruning\n",
"2. **TensorFlow Model Optimization**: Pruning API with gradual sparsity\n",
"3. **NVIDIA TensorRT**: Structured pruning for inference acceleration\n",
"4. **OpenVINO**: Intel's optimization toolkit with pruning support\n",
"5. **Edge TPU**: Google's quantization + pruning for mobile inference\n",
"6. **Apple Neural Engine**: Hardware-accelerated sparse computation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b61b9874",
"metadata": {
"lines_to_next_cell": 1,
"nbgrader": {
"grade": false,
"grade_id": "production-context",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"def compare_with_production_pruning():\n",
" \"\"\"\n",
" Compare our implementation with production pruning systems.\n",
" \n",
" This function explains how real ML frameworks handle pruning\n",
" and where our implementation fits in the broader ecosystem.\n",
" \"\"\"\n",
" print(\"🏭 Production Pruning Systems Comparison\")\n",
" print(\"=\" * 70)\n",
" \n",
" frameworks = {\n",
" 'PyTorch': {\n",
" 'pruning_methods': ['Magnitude', 'Random', 'Structured', 'Custom'],\n",
" 'sparsity_support': ['Unstructured', 'Structured (channel)', '2:4 sparsity'],\n",
" 'deployment': 'TorchScript, ONNX export with sparse ops',\n",
" 'hardware_acceleration': 'Limited - mostly research focused',\n",
" 'our_similarity': 'High - similar magnitude-based approach'\n",
" },\n",
" 'TensorFlow': {\n",
" 'pruning_methods': ['Magnitude', 'Gradual', 'Structured'],\n",
" 'sparsity_support': ['Unstructured', 'Block sparse', 'Structured'],\n",
" 'deployment': 'TensorFlow Lite with sparse inference',\n",
" 'hardware_acceleration': 'XLA optimization, mobile acceleration',\n",
" 'our_similarity': 'High - magnitude pruning with calibration'\n",
" },\n",
" 'TensorRT': {\n",
" 'pruning_methods': ['Structured only', 'Channel pruning'],\n",
" 'sparsity_support': ['2:4 structured sparsity', 'Channel removal'],\n",
" 'deployment': 'Optimized inference engine with sparse kernels',\n",
" 'hardware_acceleration': 'GPU Tensor Cores, specialized sparse ops',\n",
" 'our_similarity': 'Medium - focuses on structured pruning'\n",
" },\n",
" 'OpenVINO': {\n",
" 'pruning_methods': ['Magnitude', 'Structured', 'Mixed precision'],\n",
" 'sparsity_support': ['Unstructured', 'Block sparse', 'Channel wise'],\n",
" 'deployment': 'Intel CPU/GPU optimization with sparse support',\n",
" 'hardware_acceleration': 'Intel VPU, CPU vectorization',\n",
" 'our_similarity': 'High - comprehensive pruning toolkit'\n",
" },\n",
" 'Our TinyTorch': {\n",
" 'pruning_methods': ['Magnitude-based', 'Structured filter pruning'],\n",
" 'sparsity_support': ['Unstructured', 'Structured (filter removal)'],\n",
" 'deployment': 'Educational sparse computation simulation',\n",
" 'hardware_acceleration': 'Educational - simulated speedups',\n",
" 'our_similarity': 'Reference implementation for learning'\n",
" }\n",
" }\n",
" \n",
" print(\"Framework | Methods | Hardware Support | Deployment | Similarity\")\n",
" print(\"-\" * 70)\n",
" \n",
" for name, specs in frameworks.items():\n",
" methods_str = specs['pruning_methods'][0] # Primary method\n",
" hw_str = specs['hardware_acceleration'][:20] + \"...\" if len(specs['hardware_acceleration']) > 20 else specs['hardware_acceleration']\n",
" deploy_str = specs['deployment'][:20] + \"...\" if len(specs['deployment']) > 20 else specs['deployment']\n",
" sim_str = specs['our_similarity'][:15] + \"...\" if len(specs['our_similarity']) > 15 else specs['our_similarity']\n",
" \n",
" print(f\"{name:9} | {methods_str:12} | {hw_str:16} | {deploy_str:12} | {sim_str}\")\n",
" \n",
" print(f\"\\n🎯 Key Production Insights:\")\n",
" print(f\" • Our magnitude approach is industry standard\")\n",
" print(f\" • Production systems emphasize structured pruning for hardware\")\n",
" print(f\" • Real frameworks integrate pruning with quantization\")\n",
" print(f\" • Hardware acceleration requires specialized sparse kernels\")\n",
" print(f\" • Mobile deployment drives most production pruning adoption\")\n",
"\n",
"def demonstrate_pruning_applications():\n",
" \"\"\"Show real-world applications where pruning enables deployment.\"\"\"\n",
" print(\"\\n🌟 Real-World Pruning Applications\")\n",
" print(\"=\" * 50)\n",
" \n",
" applications = [\n",
" {\n",
" 'domain': 'Mobile Photography',\n",
" 'model': 'Portrait segmentation CNN',\n",
" 'constraints': '< 10MB, < 100ms inference',\n",
" 'pruning_strategy': '70% unstructured + quantization',\n",
" 'outcome': 'Real-time portrait mode on phone cameras',\n",
" 'example': 'Google Pixel, iPhone portrait mode'\n",
" },\n",
" {\n",
" 'domain': 'Autonomous Vehicles', \n",
" 'model': 'Object detection (YOLO)',\n",
" 'constraints': '< 500MB, < 50ms inference, safety critical',\n",
" 'pruning_strategy': '50% structured pruning for latency',\n",
" 'outcome': 'Real-time object detection for ADAS',\n",
" 'example': 'Tesla FSD, Waymo perception stack'\n",
" },\n",
" {\n",
" 'domain': 'Smart Home',\n",
" 'model': 'Voice keyword detection',\n",
" 'constraints': '< 1MB, always-on, battery powered',\n",
" 'pruning_strategy': '90% sparsity + 8-bit quantization',\n",
" 'outcome': 'Always-listening wake word detection',\n",
" 'example': 'Alexa, Google Assistant edge processing'\n",
" },\n",
" {\n",
" 'domain': 'Medical Imaging',\n",
" 'model': 'X-ray diagnosis CNN',\n",
" 'constraints': 'Edge deployment, <1GB memory',\n",
" 'pruning_strategy': '60% structured pruning + knowledge distillation',\n",
" 'outcome': 'Portable medical AI for remote clinics',\n",
" 'example': 'Google AI for radiology, Zebra Medical'\n",
" },\n",
" {\n",
" 'domain': 'Augmented Reality',\n",
" 'model': 'Hand tracking and gesture recognition',\n",
" 'constraints': '< 50MB, 60fps, mobile GPU',\n",
" 'pruning_strategy': 'Channel pruning + mobile-optimized architecture',\n",
" 'outcome': 'Real-time hand tracking for AR experiences',\n",
" 'example': 'Apple ARKit, Google ARCore, Meta Quest'\n",
" }\n",
" ]\n",
" \n",
" print(\"Domain | Model Type | Pruning Strategy | Outcome\")\n",
" print(\"-\" * 75)\n",
" \n",
" for app in applications:\n",
" domain_str = app['domain'][:18]\n",
" model_str = app['model'][:15] + \"...\" if len(app['model']) > 15 else app['model']\n",
" strategy_str = app['pruning_strategy'][:20] + \"...\" if len(app['pruning_strategy']) > 20 else app['pruning_strategy']\n",
" outcome_str = app['outcome'][:25] + \"...\" if len(app['outcome']) > 25 else app['outcome']\n",
" \n",
" print(f\"{domain_str:18} | {model_str:10} | {strategy_str:16} | {outcome_str}\")\n",
" print(f\" Example: {app['example']}\")\n",
" print()\n",
" \n",
" print(\"💡 Common Patterns in Production Pruning:\")\n",
" print(\" • Latency-critical apps use structured pruning (regular sparsity)\") \n",
" print(\" • Memory-constrained devices use aggressive unstructured pruning\")\n",
" print(\" • Safety-critical systems use conservative pruning with validation\")\n",
" print(\" • Mobile apps combine pruning + quantization for maximum compression\")\n",
" print(\" • Edge AI enables privacy (on-device processing) through compression\")"
]
},
{
"cell_type": "markdown",
"id": "6a6e6296",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"### Test: Production Context Analysis\n",
"\n",
"Let's verify our production context analysis provides valuable insights."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "34c025b2",
"metadata": {
"nbgrader": {
"grade": true,
"grade_id": "test-production-context",
"locked": false,
"points": 5,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"def test_production_context():\n",
" \"\"\"Test production context analysis.\"\"\"\n",
" print(\"Testing production context analysis...\")\n",
" \n",
" # Test framework comparison\n",
" compare_with_production_pruning()\n",
" \n",
" # Test applications demonstration\n",
" demonstrate_pruning_applications()\n",
" \n",
" # Both functions should run without errors and provide insights\n",
" print(\"✅ Production context analysis test passed!\")\n",
"\n",
"test_production_context()"
]
},
{
"cell_type": "markdown",
"id": "33bb80cd",
"metadata": {
"cell_marker": "\"\"\"",
"lines_to_next_cell": 1
},
"source": [
"## Comprehensive Testing\n",
"\n",
"Let's run a comprehensive test of all compression functionality to ensure everything works together correctly."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2898e405",
"metadata": {
"nbgrader": {
"grade": false,
"grade_id": "comprehensive-testing",
"locked": false,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"def run_all_tests():\n",
" \"\"\"Run comprehensive test suite for compression module.\"\"\"\n",
" print(\"🧪 Running Comprehensive Compression Test Suite\")\n",
" print(\"=\" * 60)\n",
" \n",
" test_functions = [\n",
" (\"Weight Redundancy Analysis\", test_redundancy_analysis),\n",
" (\"Magnitude-Based Pruning\", test_magnitude_pruning),\n",
" (\"Structured Pruning\", test_structured_pruning),\n",
" (\"Sparse Neural Network\", test_sparse_neural_network),\n",
" (\"Model Compression Pipeline\", test_compression_pipeline),\n",
" (\"Systems Analysis\", test_systems_analysis),\n",
" (\"Production Context\", test_production_context)\n",
" ]\n",
" \n",
" passed = 0\n",
" total = len(test_functions)\n",
" \n",
" for test_name, test_func in test_functions:\n",
" print(f\"\\n{'='*20} {test_name} {'='*20}\")\n",
" try:\n",
" test_func()\n",
" print(f\"✅ {test_name}: PASSED\")\n",
" passed += 1\n",
" except Exception as e:\n",
" print(f\"❌ {test_name}: FAILED - {e}\")\n",
" \n",
" print(f\"\\n🎯 Test Results: {passed}/{total} tests passed\")\n",
" \n",
" if passed == total:\n",
" print(\"🎉 All compression tests passed! Module implementation complete.\")\n",
" \n",
" # Show final demo\n",
" print(f\"\\n🚀 Final Compression Demo:\")\n",
" print(\"=\" * 50)\n",
" \n",
" # Create a realistic model and compress it\n",
" np.random.seed(42)\n",
" demo_model = {\n",
" 'backbone_conv': np.random.normal(0, 0.02, (128, 64, 3, 3)),\n",
" 'classifier_fc': np.random.normal(0, 0.01, (10, 2048)),\n",
" }\n",
" \n",
" compressor = ModelCompressor()\n",
" compressed = compressor.compress_model(demo_model, {'backbone_conv': 0.7, 'classifier_fc': 0.8})\n",
" \n",
" original_params = sum(w.size for w in demo_model.values())\n",
" compressed_params = sum(np.sum(info['weights'] != 0) for info in compressed.values())\n",
" \n",
" print(f\"🎯 FINAL RESULT:\")\n",
" print(f\" Original model: {original_params:,} parameters\")\n",
" print(f\" Compressed model: {compressed_params:,} parameters\")\n",
" print(f\" Compression achieved: {original_params/compressed_params:.1f}x smaller\")\n",
" print(f\" Size reduction: {(1-compressed_params/original_params)*100:.1f}% of parameters removed\")\n",
" print(f\" ✅ Ready for edge deployment!\")\n",
" \n",
" else:\n",
" print(f\"⚠️ {total - passed} tests failed. Review implementation.\")\n",
"\n",
"if __name__ == \"__main__\":\n",
" run_all_tests()"
]
},
{
"cell_type": "markdown",
"id": "016ded8e",
"metadata": {
"cell_marker": "\"\"\""
},
"source": [
"## 🤔 ML Systems Thinking: Interactive Questions\n",
"\n",
"Now that you've implemented neural network pruning, let's reflect on the systems engineering principles and production deployment considerations.\n",
"\n",
"**Instructions**: Think through these questions based on your implementation experience. Consider both the technical details and the broader systems implications."
]
},
{
"cell_type": "markdown",
"id": "7464a149",
"metadata": {
"cell_marker": "\"\"\"",
"nbgrader": {
"grade": true,
"grade_id": "systems-thinking-1",
"locked": false,
"points": 10,
"schema_version": 3,
"solution": true,
"task": false
}
},
"source": [
"**Question 1: Pruning Strategy Analysis**\n",
"\n",
"You implemented both magnitude-based and structured pruning in your `MagnitudePruner` and `prune_conv_filters()` functions:\n",
"\n",
"a) Why does magnitude-based pruning work so well for neural networks? What does the effectiveness of this simple heuristic tell us about neural network weight distributions?\n",
"\n",
"b) In your structured vs unstructured comparison, structured pruning achieved lower compression ratios but is preferred for deployment. Explain this tradeoff in terms of hardware efficiency and inference speed.\n",
"\n",
"c) Your compression pipeline used different sparsity targets per layer (conv: 60%, dense: 80%). Why do dense layers typically tolerate higher sparsity than convolutional layers?\n",
"\n",
"**Your Answer:**\n",
"\n",
"<!-- BEGIN SOLUTION -->\n",
"a) Magnitude-based pruning works because:\n",
"- Neural networks exhibit natural redundancy with many small, unimportant weights\n",
"- Weight magnitude correlates with importance - small weights contribute little to output\n",
"- Networks are over-parametrized, so removing low-magnitude weights has minimal accuracy impact\n",
"- The success reveals that weight distributions have long tails - most weights are small, few are large\n",
"- This natural sparsity suggests networks learn efficient representations despite overparametrization\n",
"\n",
"b) The structured vs unstructured tradeoff:\n",
"- Unstructured: Higher compression (removes individual weights) but irregular sparsity patterns\n",
"- Structured: Lower compression (removes entire filters/channels) but regular, hardware-friendly patterns\n",
"- Hardware prefers structured because: dense computation on smaller tensors is faster than sparse computation\n",
"- Memory access: structured removal reduces tensor sizes, improving cache efficiency\n",
"- No need for specialized sparse kernels - can use standard GEMM operations\n",
"- Inference speed: structured pruning provides actual speedup, unstructured often theoretical only\n",
"\n",
"c) Layer-specific sparsity tolerance:\n",
"- Dense layers: High redundancy, many parameters, more overparametrized → tolerate 80% sparsity\n",
"- Conv layers: Fewer parameters, each filter captures important spatial features → more sensitive\n",
"- First layers: Extract low-level features (edges, textures) → very sensitive to pruning\n",
"- Later layers: More abstract features with redundancy → can handle moderate pruning\n",
"- Output layers: Critical for final predictions → require conservative pruning\n",
"<!-- END SOLUTION -->"
]
},
{
"cell_type": "markdown",
"id": "51c856b6",
"metadata": {
"cell_marker": "\"\"\"",
"nbgrader": {
"grade": true,
"grade_id": "systems-thinking-2",
"locked": false,
"points": 10,
"schema_version": 3,
"solution": true,
"task": false
}
},
"source": [
"**Question 2: Sparse Computation and Hardware Efficiency**\n",
"\n",
"Your `SparseLinear` class demonstrated the challenges of actually accelerating sparse computation:\n",
"\n",
"a) Why did your sparse computation benchmarks show lower actual speedup compared to theoretical speedup? What are the main bottlenecks preventing sparse computation from achieving theoretical gains?\n",
"\n",
"b) In your deployment analysis, mobile devices required 70-90% sparsity while edge servers could use 50%. Explain how hardware constraints drive pruning requirements differently across deployment targets.\n",
"\n",
"c) You found that structured pruning provides better real-world performance than unstructured pruning. How would you design a neural network architecture that's naturally \"pruning-friendly\" from the start?\n",
"\n",
"**Your Answer:**\n",
"\n",
"<!-- BEGIN SOLUTION -->\n",
"a) Lower actual speedup due to multiple bottlenecks:\n",
"- Memory bandwidth: Sparse computation is often memory-bound, not compute-bound\n",
"- Framework overhead: PyTorch/NumPy not optimized for arbitrary sparsity patterns\n",
"- Cache inefficiency: Irregular sparse patterns hurt cache locality compared to dense operations\n",
"- Vectorization loss: SIMD instructions work best on dense, regular data patterns\n",
"- Index overhead: Storing and accessing sparse indices adds computational cost\n",
"- Hardware mismatch: Most CPUs/GPUs optimized for dense linear algebra, not sparse\n",
"\n",
"b) Hardware-driven pruning requirements:\n",
"- Mobile: Strict memory (4GB total), battery, thermal constraints → need aggressive 70-90% sparsity\n",
"- Edge servers: More memory (16GB+), power, cooling → moderate 50% sparsity sufficient\n",
"- Cloud: Abundant resources → pruning for cost optimization, not necessity\n",
"- Embedded/IoT: Extreme constraints (MB not GB) → need structured pruning + quantization\n",
"- Different hardware accelerators: Edge TPU loves sparsity, standard GPUs don't benefit much\n",
"\n",
"c) Pruning-friendly architecture design:\n",
"- Use more, smaller layers rather than fewer, large layers (easier to prune entire channels)\n",
"- Design with skip connections (allows aggressive pruning of individual branches)\n",
"- Separate feature extraction from classification (different pruning sensitivities)\n",
"- Use group convolutions (natural structured pruning boundaries)\n",
"- Design with mobile-first mindset (efficient from start, not compressed afterward)\n",
"- Consider lottery ticket initialization (start with good sparse subnetwork)\n",
"<!-- END SOLUTION -->"
]
},
{
"cell_type": "markdown",
"id": "6e6209ca",
"metadata": {
"cell_marker": "\"\"\"",
"nbgrader": {
"grade": true,
"grade_id": "systems-thinking-3",
"locked": false,
"points": 10,
"schema_version": 3,
"solution": true,
"task": false
}
},
"source": [
"**Question 3: Model Compression Pipeline and Production Deployment**\n",
"\n",
"Your `ModelCompressor` implemented a complete compression pipeline with analysis, compression, and validation:\n",
"\n",
"a) Your pipeline analyzed each layer to recommend sparsity levels. In production deployment, how would you extend this to handle dynamic workloads where the optimal sparsity might change based on accuracy requirements or latency constraints?\n",
"\n",
"b) You implemented quality validation by comparing weight preservation. But in production, what matters is end-to-end accuracy and latency. How would you design a compression validation system that ensures deployment success?\n",
"\n",
"c) Looking at your production applications analysis, why is pruning often combined with other optimizations (quantization, knowledge distillation) rather than used alone? What are the complementary benefits?\n",
"\n",
"**Your Answer:**\n",
"\n",
"<!-- BEGIN SOLUTION -->\n",
"a) Dynamic compression for production:\n",
"- A/B testing framework: gradually adjust sparsity based on accuracy metrics in production\n",
"- Multi-model serving: maintain models at different compression levels (70%, 80%, 90% sparse)\n",
"- Dynamic switching: use less compressed models during high-accuracy periods, more during low-latency needs\n",
"- Feedback loop: monitor accuracy degradation and automatically adjust compression\n",
"- User-specific models: different compression for different user segments or use cases\n",
"- Time-based adaptation: more compression during peak load, less during quality-critical periods\n",
"- Canary deployments: test compression changes on small traffic percentage first\n",
"\n",
"b) End-to-end validation system:\n",
"- Task-specific metrics: measure final accuracy, F1, BLEU - whatever matters for the application\n",
"- Latency benchmarking: measure actual inference time on target hardware\n",
"- A/B testing: compare compressed vs uncompressed models on real user traffic\n",
"- Regression testing: ensure compression doesn't break edge cases or specific inputs\n",
"- Hardware-specific validation: test on actual deployment hardware, not just development machines\n",
"- Load testing: verify performance under realistic concurrent inference loads\n",
"- Accuracy monitoring: continuous validation in production with automatic rollback triggers\n",
"\n",
"c) Why pruning is combined with other optimizations:\n",
"- Pruning + quantization: attack both parameter count and parameter size (4x + 4x = 16x compression)\n",
"- Pruning + knowledge distillation: maintain accuracy while compressing (teacher-student training)\n",
"- Complementary bottlenecks: pruning reduces compute, quantization reduces memory bandwidth\n",
"- Different deployment needs: mobile needs both size and speed, cloud needs cost optimization\n",
"- Diminishing returns: 90% pruning alone may hurt accuracy, but 70% pruning + quantization achieves same compression with better accuracy\n",
"- Hardware optimization: different techniques work better on different hardware (GPU vs mobile CPU)\n",
"<!-- END SOLUTION -->"
]
},
{
"cell_type": "markdown",
"id": "a3584d5f",
"metadata": {
"cell_marker": "\"\"\"",
"nbgrader": {
"grade": true,
"grade_id": "systems-thinking-4",
"locked": false,
"points": 10,
"schema_version": 3,
"solution": true,
"task": false
}
},
"source": [
"**Question 4: Edge AI and Deployment Enablement**\n",
"\n",
"Based on your systems analysis and deployment scenarios:\n",
"\n",
"a) Your memory profiling showed that pruning enables deployment where dense models won't fit. But pruning also changes the computational characteristics of models. How does this affect the entire ML systems stack, from training to serving?\n",
"\n",
"b) In your production applications analysis, you saw pruning enabling privacy-preserving on-device AI. Explain how compression techniques like pruning change the fundamental economics and capabilities of AI deployment.\n",
"\n",
"c) Looking forward, how do you think the relationship between model architectures, hardware capabilities, and compression techniques will evolve? What are the implications for ML systems engineering?\n",
"\n",
"**Your Answer:**\n",
"\n",
"<!-- BEGIN SOLUTION -->\n",
"a) Pruning affects the entire ML systems stack:\n",
"- Training: Need pruning-aware training, gradual sparsity increases, specialized optimizers\n",
"- Model versioning: Track both dense and compressed versions, compression parameters\n",
"- Serving infrastructure: Need sparse computation support, different batching strategies\n",
"- Monitoring: Different performance characteristics, need sparsity-aware metrics\n",
"- Debugging: Sparse models behave differently, need specialized debugging tools\n",
"- Hardware utilization: Lower compute utilization but different memory access patterns\n",
"- Load balancing: Sparse models have different latency profiles, affects request routing\n",
"\n",
"b) Compression changes AI deployment economics:\n",
"- Democratizes AI: Enables AI on devices that couldn't run dense models (phones, IoT, wearables)\n",
"- Privacy transformation: On-device processing eliminates need to send data to cloud\n",
"- Cost structure shift: Reduces cloud compute costs, shifts processing to edge devices\n",
"- Latency improvement: Local processing eliminates network round-trips\n",
"- Offline capability: Compressed models enable AI without internet connectivity\n",
"- Market expansion: Creates new use cases impossible with cloud-only AI\n",
"- Energy efficiency: Critical for battery-powered devices, enables always-on AI\n",
"\n",
"c) Future evolution predictions:\n",
"- Hardware-software co-design: Chips designed specifically for sparse computation (like Edge TPU)\n",
"- Architecture evolution: Networks designed for compression from scratch, not post-hoc optimization\n",
"- Automatic compression: ML systems that automatically find optimal compression for deployment targets\n",
"- Dynamic compression: Models that adapt compression level based on runtime constraints\n",
"- Compression-aware training: End-to-end training that considers deployment constraints\n",
"- Standardization: Common sparse formats and APIs across frameworks and hardware\n",
"- New paradigms: Mixture of experts, early exit networks - architecturally sparse models\n",
"- The future is compression-first design, not compression as afterthought\n",
"<!-- END SOLUTION -->"
]
},
{
"cell_type": "markdown",
"id": "b7aabbc8",
"metadata": {
"cell_marker": "\"\"\""
},
"source": [
"## 🎯 MODULE SUMMARY: Compression - Neural Network Pruning for Edge Deployment\n",
"\n",
"### What You Accomplished\n",
"\n",
"In this module, you built a complete **neural network compression system** using pruning techniques that remove 70% of parameters while maintaining 95%+ accuracy. You learned to:\n",
"\n",
"**🔧 Core Implementation Skills:**\n",
"- **Magnitude-based pruning**: Identified and removed unimportant weights using simple yet effective heuristics\n",
"- **Structured vs unstructured pruning**: Built both approaches and understood their hardware tradeoffs\n",
"- **Sparse computation**: Implemented efficient sparse linear layers and benchmarked real vs theoretical speedups\n",
"- **End-to-end compression pipeline**: Created production-ready model compression with analysis, validation, and optimization\n",
"\n",
"**📊 Systems Engineering Insights:**\n",
"- **Neural network redundancy**: Discovered that networks contain 70-90% redundant parameters that can be safely removed\n",
"- **Hardware efficiency tradeoffs**: Understood why structured pruning provides actual speedup while unstructured gives theoretical speedup\n",
"- **Memory vs compute optimization**: Learned how pruning reduces both memory footprint and computational requirements\n",
"- **Deployment enablement**: Saw how compression makes models fit where they previously couldn't run\n",
"\n",
"**🏭 Production Understanding:**\n",
"- **Edge deployment scenarios**: Analyzed how pruning enables mobile, IoT, and embedded AI applications\n",
"- **Compression pipeline design**: Built systems that analyze, compress, and validate models for production deployment\n",
"- **Hardware-aware optimization**: Understood how different deployment targets require different pruning strategies\n",
"- **Quality assurance**: Implemented validation systems to ensure compression doesn't degrade model performance\n",
"\n",
"### ML Systems Engineering Connection\n",
"\n",
"This module demonstrates that **compression is fundamentally about enabling deployment**, not just reducing model size. You learned:\n",
"\n",
"- **Why redundancy exists**: Neural networks are over-parametrized, creating massive compression opportunities\n",
"- **Hardware drives strategy**: Structured vs unstructured pruning choice depends on target hardware capabilities\n",
"- **Compression enables privacy**: On-device processing becomes possible when models are small enough\n",
"- **Systems thinking**: Compression affects the entire ML stack from training to serving\n",
"\n",
"### Real-World Impact\n",
"\n",
"Your compression implementation mirrors production systems used by:\n",
"- **Mobile AI**: Apple's Neural Engine, Google's Edge TPU leverage sparsity for efficient inference\n",
"- **Autonomous vehicles**: Tesla FSD uses pruning for real-time object detection\n",
"- **Smart devices**: Alexa, Google Assistant use extreme compression for always-on wake word detection\n",
"- **Medical AI**: Portable diagnostic systems enabled by compressed models\n",
"\n",
"The techniques you built make the difference between AI that runs in the cloud versus AI that runs in your pocket - enabling privacy, reducing latency, and creating entirely new application categories.\n",
"\n",
"**Next**: This completes our ML Systems engineering journey! You've now built the complete stack from tensors to production deployment, understanding how each component contributes to building real-world AI systems that scale."
]
}
],
"metadata": {
"jupytext": {
"main_language": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}