TinyTorch/tinytorch/core/dense.py

Commit e82bc8ba97 by Vijay Janapa Reddi (2025-09-23 10:00:33 -04:00): Complete comprehensive system validation and cleanup

🎯 Major Accomplishments:
• All 15 module dev files validated and unit tests passing
• Comprehensive integration tests (11/11 pass)
• All 3 examples working with PyTorch-like API (XOR, MNIST, CIFAR-10)
• Training capability verified (4/4 tests pass, XOR shows 35.8% improvement)
• Clean directory structure (modules/source/ → modules/)

🧹 Repository Cleanup:
• Removed experimental/debug files and old logos
• Deleted redundant documentation (API_SIMPLIFICATION_COMPLETE.md, etc.)
• Removed empty module directories and backup files
• Streamlined examples (kept modern API versions only)
• Cleaned up old TinyGPT implementation (concept moved to examples)

📊 Validation Results:
• Module unit tests: 15/15 
• Integration tests: 11/11 
• Example validation: 3/3 
• Training validation: 4/4 

🔧 Key Fixes:
• Fixed activations module requires_grad test
• Fixed networks module layer name test (Dense → Linear)
• Fixed spatial module Conv2D weights attribute issues
• Updated all documentation to reflect new structure

📁 Structure Improvements:
• Simplified modules/source/ → modules/ (removed unnecessary nesting)
• Added comprehensive validation test suites
• Created VALIDATION_COMPLETE.md and WORKING_MODULES.md documentation
• Updated book structure to reflect ML evolution story

🚀 System Status: READY FOR PRODUCTION
All components validated, examples working, training capability verified.
Test-first approach successfully implemented and proven.

# AUTOGENERATED! DO NOT EDIT! File to edit: ../../modules/source/05_networks/networks_dev.ipynb.
# %% auto 0
__all__ = ['Sequential', 'create_mlp', 'MLP']
# %% ../../modules/source/05_networks/networks_dev.ipynb 1
import numpy as np
import sys
import os
from typing import List, Optional
import matplotlib.pyplot as plt
# Import all the building blocks we need - try package first, then local modules
try:
    from tinytorch.core.tensor import Tensor
    from tinytorch.core.layers import Dense
    from tinytorch.core.activations import ReLU, Sigmoid, Tanh, Softmax
except ImportError:
    # For development, import from local modules
    sys.path.append(os.path.join(os.path.dirname(__file__), '..', '02_tensor'))
    sys.path.append(os.path.join(os.path.dirname(__file__), '..', '03_activations'))
    sys.path.append(os.path.join(os.path.dirname(__file__), '..', '04_layers'))
    from tensor_dev import Tensor
    from activations_dev import ReLU, Sigmoid, Tanh, Softmax
    from layers_dev import Dense
# %% ../../modules/source/05_networks/networks_dev.ipynb 7
class Sequential:
    """
    Sequential Network: Composes layers in sequence

    The most fundamental network architecture.
    Applies layers in order: f(x) = layer_n(...layer_2(layer_1(x)))
    """

    def __init__(self, layers: Optional[List] = None):
        """
        Initialize Sequential network with layers.

        Args:
            layers: List of layers to compose in order (optional, defaults to empty list)

        TODO: Store the layers and implement forward pass

        APPROACH:
        1. Store the layers list as an instance variable
        2. Initialize empty list if no layers provided
        3. Prepare for forward pass implementation

        EXAMPLE:
        Sequential([Dense(3, 4), ReLU(), Dense(4, 2)])
        creates a 3-layer network: Dense → ReLU → Dense

        HINTS:
        - Use self.layers to store the layers
        - Handle empty initialization case

        LEARNING CONNECTIONS:
        - This is equivalent to torch.nn.Sequential in PyTorch
        - Used in every neural network to chain layers together
        - Foundation for models like VGG, ResNet, and transformers
        - Enables modular network design and experimentation
        """
        ### BEGIN SOLUTION
        self.layers = layers if layers is not None else []
        ### END SOLUTION

    def forward(self, x: Tensor) -> Tensor:
        """
        Forward pass through all layers in sequence.

        Args:
            x: Input tensor

        Returns:
            Output tensor after passing through all layers

        TODO: Implement sequential forward pass through all layers

        APPROACH:
        1. Start with the input tensor
        2. Apply each layer in sequence
        3. Each layer's output becomes the next layer's input
        4. Return the final output

        EXAMPLE:
        Input:          Tensor([[1, 2, 3]])
        Layer1 (Dense): Tensor([[1.4, 2.8]])
        Layer2 (ReLU):  Tensor([[1.4, 2.8]])
        Layer3 (Dense): Tensor([[0.7]])
        Output:         Tensor([[0.7]])

        HINTS:
        - Use a for loop: for layer in self.layers:
        - Apply each layer: x = layer(x)
        - The output of one layer becomes input to the next
        - Return the final result

        LEARNING CONNECTIONS:
        - This is the core of feedforward neural networks
        - Powers inference in every deployed model
        - Critical for real-time predictions in production
        - Foundation for gradient flow in backpropagation
        """
        ### BEGIN SOLUTION
        # Apply each layer in sequence
        for layer in self.layers:
            x = layer(x)
        return x
        ### END SOLUTION

    def __call__(self, x: Tensor) -> Tensor:
        """Make the network callable: sequential(x) instead of sequential.forward(x)"""
        return self.forward(x)

    def add(self, layer):
        """Add a layer to the network."""
        self.layers.append(layer)
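# Example usage (an illustrative sketch, not part of the generated module; it
# assumes Dense and ReLU behave as defined in the layers and activations
# modules, and that Tensor accepts a nested list as in the docstring examples):
#
#   net = Sequential([Dense(3, 4), ReLU(), Dense(4, 2)])
#   y = net(Tensor([[1.0, 2.0, 3.0]]))  # forward pass: Dense -> ReLU -> Dense
#   net.add(Sigmoid())                  # layers can also be appended after construction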
# %% ../../modules/source/05_networks/networks_dev.ipynb 11
def create_mlp(input_size: int, hidden_sizes: List[int], output_size: int,
               activation=ReLU, output_activation=Sigmoid) -> Sequential:
    """
    Create a Multi-Layer Perceptron (MLP) network.

    Args:
        input_size: Number of input features
        hidden_sizes: List of hidden layer sizes
        output_size: Number of output features
        activation: Activation function for hidden layers (default: ReLU)
        output_activation: Activation function for output layer (default: Sigmoid)

    Returns:
        Sequential network with MLP architecture

    TODO: Implement MLP creation with alternating Dense and activation layers.

    APPROACH:
    1. Start with an empty list of layers
    2. Add layers in this pattern:
       - Dense(input_size → first_hidden_size)
       - Activation()
       - Dense(first_hidden_size → second_hidden_size)
       - Activation()
       - ...
       - Dense(last_hidden_size → output_size)
       - Output_activation()
    3. Return Sequential(layers)

    EXAMPLE:
    create_mlp(3, [4, 2], 1) creates:
    Dense(3→4) → ReLU → Dense(4→2) → ReLU → Dense(2→1) → Sigmoid

    HINTS:
    - Start with layers = []
    - Track current_size starting with input_size
    - For each hidden_size: add Dense(current_size, hidden_size), then activation
    - Finally add Dense(last_hidden_size, output_size), then output_activation
    - Return Sequential(layers)

    LEARNING CONNECTIONS:
    - This pattern is used in every feedforward network implementation
    - Foundation for architectures like autoencoders and GANs
    - Enables rapid prototyping of neural architectures
    - Similar to tf.keras.Sequential with Dense layers
    """
    layers = []
    current_size = input_size

    # Add hidden layers with activations
    for hidden_size in hidden_sizes:
        layers.append(Dense(current_size, hidden_size))
        layers.append(activation())
        current_size = hidden_size

    # Add output layer with output activation
    layers.append(Dense(current_size, output_size))
    layers.append(output_activation())
    return Sequential(layers)
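# Example usage (an illustrative sketch; the resulting architecture follows the
# EXAMPLE in the docstring above):
#
#   net = create_mlp(3, [4, 2], 1)
#   # net.layers is [Dense(3→4), ReLU, Dense(4→2), ReLU, Dense(2→1), Sigmoid]
#   y = net(Tensor([[0.5, -1.0, 2.0]]))  # output lies in (0, 1) thanks to the Sigmoid head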
# %% ../../modules/source/05_networks/networks_dev.ipynb 24
class MLP:
    """
    Multi-Layer Perceptron (MLP) class.

    A convenient wrapper around Sequential networks for standard MLP architectures.
    Maintains parameter information and provides a clean interface.

    Args:
        input_size: Number of input features
        hidden_size: Size of the single hidden layer
        output_size: Number of output features
        activation: Activation function for hidden layer (default: ReLU)
        output_activation: Activation function for output layer (default: None, i.e. no output activation)
    """

    def __init__(self, input_size: int, hidden_size: int, output_size: int,
                 activation=ReLU, output_activation=None):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size

        # Build the network layers
        layers = []
        # Input to hidden layer
        layers.append(Dense(input_size, hidden_size))
        layers.append(activation())
        # Hidden to output layer
        layers.append(Dense(hidden_size, output_size))
        if output_activation is not None:
            layers.append(output_activation())
        self.network = Sequential(layers)

    def forward(self, x):
        """Forward pass through the MLP network."""
        return self.network.forward(x)

    def __call__(self, x):
        """Make the MLP callable."""
        return self.forward(x)
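
# Smoke-test demo (an illustrative sketch, not part of the generated module; it
# assumes Tensor accepts nested lists as in the docstring examples and that
# Dense layers initialize their own weights):
if __name__ == "__main__":
    x = Tensor([[1.0, 2.0, 3.0]])

    # Hand-composed network
    net = Sequential([Dense(3, 4), ReLU(), Dense(4, 2)])
    print("Sequential output:", net(x))

    # Factory-built MLP with a sigmoid head
    mlp_fn = create_mlp(3, [4, 2], 1)
    print("create_mlp output:", mlp_fn(x))

    # Class wrapper with a single hidden layer and no output activation
    mlp = MLP(input_size=3, hidden_size=4, output_size=2)
    print("MLP output:", mlp(x))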