How Many Parameters Does Conv2d(128, 256, 3) Have?
Conv2d(128, 256, 3) has 295,168 trainable parameters. This includes 294,912 weights and 256 bias terms.
Formula Breakdown
For a Conv2d layer, the parameter count is:
parameters = in_channels * out_channels * kernel_size^2 + out_channels (bias)
parameters = 128 * 256 * 3 * 3 + 256
parameters = 128 * 256 * 9 + 256
parameters = 294,912 + 256
parameters = 295,168
Each of the 256 output filters is a 3D kernel of shape (128, 3, 3). That gives 256 × 128 × 3 × 3 = 294,912 weights, plus 256 bias terms. Total: 295,168 trainable parameters.
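The arithmetic above generalizes to any square-kernel Conv2d. A minimal helper (a sketch for illustration, not a PyTorch API):

```python
def conv2d_params(in_channels: int, out_channels: int,
                  kernel_size: int, bias: bool = True) -> int:
    """Trainable parameters in a square-kernel Conv2d layer."""
    weights = in_channels * out_channels * kernel_size ** 2
    return weights + (out_channels if bias else 0)

print(conv2d_params(128, 256, 3))              # 295168
print(conv2d_params(128, 256, 3, bias=False))  # 294912
```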
Memory Usage
In float32 (4 bytes per parameter), this layer's 295,168 parameters occupy about 1.13 MiB. During training with the Adam optimizer, which keeps two moment estimates (m and v) per parameter, multiply by 3 for roughly 3.38 MiB of parameter plus optimizer-state memory; gradients add another copy on top of that.
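A quick back-of-the-envelope check of those figures, assuming 4-byte float32 values and 1 MiB = 2^20 bytes:

```python
BYTES_FP32 = 4
params = 128 * 256 * 3 * 3 + 256  # 295,168 weights + biases

weights_mib = params * BYTES_FP32 / 2**20
print(f"Parameters alone: {weights_mib:.2f} MiB")  # 1.13 MiB

# Adam keeps two extra moment buffers (m and v) per parameter,
# so parameters + optimizer state is roughly 3x the base footprint:
adam_mib = 3 * weights_mib
print(f"With Adam state: {adam_mib:.2f} MiB")  # 3.38 MiB
```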
Architecture Context
This configuration appears at the 128-to-256 channel expansion in VGG and in the deeper stages of ResNet. Understanding parameter counts helps you estimate model size, memory requirements, and the risk of overfitting: layers with more parameters need more training data and compute to train effectively.
Convolutional layers are parameter-efficient compared to fully connected layers because weights are shared across spatial positions. A Conv2d(128, 256, 3) processes any input spatial size with the same 295,168 parameters.
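To make that efficiency concrete, compare against a fully connected layer computing the same input-to-output mapping. The 32×32 input spatial size here is an assumption chosen purely for illustration (a valid 3×3 convolution on a 32×32 map yields 30×30 output):

```python
conv_params = 128 * 256 * 3 * 3 + 256        # 295,168, independent of input size
in_feats = 128 * 32 * 32                     # flattened 128-channel 32x32 input
out_feats = 256 * 30 * 30                    # flattened 256-channel 30x30 output
fc_params = in_feats * out_feats + out_feats  # weights + biases: ~30 billion

print(f"Conv2d:  {conv_params:,}")
print(f"Linear:  {fc_params:,}")
print(f"Ratio:   ~{fc_params // conv_params:,}x")  # ~100,000x
```

(The numbers are computed arithmetically rather than by instantiating the Linear layer, since its weight tensor alone would require over 100 GB.)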
PyTorch Code to Verify
```python
import torch.nn as nn

layer = nn.Conv2d(128, 256, kernel_size=3)

# Count parameters
total = sum(p.numel() for p in layer.parameters())
print(f"Total parameters: {total}")  # 295168

# Break it down
print(f"Weight shape: {layer.weight.shape}")     # torch.Size([256, 128, 3, 3])
print(f"Weight params: {layer.weight.numel()}")  # 294912
print(f"Bias shape: {layer.bias.shape}")         # torch.Size([256])
print(f"Bias params: {layer.bias.numel()}")      # 256

# Without bias (common in batch-normalized networks)
layer_no_bias = nn.Conv2d(128, 256, kernel_size=3, bias=False)
print(f"Without bias: {sum(p.numel() for p in layer_no_bias.parameters())}")  # 294912
```
Comparison: With vs. Without Bias
| Configuration | Parameters |
|---|---|
| Conv2d(128, 256, 3) (with bias) | 295,168 |
| Conv2d(128, 256, 3, bias=False) | 294,912 |
When a convolutional layer is followed by BatchNorm, the bias is redundant: BatchNorm subtracts the per-channel mean, cancelling any constant offset, and then adds its own learnable per-channel bias. Setting bias=False saves 256 parameters per layer.
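The trade can be verified directly. BatchNorm2d(256) contributes its own 2 × 256 = 512 learnable parameters (per-channel scale and shift), on top of the bias-free convolution:

```python
import torch.nn as nn

# Bias-free conv followed by BatchNorm, which supplies its own
# learnable per-channel scale (gamma) and shift (beta):
block = nn.Sequential(
    nn.Conv2d(128, 256, kernel_size=3, bias=False),
    nn.BatchNorm2d(256),
)
total = sum(p.numel() for p in block.parameters())
print(total)  # 294912 conv weights + 512 BatchNorm params = 295424
```

Note that BatchNorm's running mean and variance are buffers, not trainable parameters, so they do not appear in this count.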