What Does Conv2d Output with 224×224 Input, Kernel 5, Stride 2?
Conv2d with 224×224 input, kernel_size=5, stride=2, padding=2 outputs 112×112. The formula gives: floor((224 + 2×2 - 5) / 2) + 1 = 112.
Formula Breakdown
The Conv2d output size formula is:
output_size = floor((input_size - kernel_size + 2 * padding) / stride) + 1
Plugging in the values for 224×224 input:
output = floor((224 - 5 + 2*2) / 2) + 1
output = floor((224 - 5 + 4) / 2) + 1
output = floor(223 / 2) + 1
output = floor(111.5) + 1
output = 112
So the spatial dimensions go from 224×224 to 112×112.
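To avoid redoing this arithmetic by hand, the formula can be wrapped in a small helper. This is a hypothetical utility (conv2d_out_size is not a PyTorch function), using integer floor division:

def conv2d_out_size(input_size, kernel_size, stride=1, padding=0):
    # floor((input + 2*padding - kernel) / stride) + 1
    return (input_size + 2 * padding - kernel_size) // stride + 1

print(conv2d_out_size(224, kernel_size=5, stride=2, padding=2))  # 112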
PyTorch Code Example
import torch
import torch.nn as nn
# Define the Conv2d layer
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=5, stride=2, padding=2)
# Create input tensor: (batch, channels, height, width)
x = torch.randn(1, 3, 224, 224)
output = conv(x)
print(output.shape) # torch.Size([1, 64, 112, 112])
# Verify with formula
expected = (224 + 2 * 2 - 5) // 2 + 1
print(f"Expected output size: {expected}x{expected}") # 112x112
Architecture Context
Strided 5×5 convolutions are used in some GAN generators and discriminators, as well as in certain Inception variants, as a way to reduce spatial resolution without a separate pooling layer.
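As an illustration (a minimal sketch, not taken from any specific paper, and assuming the torch/nn imports above), stacking three such layers halves the resolution three times:

downsampler = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=5, stride=2, padding=2),    # 224 -> 112
    nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2),  # 112 -> 56
    nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, kernel_size=5, stride=2, padding=2), # 56 -> 28
)
print(downsampler(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 256, 28, 28])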
Parameter Count
A Conv2d(3, 64, 5) layer has:
parameters = in_channels * out_channels * kernel_size^2 + out_channels (bias)
parameters = 3 * 64 * 5 * 5 + 64
parameters = 4,864
This layer has 4,864 trainable parameters (4,800 weights + 64 bias terms). Note that stride and padding do not affect the parameter count.
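This is easy to confirm in PyTorch by summing parameter element counts (same imports as above):

conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=5, stride=2, padding=2)
print(sum(p.numel() for p in conv.parameters()))  # 4864
print(conv.weight.shape)  # torch.Size([64, 3, 5, 5]) -> 4800 weights
print(conv.bias.shape)    # torch.Size([64])         -> 64 bias terms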
Practical Tips
- Memory usage: The output feature map for a single image is 64 × 112 × 112 = 802,816 float values (3.06 MB in float32).
- Batch dimension: Multiply memory by batch size. A batch of 32 uses 98.0 MB for this layer's output alone (a quick sanity check follows this list).
- Same padding rule: For any odd kernel_size, setting padding = (kernel_size - 1) / 2 with stride=1 preserves spatial dimensions; for even kernel sizes this value is not an integer, so the rule does not apply.
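A back-of-envelope check of those memory numbers (plain arithmetic, assuming 4 bytes per float32 value):

values_per_image = 64 * 112 * 112             # 802,816 values
mb_per_image = values_per_image * 4 / 1024**2
print(f"{mb_per_image:.2f} MB")               # 3.06 MB
print(f"{mb_per_image * 32:.1f} MB")          # 98.0 MB for a batch of 32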