What Does Conv2d Output with 112×112 Input, Kernel 3, Stride 2?
A Conv2d layer with a 112×112 input, kernel_size=3, stride=2, and padding=1 outputs 56×56. The formula gives: floor((112 + 2×1 - 3) / 2) + 1 = 56.
Formula Breakdown
The Conv2d output size formula (assuming the default dilation=1) is:
output_size = floor((input_size - kernel_size + 2 * padding) / stride) + 1
Plugging in the values for 112×112 input:
output = floor((112 - 3 + 2*1) / 2) + 1
output = floor((112 - 3 + 2) / 2) + 1
output = floor(111 / 2) + 1
output = floor(55.5) + 1
output = 56
So the spatial dimensions go from 112×112 to 56×56.
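As a reusable check, the formula can be wrapped in a small helper. The function name conv2d_output_size is my own, not a PyTorch API:

def conv2d_output_size(input_size, kernel_size, stride=1, padding=0):
    # Direct translation of the formula above (dilation=1 assumed)
    return (input_size - kernel_size + 2 * padding) // stride + 1

print(conv2d_output_size(112, kernel_size=3, stride=2, padding=1))  # 56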
PyTorch Code Example
import torch
import torch.nn as nn
# Define the Conv2d layer
conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=2, padding=1)
# Create input tensor: (batch, channels, height, width)
x = torch.randn(1, 64, 112, 112)
output = conv(x)
print(output.shape) # torch.Size([1, 128, 56, 56])
# Verify with formula
expected = (112 + 2 * 1 - 3) // 2 + 1
print(f"Expected output size: {expected}x{expected}") # 56x56
Architecture Context
This is a strided convolution that halves the spatial dimensions, and unlike max-pooling its downsampling filter is learned. ResNet uses stride-2 convolutions at each stage transition, and ConvNeXt relies on dedicated stride-2 convolutions for all of its downsampling rather than pooling.
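A minimal sketch contrasting the two downsampling styles; the specific layer choices are illustrative, not taken from either architecture:

import torch
import torch.nn as nn

x = torch.randn(1, 64, 112, 112)

# Strided-conv downsampling: the filter that downsamples is learned
strided = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1)
print(strided(x).shape)  # torch.Size([1, 128, 56, 56])

# Pooling-based downsampling: fixed 2x2 max-pool, then a stride-1 conv
pooled = nn.Sequential(
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
)
print(pooled(x).shape)  # torch.Size([1, 128, 56, 56])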
Parameter Count
A Conv2d(64, 128, 3) layer has:
parameters = in_channels * out_channels * kernel_size^2 + out_channels (bias)
parameters = 64 * 128 * 3 * 3 + 128
parameters = 73,728 + 128
parameters = 73,856
This layer has 73,856 trainable parameters (73,728 weights + 128 bias terms). Note that stride and padding do not affect the parameter count.
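You can confirm the count directly in PyTorch:

import torch.nn as nn

conv = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1)
print(sum(p.numel() for p in conv.parameters()))  # 73856
print(conv.weight.numel(), conv.bias.numel())     # 73728 128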
Practical Tips
- Memory usage: The output feature map for a single image is 128 × 56 × 56 = 401,408 float values (1.53 MiB in float32).
- Batch dimension: Multiply memory by batch size. A batch of 32 uses 49.0 MiB for this layer's output alone.
- Same padding rule: For any odd kernel size, setting padding = (kernel_size - 1) / 2 with stride=1 preserves spatial dimensions. Both points are checked in the sketch below.
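A minimal sketch verifying the memory estimate and the same-padding rule; the 5×5 layer is just an illustrative odd-kernel case:

import torch
import torch.nn as nn

x = torch.randn(32, 64, 112, 112)
out = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1)(x)
print(out.shape)                                 # torch.Size([32, 128, 56, 56])
print(out.numel() * out.element_size() / 2**20)  # 49.0 (MiB)

# Same-padding check: odd kernel, padding = (kernel_size - 1) // 2, stride=1
same = nn.Conv2d(64, 64, kernel_size=5, stride=1, padding=2)
print(same(torch.randn(1, 64, 56, 56)).shape)    # torch.Size([1, 64, 56, 56])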