What Does Conv2d Output with 512×512 Input, Kernel 3?

Conv2d with 512×512 input, kernel_size=3, stride=1, padding=1 outputs 512×512. This is a “same” convolution — the output has the same spatial dimensions as the input. The formula gives: floor((512 + 2×1 - 3) / 1) + 1 = 512.

Formula Breakdown

The Conv2d output size formula is:

output_size = floor((input_size - kernel_size + 2 * padding) / stride) + 1

Plugging in the values for 512×512 input:

output = floor((512 - 3 + 2*1) / 1) + 1
output = floor((512 - 3 + 2) / 1) + 1
output = floor(511 / 1) + 1
output = 511 + 1
output = 512

So the spatial dimensions go from 512×512 to 512×512.
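The formula above can be wrapped in a small helper to check other configurations. This is a sketch; `conv2d_out_size` is a hypothetical name, not a PyTorch function:

```python
def conv2d_out_size(input_size: int, kernel_size: int,
                    stride: int = 1, padding: int = 0) -> int:
    """Spatial output size of a Conv2d layer along one dimension."""
    # Integer division implements the floor in the formula
    return (input_size - kernel_size + 2 * padding) // stride + 1

print(conv2d_out_size(512, 3, stride=1, padding=1))  # 512 ("same" convolution)
print(conv2d_out_size(512, 3, stride=2, padding=1))  # 256 (stride 2 halves the size)
```

Applying the helper per dimension also handles non-square inputs, since height and width are computed independently.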

PyTorch Code Example

import torch
import torch.nn as nn

# Define the Conv2d layer
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1)

# Create input tensor: (batch, channels, height, width)
x = torch.randn(1, 3, 512, 512)
output = conv(x)
print(output.shape)  # torch.Size([1, 64, 512, 512])

# Verify with formula
expected = (512 + 2 * 1 - 3) // 1 + 1
print(f"Expected output size: {expected}x{expected}")  # 512x512

Architecture Context

This is the standard “same” convolution that preserves spatial dimensions. It is used extensively in VGG, ResNet, and DenseNet architectures.
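Because each layer preserves the spatial size, such convolutions can be stacked freely. A minimal VGG-style sketch (channel counts here are illustrative, not from any specific architecture):

```python
import torch
import torch.nn as nn

# Stacked 3x3 "same" convolutions: spatial size stays 512x512 throughout
block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 3, 512, 512)
print(block(x).shape)  # torch.Size([1, 64, 512, 512])
```

Only the channel dimension changes between layers; downsampling in these architectures is done separately, with pooling or strided convolutions.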

Parameter Count

A Conv2d(3, 64, 3) layer has:

parameters = in_channels * out_channels * kernel_size^2 + out_channels (bias)
parameters = 3 * 64 * 3 * 3 + 64
parameters = 1,728 + 64
parameters = 1,792

This layer has 1,792 trainable parameters (1,728 weights + 64 bias terms).
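The count can be verified directly from the layer's parameter tensors, a quick sanity check against the formula:

```python
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3)

weights = conv.weight.numel()  # 64 * 3 * 3 * 3 = 1728
biases = conv.bias.numel()     # one bias per output channel = 64
total = sum(p.numel() for p in conv.parameters())

print(weights, biases, total)  # 1728 64 1792
```

Note the parameter count depends only on channels and kernel size, not on the 512×512 input resolution.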
