How Many Parameters Does Conv2d(3, 64, 7) Have?

Conv2d(3, 64, 7) has 9,472 trainable parameters. This includes 9,408 weights and 64 bias terms.

Formula Breakdown

For a Conv2d layer, the parameter count is:

parameters = in_channels * out_channels * kernel_size^2 + out_channels (bias)
parameters = 3 * 64 * 7 * 7 + 64
parameters = 3 * 64 * 49 + 64
parameters = 9,408 + 64
parameters = 9,472

Each of the 64 output filters is a 3D kernel of shape (3, 7, 7). That gives 64 × 3 × 7 × 7 = 9,408 weights, plus 64 bias terms. Total: 9,472 trainable parameters.
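
The same arithmetic is easy to wrap in a small helper. A minimal sketch (the function name conv2d_param_count is made up for illustration; it assumes a square kernel and no grouping):

def conv2d_param_count(in_channels, out_channels, kernel_size, bias=True):
    # weights: one (in_channels, k, k) kernel per output channel
    weights = in_channels * out_channels * kernel_size * kernel_size
    return weights + (out_channels if bias else 0)

print(conv2d_param_count(3, 64, 7))              # 9472
print(conv2d_param_count(3, 64, 7, bias=False))  # 9408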

Memory Usage

In float32 (4 bytes per parameter), the layer's 9,472 parameters take about 37.9 KB, roughly 0.04 MB, for the weights alone. During training with the Adam optimizer, which keeps two additional state tensors (first and second moments) per parameter, the footprint is roughly 3×, about 0.11 MB, before counting gradients and activations.
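
A back-of-the-envelope check in plain Python, assuming 4 bytes per float32 value and Adam's two extra state tensors per parameter (gradients and activations would add more on top):

params = 9472
bytes_per_float32 = 4

weights_mb = params * bytes_per_float32 / 1e6   # ~0.038 MB
adam_mb = weights_mb * 3                        # weights + 2 Adam moment tensors
print(f"Weights only: {weights_mb:.3f} MB")
print(f"With Adam state: {adam_mb:.3f} MB")     # ~0.114 MB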

Architecture Context

This kernel configuration matches ResNet's conv1, the first convolution applied to raw RGB images (the standard ResNet variant also uses stride=2, padding=3, and bias=False, which drops the count to 9,408). Understanding parameter counts helps you estimate model size, memory requirements, and the risk of overfitting. Layers with more parameters need more training data and compute to train effectively.
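
As a quick check, you can inspect ResNet-18's first convolution with torchvision (assuming torchvision is installed; the weights=None argument requires torchvision 0.13 or newer):

from torchvision.models import resnet18

model = resnet18(weights=None)
print(model.conv1)
# Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
print(model.conv1.weight.numel())  # 9408 -- bias is omitted because BatchNorm follows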

Convolutional layers are parameter-efficient compared to fully connected layers because weights are shared across spatial positions. A Conv2d(3, 64, 7) processes any input spatial size with the same 9,472 parameters.
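
A short sketch of both points: the same layer accepts different spatial sizes without its parameter count changing, while a fully connected layer mapping a flattened 224×224 RGB image to 64 units (an illustrative size, not anything from the text above) needs millions of weights:

import torch
import torch.nn as nn

layer = nn.Conv2d(3, 64, kernel_size=7)
for size in (32, 112, 224):  # arbitrary example input sizes
    out = layer(torch.randn(1, 3, size, size))
    print(size, tuple(out.shape), sum(p.numel() for p in layer.parameters()))  # always 9472

fc = nn.Linear(3 * 224 * 224, 64)  # fully connected equivalent for one fixed input size
print(sum(p.numel() for p in fc.parameters()))  # 9,633,856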

PyTorch Code to Verify

import torch.nn as nn

layer = nn.Conv2d(3, 64, kernel_size=7)

# Count parameters
total = sum(p.numel() for p in layer.parameters())
print(f"Total parameters: {total}")  # 9,472

# Break it down
print(f"Weight shape: {layer.weight.shape}")  # (64, 3, 7, 7)
print(f"Weight params: {layer.weight.numel()}")  # 9,408
print(f"Bias shape: {layer.bias.shape}")  # (64,)
print(f"Bias params: {layer.bias.numel()}")  # 64

# Without bias (common in batch-normalized networks)
layer_no_bias = nn.Conv2d(3, 64, kernel_size=7, bias=False)
print(f"Without bias: {sum(p.numel() for p in layer_no_bias.parameters())}")  # 9,408

Comparison: With vs. Without Bias

Configuration                    Parameters
Conv2d(3, 64, 7) (with bias)     9,472
Conv2d(3, 64, 7, bias=False)     9,408

When a convolutional layer is followed by BatchNorm, the bias is redundant: BatchNorm subtracts the per-channel mean, which absorbs any constant offset, and then applies its own learnable shift. Setting bias=False saves 64 parameters per layer here.
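
A minimal sketch of that pattern, counting the parameters of the conv/BatchNorm pair (BatchNorm2d(64) contributes 64 scale and 64 shift parameters of its own; its running statistics are buffers, not trainable parameters):

import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, bias=False),  # 9,408 weights
    nn.BatchNorm2d(64),                           # 64 scale + 64 shift = 128
)
print(sum(p.numel() for p in block.parameters()))  # 9536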
