Embedding Layer Shape Calculator

Calculate the output shape of a PyTorch nn.Embedding layer. Enter vocab size, embedding dimension, and sequence length to see the output tensor shape.

Built by Michael Lip

Frequently Asked Questions

What is the output shape of nn.Embedding?

nn.Embedding(num_embeddings, embedding_dim) converts integer indices to dense vectors. Given an input of integer indices with shape [batch, seq_len], the output has shape [batch, seq_len, embedding_dim].
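A quick sketch of the shape rule (the batch and sequence sizes here are illustrative):

```python
import torch
import torch.nn as nn

vocab_size, embed_dim = 30000, 768
batch, seq_len = 4, 128

emb = nn.Embedding(vocab_size, embed_dim)

# Input: integer token indices of shape [batch, seq_len].
tokens = torch.randint(0, vocab_size, (batch, seq_len))

out = emb(tokens)
print(out.shape)  # torch.Size([4, 128, 768])
```

The embedding_dim is appended as a new trailing dimension; every other input dimension passes through unchanged, so a [seq_len] input would likewise produce [seq_len, embedding_dim].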

How many parameters does an Embedding layer have?

An Embedding layer has num_embeddings * embedding_dim parameters. For a vocabulary of 30,000 tokens with 768-dimensional embeddings, that's 30,000 * 768 = 23,040,000 parameters (about 92 MB, or 88 MiB, in float32 at 4 bytes per parameter).
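You can verify the count directly; this sketch uses the same 30,000 × 768 example:

```python
import torch.nn as nn

emb = nn.Embedding(30000, 768)

# The only parameter is the weight table of shape [num_embeddings, embedding_dim].
n_params = sum(p.numel() for p in emb.parameters())
print(n_params)  # 23040000

# float32 uses 4 bytes per parameter.
size_bytes = n_params * 4
print(size_bytes / 2**20)  # 87.890625 (MiB)
```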

What is the difference between Embedding and Linear?

Embedding is a lookup table that takes integer indices as input. Linear performs matrix multiplication on float inputs. Mathematically, Embedding is equivalent to one-hot encoding followed by a Linear layer, but much more memory-efficient.
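The equivalence can be checked numerically; a minimal sketch with a toy vocabulary (sizes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, embed_dim = 10, 4
emb = nn.Embedding(vocab_size, embed_dim)

tokens = torch.tensor([1, 3, 7])

# Lookup path: index rows of the weight table directly.
lookup = emb(tokens)

# Matmul path: one-hot encode, then multiply by the same weight matrix.
one_hot = F.one_hot(tokens, num_classes=vocab_size).float()
matmul = one_hot @ emb.weight

assert torch.allclose(lookup, matmul)
```

The lookup path never materializes the [seq_len, vocab_size] one-hot matrix, which is why Embedding is far more memory-efficient for large vocabularies.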

About This Tool

This tool is part of HeyTensor, a free suite of PyTorch and deep learning utilities. All calculations run entirely in your browser — no data is sent to any server. The source code is open on GitHub.

Contact

HeyTensor is built and maintained by Michael Lip. For questions or feedback, email [email protected].
