How to Fix "Expected Scalar Type Float" in PyTorch
Your input tensor's dtype does not match what the model expects — typically the model's weights are float32 (Float) but the input is float64 (Double). Fix: x = x.float() or x = x.to(torch.float32) before passing the tensor to the model.
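To see the error end to end, here is a minimal reproduction; the model, shapes, and variable names are illustrative, not from any particular codebase:

```python
import numpy as np
import torch
import torch.nn as nn

model = nn.Linear(3, 1)                           # weights are float32 by default
x = torch.from_numpy(np.array([1.0, 2.0, 3.0]))   # float64 (Double)

try:
    model(x)                                      # dtype mismatch raises RuntimeError
except RuntimeError as e:
    print("error:", e)

y = model(x.float())                              # convert the input to float32 first
print(y.dtype)                                    # torch.float32
```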
Common Causes and Fixes
1. NumPy float64 to PyTorch (most common)
import numpy as np
import torch
data = np.array([1.0, 2.0, 3.0]) # float64 by default
tensor = torch.from_numpy(data) # torch.float64 (Double)
# model(tensor) # ERROR: expected Float, got Double
# Fix:
tensor = torch.from_numpy(data).float() # torch.float32 ✓
# or
tensor = torch.tensor(data, dtype=torch.float32) # ✓
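Alternatively, the conversion can be avoided entirely by creating the NumPy array as float32 at the source; torch.from_numpy then yields a float32 tensor directly (and keeps sharing the array's memory):

```python
import numpy as np
import torch

data = np.array([1.0, 2.0, 3.0], dtype=np.float32)  # float32 from the start
tensor = torch.from_numpy(data)                      # torch.float32, no extra copy
print(tensor.dtype)                                  # torch.float32
```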
2. Image loaded as uint8
# Raw image data is 0-255 uint8
image = torch.tensor([[[128, 255], [0, 64]]], dtype=torch.uint8) # torch.uint8
# model(image) # ERROR: expected Float, got Byte
# Fix: convert and normalize
image = image.float() / 255.0 # torch.float32, range [0, 1] ✓
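Putting the conversion together, one way to sanity-check the result is to assert both the dtype and the value range after normalizing (the image values here are made up):

```python
import torch

image = torch.tensor([[[128, 255], [0, 64]]], dtype=torch.uint8)
image = image.float() / 255.0   # float32, values scaled into [0, 1]

print(image.dtype)              # torch.float32
print(image.min().item(), image.max().item())
```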
3. Mixing float32 model with float64 input
# Models default to float32
import torch.nn as nn
model = nn.Linear(10, 5) # float32 weights
x = torch.randn(32, 10, dtype=torch.float64) # float64 input
# model(x) # ERROR
# Fix: match the model's dtype
x = x.float() # convert to float32 ✓
# or convert the model to float64 (not recommended: doubles memory use and is slower, especially on GPUs):
# model = model.double()
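Rather than hard-coding .float(), the input can be matched to whatever dtype the model's parameters actually use — a sketch that also covers models that were deliberately converted to another dtype:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 5)
x = torch.randn(32, 10, dtype=torch.float64)

# Convert the input to the dtype of the model's first parameter
x = x.to(next(model.parameters()).dtype)
out = model(x)
print(out.dtype)   # torch.float32
```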
Check Dtype
print(x.dtype) # torch.float64, torch.uint8, etc.
print(next(model.parameters()).dtype) # torch.float32 (typical)
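For debugging, the two checks above can be bundled into a small helper; report_dtypes is a hypothetical name, not a PyTorch API:

```python
import torch
import torch.nn as nn

def report_dtypes(model, *tensors):
    """Hypothetical debugging helper: print parameter and input dtypes side by side."""
    print("model params:", {p.dtype for p in model.parameters()})
    for i, t in enumerate(tensors):
        print(f"input {i}:", t.dtype)

model = nn.Linear(4, 2)
report_dtypes(model, torch.randn(1, 4), torch.randn(1, 4, dtype=torch.float64))
```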
Quick Reference
x.float() # -> torch.float32 (most common)
x.double() # -> torch.float64
x.half() # -> torch.float16
x.int() # -> torch.int32
x.long() # -> torch.int64 (for labels/indices)
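Each conversion in the table can be verified in a few lines (note that .int() and .long() truncate fractional values):

```python
import torch

x = torch.tensor([1.5, 2.5])            # float32 by default
print(x.float().dtype)                  # torch.float32
print(x.double().dtype)                 # torch.float64
print(x.half().dtype)                   # torch.float16
print(x.int().dtype)                    # torch.int32 (fractions truncated)
print(x.long().dtype)                   # torch.int64
```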