RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [1, 784]


I am getting this error and I can’t understand why.

I also tried an input in the shape [batch_size, channels, height, width], as suggested by @ptrblck in another topic, but it raises another RuntimeError: shape ‘[256, -1, 28, 28]’ is invalid for input of size 784.

As you correctly said, nn.Conv2d expects a 3D (unbatched) tensor in the shape [channels, height, width] or a 4D (batched) tensor in the shape [batch_size, channels, height, width], while you are flattening the input to a 2D tensor.
How did you reshape the tensor to 4D, and how did you end up with the negative dimension in the new shape?
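
For reference, here is a minimal sketch of that reshape (assuming the flattened tensor holds a single 1x28x28 grayscale image, which matches the 784 elements in the error message), and of why the [256, -1, 28, 28] shape cannot work:

import torch

# a single flattened 28x28 grayscale image, e.g. from x.view(x.size(0), -1)
flat = torch.randn(1, 784)

# 784 = 1 * 28 * 28, so the tensor can be viewed as a 4D batched input again
x = flat.view(1, 1, 28, 28)  # [batch_size, channels, height, width]
print(x.shape)
# torch.Size([1, 1, 28, 28])

# flat.view(256, -1, 28, 28) fails: a batch of 256 images of 28x28 would need
# at least 256 * 28 * 28 = 200704 elements, but only 784 are available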

Thanks for your reply.
Actually, I am a beginner in PyTorch and I’m exploring it through several tutorials and the documentation. I am confused by your questions about ‘reshaping the tensor to 4D’ and the ‘negative shape’.
Also, -1 and 1 both produce the same result here (I don’t know why!).
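
(For context, a minimal sketch of what -1 means in view/reshape, assuming it is used on the same 784-element tensor: -1 asks PyTorch to infer that dimension from the total number of elements, so when the inferred value would be 1 anyway, -1 and 1 give the same result.)

import torch

flat = torch.randn(1, 784)

a = flat.view(-1, 1, 28, 28)  # -1 is inferred as 784 / (1 * 28 * 28) = 1
b = flat.view(1, 1, 28, 28)   # explicit batch size of 1

print(a.shape, b.shape)
# torch.Size([1, 1, 28, 28]) torch.Size([1, 1, 28, 28])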

I’ve attached some snippets showing how I got to this point!

import glob
from PIL import Image
from torchvision import transforms

transform = test_transform = transforms.Compose([
        transforms.Resize(28),
        transforms.ToTensor(),
        transforms.Normalize(mean, std)
])

i = 2
test_images = glob.glob(TEST_PATH + "/*")
img = Image.open(test_images[i]).convert('L')  # load the test image as grayscale
img = transform(img)  # 3D tensor in the shape [channels, height, width]

The original error was raised because of the flattening. However, I don’t know what kind of code changes you’ve applied to try to fix it, as you haven’t posted them.
Here is an example of a working approach using 3D and 4D input tensors:

import torch
import torch.nn as nn
from torchvision import transforms

transform = test_transform = transforms.Compose([
        transforms.Resize(28),
        transforms.ToTensor(),
        transforms.Normalize([0.,], [1.])
])

# create a random grayscale PIL image as a stand-in for a real sample
img = transforms.ToPILImage()(torch.randint(0, 256, (1, 224, 224), dtype=torch.uint8))
x = transform(img)
print(x.shape)
# torch.Size([1, 28, 28])

model = nn.Conv2d(1, 6, 3)  # 1 input channel, 6 output channels, 3x3 kernel

# unbatched
out = model(x)
print(out.shape)
# torch.Size([6, 26, 26])

# batched
x = x.unsqueeze(0)
print(x.shape)
# torch.Size([1, 1, 28, 28])
out = model(x)
print(out.shape)
# torch.Size([1, 6, 26, 26])
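
As a side note, the spatial size drops from 28 to 26 because nn.Conv2d(1, 6, 3) uses a 3x3 kernel with the default stride of 1 and no padding:

# output size = floor((height + 2*padding - kernel_size) / stride) + 1
#             = (28 + 2*0 - 3) / 1 + 1
#             = 26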