Weight Dimension Error

Hello all, can anyone help me figure out where I'm going wrong here?

```python
for epoch in range(50):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        inputs = inputs.cuda()
        labels = torch.tensor(labels)
        labels = labels.cuda()

        # zero the parameter gradients
        # forward + backward + optimize
        outputs = net(inputs)
```

```
File "E:/usc/599/project_notes/Embedding-cifar10.py", line 100, in forward
    x = self.pool(F.relu(self.conv1(x)))
File "F:\conda\envs\gpu\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
File "F:\conda\envs\gpu\lib\site-packages\torch\nn\modules\conv.py", line 320, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [6, 3, 5, 5], but got 5-dimensional input of size [1, 4, 3, 32, 32] instead
```

Your DataLoader already provides batched tensors, so you usually don't need to unsqueeze the data.
It looks like you have a batch of 4 color images. If that's the case, just remove the unsqueeze call and run your code again.
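A minimal sketch reproducing the mismatch (the shapes here are assumed from your error message, not taken from your actual model): a DataLoader batch of CIFAR-10 images is already 4-dimensional, `[batch_size, channels, height, width]`, which is exactly what `nn.Conv2d` expects. Adding an `unsqueeze(0)` on top of that creates the 5-dimensional tensor in your traceback:

```python
import torch
import torch.nn as nn

# A batch of 4 CIFAR-10 color images, as yielded by the DataLoader:
inputs = torch.randn(4, 3, 32, 32)  # [batch, channels, height, width]

# Conv layer matching the weight shape [6, 3, 5, 5] from the error message
conv = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5)

out = conv(inputs)          # works: 4-dimensional input
print(out.shape)            # torch.Size([4, 6, 28, 28])

# An extra unsqueeze adds a fifth dimension and raises the RuntimeError:
bad = inputs.unsqueeze(0)
print(bad.shape)            # torch.Size([1, 4, 3, 32, 32])
# conv(bad)  # -> "Expected 4-dimensional input for 4-dimensional weight ..."
```

So the fix is simply to pass the batch through the model as the DataLoader delivers it.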
