RuntimeError: Expected 4-dimensional input for 4-dimensional weight [6, 3, 3, 3], but got 1-dimensional input of size [10] instead

Hello, I am working on training a CNN model on the Fashion-MNIST dataset, but I keep getting the above error after the model trains for one epoch. Here is my CNN model architecture:

class ConvolutionalNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 3, 1)   # 3 input channels -> 6 feature maps, 3x3 kernel
        self.conv2 = nn.Conv2d(6, 16, 3, 1)
        self.fc1 = nn.Linear(5*5*16, 120)    # 16 maps of 5x5 after the two conv/pool stages
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 20)
        self.fc4 = nn.Linear(20, 10)

    def forward(self, X):
        X = f.relu(self.conv1(X))
        X = f.max_pool2d(X, 2, 2)
        X = f.relu(self.conv2(X))
        X = f.max_pool2d(X, 2, 2)
        X = X.view(-1, 5*5*16)               # flatten for the fully connected layers
        X = f.relu(self.fc1(X))
        X = f.relu(self.fc2(X))
        X = f.relu(self.fc3(X))
        X = self.fc4(X)
        return f.log_softmax(X, dim=1)
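
For reference, the 5*5*16 in fc1 comes from the conv/pool output shapes; a quick dummy-batch check (assuming 3-channel 28x28 inputs, which is what conv1 expects) confirms it:

import torch
import torch.nn as nn
import torch.nn.functional as f

x = torch.randn(10, 3, 28, 28)  # (batch, channels, height, width)
x = f.max_pool2d(f.relu(nn.Conv2d(3, 6, 3, 1)(x)), 2, 2)   # -> (10, 6, 13, 13)
x = f.max_pool2d(f.relu(nn.Conv2d(6, 16, 3, 1)(x)), 2, 2)  # -> (10, 16, 5, 5)
print(x.shape)  # torch.Size([10, 16, 5, 5]); 16*5*5 = 400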

Here is my batch training code:
import time

start_time = time.time()
epochs = 10
train_losses = []
test_losses = []
train_correct = []
test_correct = []

for i in range(epochs):
    trn_corr = 0
    tst_corr = 0

    #Run the training batches
    for b, (X_train, y_train) in enumerate(train_loader):
        b += 1

        y_pred = model(X_train)
        loss = criterion(y_pred, y_train)
        predicted = torch.max(y_pred, 1)[1]  # index of the max log-probability = predicted class
        batch_corr = (predicted == y_train).sum()
        trn_corr += batch_corr

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        #Print interim results
        if b % 600 == 0:
            print(f'epoch: {i:2}  batch: {b:4} [{10*b:6}/60000]  loss: {loss.item():10.8f}  accuracy: {trn_corr.item()*100/(10*b):7.3f}%')

    loss = loss.detach().numpy()
    train_losses.append(loss)
    train_correct.append(trn_corr)

    #Run testing batches
    with torch.no_grad():
        for b, (X_test, y_test) in enumerate(test_loader):
            b += 1
            y_val = model(y_test)
            loss = criterion(y_val, y_test)
            predicted = torch.max(y_val, 1)[1]
            batch_corr = (predicted == y_val).sum()
            tst_corr += batch_corr

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        loss = loss.detach().numpy()
        test_losses.append(loss)
        test_correct.append(tst_corr)

print(f"time_duration: {time.time() - start_time}")

Here is the error that is shown while training the model:


RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
     38         for b, (X_test,y_test) in enumerate(test_loader):
     39             b+=1
---> 40             y_val = model(y_test)
     41             loss = criterion(y_val,y_test)
     42             predicted = torch.max(y_val, 1)[1]

~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

<ipython-input> in forward(self, X)
      9         self.fc4 = nn.Linear(20,10)
     10     def forward(self, X):
---> 11         X = f.relu(self.conv1(X))
     12         X = f.max_pool2d(X,2,2)
     13         X = f.relu(self.conv2(X))

~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

~\Anaconda3\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
    397
    398     def forward(self, input: Tensor) -> Tensor:
--> 399         return self._conv_forward(input, self.weight, self.bias)
    400
    401 class Conv3d(_ConvNd):

~\Anaconda3\lib\site-packages\torch\nn\modules\conv.py in _conv_forward(self, input, weight, bias)
    394                             _pair(0), self.dilation, self.groups)
    395         return F.conv2d(input, weight, bias, self.stride,
--> 396                         self.padding, self.dilation, self.groups)
    397
    398     def forward(self, input: Tensor) -> Tensor:

RuntimeError: Expected 4-dimensional input for 4-dimensional weight [6, 3, 3, 3], but got 1-dimensional input of size [10] instead

Please tell me how I can fix this error. Thanks!

This looks like you are forward-passing the labels instead of the images during the test run.
I am guessing this should be y_val = model(X_test).
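
Something like this, i.e. a minimal sketch assuming your test_loader yields (images, labels) pairs just like train_loader:

with torch.no_grad():
    for b, (X_test, y_test) in enumerate(test_loader):
        y_val = model(X_test)            # forward-pass the images
        loss = criterion(y_val, y_test)  # the labels are only used for the loss and accuracy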

Thanks, I just fixed it, but now after one epoch I am getting the error “RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn”. Could you please tell me what the problem is now? Thanks!

Oh, I didn’t notice this before, but there are a few more mistakes in the #Run testing batches part of your code.
So, here:

You are doing testing, not training like above. You are already correctly using with torch.no_grad():, which turns off gradient calculation for all tensors; using it during testing is correct.
But you are also doing backpropagation and optimizer steps.

This is normally not done during testing, since your network is not supposed to learn from the test data.
The error comes from loss.backward() requiring gradient calculation to be turned on, which you turned off with with torch.no_grad():.
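
You can reproduce the same error with a tiny standalone example:

import torch

x = torch.ones(3, requires_grad=True)
with torch.no_grad():
    loss = (x * 2).sum()  # computed while grad tracking is off, so it has no grad_fn
loss.backward()           # RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn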
Remove these three lines

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

while keeping the with torch.no_grad():, and it should work.
(Only for what is under #Run testing batches! Under #Run the training batches you of course keep those lines.)
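
Putting it together, the testing section would look roughly like this (a sketch, using the same model, criterion, and test_loader as above; note the accuracy comparison is against the labels y_test):

    #Run testing batches
    with torch.no_grad():
        for b, (X_test, y_test) in enumerate(test_loader):
            b += 1
            y_val = model(X_test)                  # forward-pass the images, not the labels
            loss = criterion(y_val, y_test)
            predicted = torch.max(y_val, 1)[1]     # predicted class per sample
            tst_corr += (predicted == y_test).sum()

        test_losses.append(loss.detach().numpy())  # safe here: no grad is tracked under no_grad()
        test_correct.append(tst_corr)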

Thank you so much :slight_smile: You saved my day!!! ;-D

If there is any other error in my convolutional neural network model or in my training batches code, kindly let me know. Thanks!