Shape '[32, 150528]' is invalid for input of size 1492992

Using GPU: True

Epoch 1/10
----------
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-13-cd4c03781ec7> in <module>()
      8 exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1) #decay
      9                                                             # 0.1 every 5 epochs
---> 10 model_ft = train_model(net, criterion, optimizer, exp_lr_scheduler, num_epochs=10)

2 frames
<ipython-input-12-e791953bf1c6> in forward(self, x)
     17         # If the size is a square you can only specify a single number
     18         x = F.max_pool2d(F.relu(self.conv2(x)), 2)
---> 19         x = x.view(x.size(0), 3 * 224 * 224)
     20         x = F.relu(self.fc1(x))
     21         x = F.relu(self.fc2(x))

I have this problem but I don't know how to solve it. Could someone help me, please?

This is my Net:

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 3x3 square convolution
        # kernel
        self.conv1 = nn.Conv2d(3, 6, 3)
        self.conv2 = nn.Conv2d(6, 16, 3)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(3 * 224 * 224, 144)  # 6*6 from image dimension
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(x.size(0), 3 * 224 * 224)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

I expect your problem is the view call x = x.view(x.size(0), 3 * 224 * 224), which is what raises the error. Rather than hard-coding the shape, it might be better to use
x.view(x.shape[0], -1)

Another problem might be in self.fc1 and self.fc2.
As we know, the signature is torch.nn.Linear(in_features, out_features), so you need to change:

fc1: its in_features should match the flattened shape that self.conv2 returns; in my case, with a 224x224 input, that is 46656.
Also, the out_features of fc1 should match the in_features of fc2, as in this example:

self.fc1 = nn.Linear(46656, 144)  # 16 * 54 * 54 from the conv output
self.fc2 = nn.Linear(144, 84)

Hope it solves the problem.
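If you don't want to work the number out by hand, a quick way to find the right in_features (just a sketch, reusing the same conv layers as above) is to push a dummy input through the convolutional part and print the flattened shape:

import torch
import torch.nn as nn
import torch.nn.functional as F

conv1 = nn.Conv2d(3, 6, 3)
conv2 = nn.Conv2d(6, 16, 3)

x = torch.randn(1, 3, 224, 224)        # dummy batch with one 224x224 RGB image
x = F.max_pool2d(F.relu(conv1(x)), 2)  # -> [1, 6, 111, 111]
x = F.max_pool2d(F.relu(conv2(x)), 2)  # -> [1, 16, 54, 54]
print(x.view(x.size(0), -1).shape)     # -> torch.Size([1, 46656])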

I followed your suggestions and ran into another problem:

Using GPU: True

Epoch 1/10
----------
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-102-cd4c03781ec7> in <module>()
      8 exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1) #decay
      9                                                             # 0.1 every 5 epochs
---> 10 model_ft = train_model(net, criterion, optimizer, exp_lr_scheduler, num_epochs=10)

5 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
   1672     if input.dim() == 2 and bias is not None:
   1673         # fused op is marginally faster
-> 1674         ret = torch.addmm(bias, input, weight.t())
   1675     else:
   1676         output = input.matmul(weight.t())

RuntimeError: mat1 dim 1 must match mat2 dim 0

Please, could you print your input shape?
I tried it myself; here is the code:

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 3x3 square convolution
        # kernel
        self.conv1 = nn.Conv2d(3, 6, 3)
        self.conv2 = nn.Conv2d(6, 16, 3)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(46656, 144)  # 16 * 54 * 54 from the conv output
        self.fc2 = nn.Linear(144, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(x.shape[0], -1)
        print(x.shape)  # prints the flattened shape
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

It does not throw an error when the input is t = torch.randn([1, 3, 224, 224]). However, if your input size is different, that could be the problem; you should use transforms in the DataLoader pipeline to resize the images to a size the model can handle.
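For example (just a sketch assuming a torchvision ImageFolder dataset; the folder path is a placeholder), you can force every image to 224x224 in the transform pipeline:

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # resize every image to 224x224
    transforms.ToTensor(),
])

# "path/to/train" is a placeholder; point it at your own data
train_set = datasets.ImageFolder("path/to/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)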

My input shape is torch.Size([128, 16, 54, 54]).

I don't think so; that is not your input size.
Use print(x.shape) right at the top of the forward function.

Sorry, you're right! I printed it in the wrong place, before the reshape.

 def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        #print(x.shape)
        x = x.view(x.shape[0], -1)
        print(x.shape)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

The print result is torch.Size([128, 46656]).


How did you find in_features = 46656?

self.fc1 = nn.Linear(46656, 144)  # 16 * 54 * 54 from the conv output

You can get it by just printing the reshaped tensor's shape. torch.Size([128, 46656]) is the flattened shape, where
128 is the batch size, and
46656 is the number of features that has to be passed as the Linear layer's in_features.

This only holds when the input has shape [batch, channels, h, w], i.e. [128, 3, 224, 224]. If you want a different input size, print the reshaped shape again and use that value instead.
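For reference, the number can also be worked out by hand from the layers above (assuming a 224x224 input, 3x3 convolutions with no padding, and 2x2 max pooling):

# conv1: 224 - 2 = 222, pool: 222 // 2 = 111
# conv2: 111 - 2 = 109, pool: 109 // 2 = 54
# conv2 outputs 16 channels, so the flattened size is:
print(16 * 54 * 54)  # 46656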

I got it. My problem is gone. Thank you so much for your help, @mathematics!
