RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x524288 and 32768x2048)

Please see the code below.

import torch
import torch.nn as nn
from collections import OrderedDict

class residualBlock(nn.Module):
    def __init__(self, in_channels=64, k=3, n=64, s=1):
        super(residualBlock, self).__init__()
        m = OrderedDict()
        m['conv1'] = nn.Conv2d(in_channels, n, k, stride=s, padding=1)
        m['bn1'] = nn.BatchNorm2d(n)
        m['ReLU1'] = nn.ReLU(inplace=True)
        m['conv2'] = nn.Conv2d(n, n, k, stride=s, padding=1)
        m['bn2'] = nn.BatchNorm2d(n)
        self.group1 = nn.Sequential(m)
        self.relu = nn.Sequential(nn.ReLU(inplace=True))

    def forward(self, x):
        out = self.group1(x) + x
        out = self.relu(out)
        return out

class Generator(nn.Module):
    def __init__(self, n_residual_blocks):
        super(Generator, self).__init__()
        self.n_residual_blocks = n_residual_blocks
        #self.upsample_factor = upsample_factor

        self.conv1 = nn.Conv2d(6, 64, 9, stride=1, padding=4)  #9 4
        self.bn1 = nn.BatchNorm2d(64)
        self.relu1 = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(64, 64, 3, stride=1, padding=1)
        self.bn2 = nn.BatchNorm2d(64)
        self.relu2 = nn.ReLU(inplace=True)
        self.conv3 = nn.Conv2d(64, 64, 3, stride=1, padding=1)
        self.bn3 = nn.BatchNorm2d(64)
        self.relu3 = nn.ReLU(inplace=True)

        for i in range(self.n_residual_blocks):
            self.add_module('residual_block' + str(i + 1), residualBlock())

        self.conv4 = nn.Conv2d(64, 32, 3, stride=1, padding=1)
        self.bn4 = nn.BatchNorm2d(32)
        self.relu4 = nn.ReLU(inplace=True)
        self.conv5 = nn.Conv2d(32, 32, 3, stride=1, padding=1)
        self.bn5 = nn.BatchNorm2d(32)
        self.relu5 = nn.ReLU(inplace=True)
        self.conv6 = nn.Conv2d(32, 32, 3, stride=1, padding=1)  #64,2,3 for pair
        self.bn6 = nn.BatchNorm2d(32)
        self.relu6 = nn.ReLU(inplace=True)

        self.fc = nn.Linear(32*32*32, 2*32*32)   # 32768, 2048
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = swish(self.relu1(self.bn1(self.conv1(x))))
        print("1= ", x.size())
        x = swish(self.relu2(self.bn2(self.conv2(x))))
        print("2= ", x.size())
        x = swish(self.relu3(self.bn3(self.conv3(x))))
        print("3= ", x.size())
        y = x.clone()
        for i in range(self.n_residual_blocks):
            y = self.__getattr__('residual_block' + str(i + 1))(y)
        x = swish(self.relu4(self.bn4(self.conv4(y))))  # + x
        print("4= ", x.size())
        x = swish(self.relu5(self.bn5(self.conv5(x))))
        print("5= ", x.size())
        output_map = swish(self.relu6(self.bn6(self.conv6(x))))
        print("output_map= ", output_map.size())
        flattened_output = output_map.view(output_map.size(0), -1)  # Flatten the output

        fc_output = self.fc(flattened_output)
        print("fc_output= ", fc_output.size())
        fc_output = self.sigmoid(fc_output)  # Apply sigmoid activation
        fc_output = fc_output.view(x.size(0), 2, 128*128)  # Reshape output to (batch_size, 2, 128, 128)
        return fc_output
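A minimal sketch of how the model is called and hits the error (swish is assumed to be the standard x * sigmoid(x), since its definition is not in the snippet, and the number of residual blocks is arbitrary):

def swish(x):                            # assumed definition, not shown in the snippet above
    return x * torch.sigmoid(x)

gen = Generator(n_residual_blocks=5)     # 5 is an arbitrary choice
x = torch.randn(1, 6, 128, 128)          # two 3-channel 128x128 images concatenated along the channel dim
out = gen(x)                             # raises the RuntimeError quoted at the top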

I am getting the error shown at the top of this post.

My input image size is 128*128. There are two input images and, correspondingly, two output images of the same size.

Please tell me where I am going wrong.

The shape mismatch is caused by self.fc, which expects 32*32*32 = 32768 input features, while your input activation has 524288 features. Set the in_features value of self.fc to 524288 and it should work.
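In other words, something along these lines (leaving the output features unchanged):

# output_map is [batch, 32, 128, 128] -> 32 * 128 * 128 = 524288 features after flattening
self.fc = nn.Linear(32 * 128 * 128, 2 * 32 * 32)   # in_features = 524288, out_features = 2048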

But after setting this value, I got another error:

fc_output = fc_output.view(x.size(0), 2, 128*128)  # Reshape output to (batch_size, 2, 128, 128)
RuntimeError: shape '[1, 2, 16384]' is invalid for input of size 2048

The printed values are:

1= torch.Size([1, 64, 128, 128])
2= torch.Size([1, 64, 128, 128])
3= torch.Size([1, 64, 128, 128])
4= torch.Size([1, 32, 128, 128])
5= torch.Size([1, 32, 128, 128])
output_map= torch.Size([1, 32, 128, 128])
fc_output= torch.Size([1, 2048])
outputs shape= torch.Size([1, 2048])

The self.fc layer outputs 2*32*32 = 2048 features, while you are trying to reshape the feature dimension of the output to [batch_size, 2, 128*128], which is invalid since that view expects 32768 features.
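For that reshape to be valid, the linear layer would also have to output 2*128*128 features, e.g. (sketch):

# one consistent combination:
self.fc = nn.Linear(32 * 128 * 128, 2 * 128 * 128)           # 524288 in, 32768 out
# ... and in forward():
fc_output = self.sigmoid(self.fc(flattened_output))
fc_output = fc_output.view(fc_output.size(0), 2, 128, 128)   # 2 * 128 * 128 = 32768 elements per sample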

I have now changed the shape to 2 * 128 * 128.
Now it does not give an error related to any shape mismatch, but an error related to memory.

RuntimeError: [enforce fail at alloc_cpu.cpp:73] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 68719476736 bytes. Error code 12 (Cannot allocate memory)

I will try to solve this on my end. If there is any problem, I will get back to you.

If you have any ideas for this, please share.
Thank you so much for your help.

I don't know which layer causes the issue, but note that fixing both shape mismatch errors by increasing the feature dimension will create a weight matrix of 524288 * 32768, which uses 16GB, so you might want to check if you can reduce the features instead.
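As a rough illustration of what reducing the features could look like (just a sketch; the pooled size of 16x16 is an arbitrary choice, not a recommendation for this model):

# pool the spatial size down before flattening, so the linear layer stays smaller
self.pool = nn.AdaptiveAvgPool2d(16)                  # [B, 32, 128, 128] -> [B, 32, 16, 16]
self.fc = nn.Linear(32 * 16 * 16, 2 * 128 * 128)      # 8192 -> 32768: ~268M weights, ~1GB in float32
# in forward():
flattened_output = self.pool(output_map).view(output_map.size(0), -1)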

Actually, my GPU has only 8GB. I have already reduced the number of channels from 64 to 32 in the 4th conv layer, but if I reduce it further, it will impact the performance of my network.
In self.fc = nn.Linear(524288, 32768),
the first argument is the number of feature maps from the previous layer * the spatial dimensions of the image (height, width).
The second argument is the total number of output feature maps * the spatial dimensions of the image,

i.e. we get 32 * 128 * 128 = 524288 and 2 * 128 * 128 = 32768.

Am I right?

The second argument is just the number of features in the output. You are then reinterpreting it as the number of feature maps * the spatial dimensions of these features, but the linear layer doesn't have any knowledge of this representation.
Also, in my previous post I forgot to multiply by the number of bytes per element (4 for float32), which then yields the 64GB raised in your error message.
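For reference, the quick arithmetic:

in_features, out_features = 32 * 128 * 128, 2 * 128 * 128    # 524288, 32768
weight_bytes = in_features * out_features * 4                # 4 bytes per float32 weight
print(weight_bytes)                                          # 68719476736, i.e. the ~64GB from the error message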

Before multiplying the output feature maps by the spatial dimensions, I had written only 2 as the second argument, but at that time I was getting the error below:

RuntimeError: The size of tensor a (2) must match the size of tensor b (128) at non-singleton dimension 3

If you change the output shape of your model, you will most likely need to change other parts of the code as well, e.g. the target tensor shape. Check which line of code raises the error and try to understand what is causing the shape mismatch now.
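A quick way to locate the mismatch is to print both shapes right before the loss is computed:

print("outputs:", outputs.shape, "labels:", labels.shape)
loss = criterion(outputs, labels)   # both shapes have to line up (or be broadcastable, depending on the loss)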

"The size of tensor a (2) must match the size of tensor b (128) at non-singleton dimension 3"

This error is raised in the training file, where the loss is calculated between the output feature map and the ground-truth labels:
criterion(outputs, labels)

Here the input tensor is of shape 128 * 128 * 6 (6 channels because two images are concatenated into one tensor).
The output (binary map) is two images of size 128 * 128 corresponding to the two input images.

output shape = torch.Size([1, 2])
target labels have size = torch.Size([1, 32, 128, 128])

I am very confused by all these shapes and errors.