Runtime error (Output size is too small) after upgrading to new PyTorch version

I am getting "RuntimeError: Given input size: (512, 1, 1). Calculated output size: (324, 1, -510). Output size is too small." for an input of torch.Size([324, 512, 1, 1]) after applying nn.Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False). It was working perfectly fine on the previous version of PyTorch.
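For reference, a minimal sketch of just the failing call, assuming a random tensor can stand in for the activations that reach that layer (on 0.2-era PyTorch, inputs to nn modules still need to be wrapped in Variable):

import torch
import torch.nn as nn
from torch.autograd import Variable

# Stand-in for the activations reaching the 1x1 conv: batch 324, 512 channels, 1x1 spatial size
x = Variable(torch.randn(324, 512, 1, 1))

conv = nn.Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
out = conv(x)        # raises "Output size is too small" on the affected install
print(out.size())    # expected torch.Size([324, 512, 1, 1])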

My model is:

class Net(nn.Module):

  def __init__(self):

    super(Net, self).__init__()

    self.relu = nn.ReLU(True)
    self.activ = Active()
    self.drop = nn.Dropout2d(0.2)

    self.conv1 = nn.Conv2d(1, 64, kernel_size = 15, stride = 3, padding = 0, bias = True)
    self.bn1 = nn.BatchNorm2d(64)
    self.maxpool1 = nn.MaxPool2d(kernel_size = 3, stride = 2)

    self.conv2 = nn.Conv2d(64, 128, kernel_size = 5, stride = 1, padding = 0, bias = False)
    self.bn2 = nn.BatchNorm2d(64)
    self.maxpool2 = nn.MaxPool2d(kernel_size = 3, stride = 2)


    self.conv3 = nn.Conv2d(128, 256, kernel_size = 3, stride = 1, padding= 1, bias = False)
    self.bn3 = nn.BatchNorm2d(128)

    # self.conv4 = nn.Conv2d(96, 192, 1, 1)
    self.conv4 = nn.Conv2d(256, 256, kernel_size = 3, stride = 1, padding = 1, bias = False)
    self.bn4 = nn.BatchNorm2d(256)

    self.conv5 = nn.Conv2d(256, 256, kernel_size = 3, stride = 1, padding = 1, bias = False)
    self.bn5 = nn.BatchNorm2d(256)

    self.maxpool3 = nn.MaxPool2d(kernel_size = 3, stride = 2)

    self.conv6 = nn.Conv2d(256, 512, kernel_size = 7, stride = 1, padding = 0, bias = False)
    self.bn6 = nn.BatchNorm2d(512)

    self.conv7 = nn.Conv2d(512, 512, kernel_size = 1, stride = 1, padding = 0, bias = False)
    self.bn7 = nn.BatchNorm2d(512)

    self.conv8 = nn.Conv2d(512, 250, kernel_size = 1, stride = 1,  padding = 0, bias = True)


def forward(self, x):

    print(x.size())
    x = self.relu(self.bn1(self.conv1(x)))
    x = self.maxpool1(x)
    print(x.size())
    x = self.relu((self.conv2(self.activ((self.bn2(x))))))
    x = self.maxpool2(x)
    print(x.size())
    x = self.relu((self.conv3(self.activ(self.bn3(x)))))
    x = self.relu((self.conv4(self.activ(self.bn4(x)))))
    print(x.size())
    x = self.relu((self.conv5(self.activ(self.bn5(x)))))
    x = self.maxpool3(x)
    x = self.relu(self.bn6(self.conv6(x)))
    x = self.drop(x)
    print(x.size())
    x = self.conv7(x)
    print(x.size())
    x = self.relu(self.bn7(x))
    x = self.drop(x)
    print(x.size())
    x = self.conv8(x)
    print(x.size())
    x = x.view(-1, 250)
    print(x.size())
    return F.log_softmax(x)

and this is the printed output followed by the error:
torch.Size([324, 1, 225, 225])
torch.Size([324, 64, 35, 35])
torch.Size([324, 128, 15, 15])
torch.Size([324, 256, 15, 15])
torch.Size([324, 512, 1, 1])
Traceback (most recent call last):
  File "main.py", line 92, in <module>
    main()
  File "main.py", line 73, in main
    trainer.train(train_loader, epoch, opt)
  File "/home/rohit/Documents/cvitwork/WACV17/code/train.py", line 77, in train
    outputs = self.model(inputs)
  File "/home/rohit/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/rohit/Documents/cvitwork/WACV17/code/models/binnet.py", line 62, in forward
    x = self.conv7(x)
  File "/home/rohit/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/rohit/.local/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 254, in forward
    self.padding, self.dilation, self.groups)
  File "/home/rohit/.local/lib/python3.5/site-packages/torch/nn/functional.py", line 52, in conv2d
    return f(input, weight, bias)
RuntimeError: Given input size: (512, 1, 1). Calculated output size: (324, 1, -510). Output size is too small.

If I can’t fix this, how do I downgrade to my previous version which was 0.1.12.post2 using pip?

I couldn't reproduce this. I just installed the latest version from master (which should be virtually equivalent to v0.2.0 as far as convolution is concerned). I did have to fix an indentation problem with your function though (you need to indent forward so that it belongs to Net).

I'm still unable to fix this for some reason. I have PyTorch v0.1.12 on Python 2 and it works there, but with v0.2 on Python 3 it doesn't. How do I downgrade to my previous version, which was 0.1.12.post2, using pip? (Yeah, the indent was a mistake when copying :P)

You can download the previous conda tar from https://anaconda.org/soumith/repo and install it with conda by passing the downloaded file.
Also, I'd look into broadcasting; it might be one of the things affecting your results (not in your model definition, which I could run, but maybe somewhere else in your code).
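For anyone unsure what to look for, a minimal sketch of the kind of broadcasting change v0.2 introduced (mirroring the example in the 0.2 release notes; it is not something in this model, just the class of silent behaviour change to check for elsewhere in the code):

import torch

a = torch.ones(4, 1)
b = torch.randn(4)

# 0.1.12 carried out pointwise ops on tensors with equal element counts as if both
# were 1-dimensional, so a + b kept shape (4, 1).
# 0.2 follows NumPy-style broadcasting, so the same line now yields shape (4, 4).
print((a + b).size())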


You can also find old pip package URLs by going through the history of this file: https://github.com/pytorch/pytorch.github.io/blob/master/_data/wizard.yml

I'd check the package in your current site-packages before you toast it and see if there is any mixture of versions or old libs hanging around that is causing the problem.
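For example, a quick check (a sketch; run it from the same interpreter you train with) to confirm which install Python is actually importing:

import torch

# Shows which PyTorch build is active and which site-packages directory it was loaded from
print(torch.__version__)
print(torch.__file__)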
