Error executing PyTorch on Colab

I am freezing all layers except the last fc layer of a ResNet as follows:

import torch
import torch.nn as nn
from torch import cuda
from torchvision import models

resNet = models.resnet18(pretrained=True)
for param in resNet.parameters():  # Freeze all layers
    param.requires_grad = False
# Replace the last fc layer; the new layer's parameters require gradients by default
resNet.fc = nn.Linear(in_features=512, out_features=10, bias=True)

if cuda.is_available():
    resNet = resNet.cuda()
criterion = nn.CrossEntropyLoss()
lr = 0.01
#optimizer = torch.optim.RMSprop(resNet.parameters(), lr=lr)
optimizer = torch.optim.SGD(resNet.parameters(),lr=lr,momentum=0.9)

This executes fine on my laptop. However, the same code throws the following error when executed on Google Colab. The PyTorch version on Colab is 0.4.0. Any insights?

ValueError                                Traceback (most recent call last)
<ipython-input-22-c7743f5bef78> in <module>()
     12 lr = 0.01
     13 #optimizer = torch.optim.RMSpr(net.parameters(),lr = lr)
---> 14 optimizer = torch.optim.SGD(resNet.parameters(),lr=lr,momentum=0.9)

/usr/local/lib/python3.6/dist-packages/torch/optim/ in __init__(self, params, lr, momentum, dampening, weight_decay, nesterov)
     62         if nesterov and (momentum <= 0 or dampening != 0):
     63             raise ValueError("Nesterov momentum requires a momentum and zero dampening")
---> 64         super(SGD, self).__init__(params, defaults)
     66     def __setstate__(self, state):

/usr/local/lib/python3.6/dist-packages/torch/optim/ in __init__(self, params, defaults)
     42         for param_group in param_groups:
---> 43             self.add_param_group(param_group)
     45     def __getstate__(self):

/usr/local/lib/python3.6/dist-packages/torch/optim/ in add_param_group(self, param_group)
    191                                 "but one of the params is " + torch.typename(param))
    192             if not param.requires_grad:
--> 193                 raise ValueError("optimizing a parameter that doesn't require gradients")
    194             if not param.is_leaf:
    195                 raise ValueError("can't optimize a non-leaf Tensor")

ValueError: optimizing a parameter that doesn't require gradients


The error occurs because you are giving the optimizer tensors that do not require gradients, so it cannot optimize them.
This has already been answered here, for example.
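On 0.4.0 the usual workaround is to hand the optimizer only the parameters that require gradients. A minimal sketch of that filtering, using a small stand-in model so it runs without downloading pretrained weights (assumption: any nn.Module behaves the same way as the ResNet in the question):

```python
import torch
import torch.nn as nn

# Stand-in for the ResNet in the question: freeze everything, then
# unfreeze only the last layer.
model = nn.Sequential(nn.Linear(8, 4), nn.Linear(4, 2))
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True

# Pass only the trainable parameters, so 0.4.0's optimizer check passes.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01, momentum=0.9)
print(len(trainable))  # weight and bias of the last layer
```

The same list comprehension applied to `resNet.parameters()` (or simply `resNet.fc.parameters()`, since only the fc layer is trainable here) makes the original snippet work on both versions.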

Hello Alban,
However, the same code works on my host machine. Can you please elaborate after looking at my code?

I think this restriction on the optimizer was lifted in the most recent versions. So if you use a more recent version on your host machine, you won’t see this error.

Ya, it is 0.4.1 on my machine while Colab has 0.4.0.

Thanks, it worked.
I am changing the ResNet only in the last fc layer, setting out_features=10, since I am using it on the CIFAR-10 set. But I end up getting

RuntimeError: size mismatch, m1: [128 x 2048], m2: [512 x 1000] at /pytorch/aten/src/THC/generic/

error. Again, this seems to be a version issue, as I can run the code without error on my machine. What could be the reason?
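One possible cause, offered here only as an assumption: the m1 dimension of 2048 matches the feature size of larger ResNet variants (resnet50 and up), while 512 is resnet18's, so a hardcoded `in_features=512` breaks as soon as a different variant is loaded. Reading the size from the existing fc layer sidesteps this; sketched on a tiny stand-in module (the hypothetical `TinyNet` merely mimics a torchvision ResNet's `fc` attribute so no download is needed):

```python
import torch.nn as nn

# Stand-in for a torchvision ResNet: only the final fc layer matters here.
class TinyNet(nn.Module):
    def __init__(self, features):
        super().__init__()
        self.fc = nn.Linear(features, 1000)  # ImageNet-style head

resNet = TinyNet(2048)  # resnet50-style feature size

# Read the input size from the existing layer instead of hardcoding 512,
# so the replacement fits whichever ResNet variant was loaded.
resNet.fc = nn.Linear(resNet.fc.in_features, 10)
print(resNet.fc.in_features, resNet.fc.out_features)  # 2048 10
```

With a real torchvision model the same one-liner would read `resNet.fc = nn.Linear(resNet.fc.in_features, 10)`.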