I am freezing all layers except the last fully connected (fc) layer of a ResNet-18, as follows:

```
import torch
import torch.nn as nn
from torch import cuda
from torchvision import models

resNet = models.resnet18(pretrained=True)
for param in resNet.parameters():  # freeze all layers
    param.requires_grad = False
# Replacing fc creates fresh parameters with requires_grad=True
resNet.fc = nn.Linear(in_features=512, out_features=10, bias=True)
if cuda.is_available():
    resNet = resNet.cuda()
criterion = nn.CrossEntropyLoss()
lr = 0.01
#optimizer = torch.optim.RMSpr(net.parameters(),lr = lr)
optimizer = torch.optim.SGD(resNet.parameters(), lr=lr, momentum=0.9)
```

This runs fine on my laptop, but the same code throws the error below on Google Colab, where the PyTorch version is 0.4.0. Any insights?

```
ValueError Traceback (most recent call last)
<ipython-input-22-c7743f5bef78> in <module>()
12 lr = 0.01
13 #optimizer = torch.optim.RMSpr(net.parameters(),lr = lr)
---> 14 optimizer = torch.optim.SGD(resNet.parameters(),lr=lr,momentum=0.9)
/usr/local/lib/python3.6/dist-packages/torch/optim/sgd.py in __init__(self, params, lr, momentum, dampening, weight_decay, nesterov)
62 if nesterov and (momentum <= 0 or dampening != 0):
63 raise ValueError("Nesterov momentum requires a momentum and zero dampening")
---> 64 super(SGD, self).__init__(params, defaults)
65
66 def __setstate__(self, state):
/usr/local/lib/python3.6/dist-packages/torch/optim/optimizer.py in __init__(self, params, defaults)
41
42 for param_group in param_groups:
---> 43 self.add_param_group(param_group)
44
45 def __getstate__(self):
/usr/local/lib/python3.6/dist-packages/torch/optim/optimizer.py in add_param_group(self, param_group)
191 "but one of the params is " + torch.typename(param))
192 if not param.requires_grad:
--> 193 raise ValueError("optimizing a parameter that doesn't require gradients")
194 if not param.is_leaf:
195 raise ValueError("can't optimize a non-leaf Tensor")
ValueError: optimizing a parameter that doesn't require gradients
```
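From the error message it looks like this version of SGD refuses parameters with `requires_grad=False`, so I suspect the fix is to hand the optimizer only the parameters that still require gradients. A minimal sketch of that idea (the two-layer toy network here is just a stand-in for the ResNet, to keep the example self-contained):

```
import torch
import torch.nn as nn

# Toy model standing in for the ResNet: freeze the first layer only
model = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))
for p in model[0].parameters():
    p.requires_grad = False

# Pass only the still-trainable parameters to the optimizer
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01, momentum=0.9)
```

With the ResNet this would be `torch.optim.SGD([p for p in resNet.parameters() if p.requires_grad], lr=lr, momentum=0.9)`, which should only pick up the new `fc` layer's weight and bias.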