Hi, I'm trying to use MobileNetV3 for transfer learning to classify images into 4 categories. First, I freeze the pretrained weights so the network acts as a feature extractor, and I add a custom layer at the end. However, I get the following error:
Traceback (most recent call last):
  File "trial13MobileNetxray.py", line 431, in <module>
    train(mobilenetv3, args)
  File "trial13MobileNetxray.py", line 230, in train
    loss.backward()  # backward pass (compute parameter updates)
  File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 147, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
My code is as follows:
# Obtain pretrained MobileNet from torchvision models
mobilenetv3 = torchvision.models.mobilenet_v3_large(pretrained=True)

# Freeze the pretrained weights for use as a feature extractor
for param in mobilenetv3.parameters():
    param.requires_grad = False

# add custom layers to prevent overfitting and for finetuning
mobilenetv3.fc = nn.Sequential(nn.Dropout(0.2),
                               nn.BatchNorm1d(1280),  # 320
                               nn.ReLU(),
                               nn.Dropout(0.3),
                               nn.Linear(320, 4),
                               nn.LogSoftmax(dim=1)
                               )
Please help!