Transfer learning with a pretrained convnet

I get an error when I add a new layer, registered as classifier1, on top of the pretrained VGG16 model. Here is the modified code:
model_conv = torchvision.models.vgg16(pretrained=True)
for param in model_conv.parameters():
    param.requires_grad = False

num_ftrs = model_conv.classifier[6].out_features
# added: register the new layers under "classifier1"
model_conv.add_module("classifier1", nn.ReLU(inplace=True))
model_conv.classifier1 = nn.Sequential(model_conv.classifier1, nn.Dropout(0.5), nn.Linear(num_ftrs, 2))
model_conv.classifier.requires_grad = True
model_conv = model_conv.to(device)

criterion = nn.CrossEntropyLoss()
optimizer_conv = optim.SGD(model_conv.classifier1.parameters(), lr=0.001, momentum=0.9)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)

model_conv = train_model(model_conv, criterion, optimizer_conv,
                         exp_lr_scheduler, num_epochs=25)

During training I get the following error:
Traceback (most recent call last):
  File "/home/gputest/PycharmProjects/test1/transfer_learning_tutorial.py", line 289, in <module>
    exp_lr_scheduler, num_epochs=25)
  File "/home/gputest/PycharmProjects/test1/transfer_learning_tutorial.py", line 178, in train_model
    loss.backward()
  File "/home/gputest/.local/lib/python3.6/site-packages/torch/tensor.py", line 107, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/gputest/.local/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
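This RuntimeError generally means the loss was computed entirely from tensors with requires_grad=False. That is likely what happens above: VGG16's forward() only runs self.features and self.classifier, so a module attached as a separate top-level attribute like classifier1 is never called, and with every original parameter frozen the output has no grad_fn. A minimal torch-only sketch of the effect (the tiny backbone and head here are hypothetical stand-ins, not VGG16 itself):

```python
import torch
import torch.nn as nn

# Tiny stand-in for a frozen pretrained backbone (hypothetical):
# every parameter has requires_grad = False, like the VGG16 above.
backbone = nn.Sequential(nn.Linear(8, 4), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad = False

# A new trainable head attached "next to" the model is NOT on the
# forward path: backbone(x) never calls it.
head = nn.Linear(4, 2)

x = torch.randn(3, 8)
out = backbone(x)          # built only from frozen tensors
print(out.requires_grad)   # False -> calling .backward() on a loss made
                           # from this raises the RuntimeError above

# If the trainable head is part of the module that forward() actually
# runs, the output carries a grad_fn and backward() works.
model = nn.Sequential(backbone, head)
out = model(x)
print(out.requires_grad)   # True
out.sum().backward()       # gradients flow into head's parameters
```

The analogous change for the code above would be to put the new Dropout/Linear layers inside model_conv.classifier (the module VGG16's forward() actually uses) rather than in a separate classifier1 attribute.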

Answered here.