Hi, I am new to PyTorch and I am trying to train a Unet model. All of the layers are custom-written for my project. Individually, each layer works and returns the expected output, but when I put them together in the Unet, I seem to run into an autograd problem. Training fails with the following error:
`element 0 of tensors does not require grad and does not have a grad_fn`
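For context, the error itself seems to just mean that the loss tensor has no graph attached. This tiny standalone snippet (nothing to do with my model) reproduces the same message for me:

```python
import torch
import torch.nn.functional as F

# Neither tensor is connected to the autograd graph, so the loss
# has no grad_fn and backward() raises the same error.
pred = torch.randn(4, 1, 64, 64)
target = torch.randn(4, 1, 64, 64)

loss = F.mse_loss(pred, target)
print(loss.requires_grad)  # False
loss.backward()            # element 0 of tensors does not require grad ...
```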
My first guess was that I had messed up some indexing somewhere that I cannot see. I printed the shapes and requires_grad flags for all layers, and everything looks as expected:
```python
for name, param in model.named_parameters():
    print(name, param.requires_grad)
```
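While searching other threads, I read that graph breaks usually come from an op in the forward pass that silently detaches the output. Here is a toy sketch of the patterns I am now checking my layers for (none of this is my actual code):

```python
import torch

x = torch.randn(2, 8, 16, 16, requires_grad=True)

# Each of these returns a tensor that is cut off from the autograd graph:
a = x.detach()                             # explicit detach
b = torch.tensor(x.tolist())               # rebuilding a tensor from raw values
c = torch.from_numpy(x.detach().numpy())   # round-trip through numpy
with torch.no_grad():                      # ops executed under no_grad
    d = x * 2

for name, t in (("a", a), ("b", b), ("c", c), ("d", d)):
    print(name, t.requires_grad, t.grad_fn)  # all print False None
```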
The named_parameters check prints True for every parameter as well. But in the training loop I find that loss.requires_grad is False, and tracing backwards, the last layer in the Unet (a deconvolution layer) is already returning an output with requires_grad False. Can anyone suggest how to debug why this would be the case? Since everything is custom-written, I am not sure how to provide a reproducible script, but I can share my GitHub repo. Let me know.
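For reference, this is roughly how I traced it back to the deconvolution layer, with forward hooks on every submodule (a sketch; `model` and `dummy_input` are placeholders for my own Unet instance and one real input batch):

```python
import torch

# Print requires_grad / grad_fn of every submodule's output during one
# forward pass; the first layer that prints grad_fn=None is where the
# graph breaks.
def report(name):
    def hook(module, inputs, output):
        if torch.is_tensor(output):
            print(f"{name}: requires_grad={output.requires_grad}, "
                  f"grad_fn={output.grad_fn}")
    return hook

for name, module in model.named_modules():
    module.register_forward_hook(report(name))

out = model(dummy_input)
```

This is how I saw that the deconvolution output comes back with requires_grad False.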
Many Thanks
Wasim