I’m trying to perform gradient backpropagation like this,
this_out = model(this_inp)
this_diff = torch.randn(1, 1).double()  # upstream gradient passed to backward()
this_out.backward(this_diff)            # populates .grad on the parameters
for f in model.parameters():
    _ = f.data.add_(f.grad.data * learning_rate)
and it says `f.grad` is a NoneType.
Sorry, I just found that when I defined the network, some layers created in `__init__` were never used in the forward pass (so they never became part of the computation graph). Deleting these unused layers solved the problem. But why must every layer defined in `__init__` be used? Won’t PyTorch automatically ignore these layers?
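This matches PyTorch’s documented behavior: a parameter’s `.grad` starts out as `None` and is only populated by `backward()` for parameters that actually took part in the forward computation. A minimal sketch reproducing the symptom (`Net`, `used`, and `unused` are illustrative names, not from the original code):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.used = nn.Linear(4, 1)
        self.unused = nn.Linear(4, 4)  # defined but never called in forward()

    def forward(self, x):
        return self.used(x)  # self.unused never joins the graph

model = Net()
out = model(torch.randn(2, 4))
out.sum().backward()

print(model.used.weight.grad is None)    # False: gradient was computed
print(model.unused.weight.grad is None)  # True: not in the graph, so .grad stays None
```

So PyTorch does not delete or error on unused layers; it simply never writes a gradient into them, which is why a blind loop over `model.parameters()` hits a `NoneType`. Either remove the unused layers or skip parameters whose `.grad` is `None` in the update loop.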
Thank you for sharing the error information and solution, which really helped me to solve my problem.
Worked for me too! Thanks!
Hi, thank you for sharing your experience! It helped me solve my problem! So happy!!!
Thanks for your question and answer!