The triplet loss example in the documentation is as follows:

import torch
import torch.nn as nn

triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)  # loss definition from the docs example
input1 = torch.randn(100, 128, requires_grad=True)
input2 = torch.randn(100, 128, requires_grad=True)
input3 = torch.randn(100, 128, requires_grad=True)
output = triplet_loss(input1, input2, input3)
output.backward()
When I check the value of input1.grad, it is None. The same is true for input2 and input3.
Is this expected? If so, why is requires_grad=True needed on the tensors?
Thanks
For me, .grad is not None.
Which version of PyTorch are you using?
Hi @InnovArul,
Thanks for your reply! I am using PyTorch 0.4.1.post2.
What would be the effect of input1.grad being None?
Could you please suggest what I should do to rectify it?
My input1, input2, and input3 are outputs of an nn.Module-based network.
The gradients of intermediate (non-leaf) variables (in this case, input1, input2, and input3, since they are outputs of a network) are freed by autograd during backpropagation, so their .grad is never populated; only leaf tensors get .grad filled in. If you really want to retain the gradient of certain intermediate variables (for example, to inspect them visually) after backpropagation, call .retain_grad() on those variables before calling backward().
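For anyone hitting the same issue, here is a minimal sketch of the fix (the nn.Linear layer and the shapes here are just stand-ins for your actual nn.Module-based network):

import torch
import torch.nn as nn

triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)
net = nn.Linear(64, 128)  # hypothetical stand-in for a real network

anchor = net(torch.randn(100, 64))    # non-leaf: output of a module
positive = net(torch.randn(100, 64))
negative = net(torch.randn(100, 64))

anchor.retain_grad()  # ask autograd to keep the grad of this non-leaf tensor

output = triplet_loss(anchor, positive, negative)
output.backward()

print(anchor.grad is None)      # False: retained via retain_grad()
print(positive.grad is None)    # True: non-leaf grad was not retained
print(net.weight.grad is None)  # False: leaf parameter, grad is populated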
Thanks a lot @InnovArul! Your reply helped me find and solve a bug that was badly hitting my code!
I am glad I could be of some help! cheers.