Is there a way to achieve the equivalent of .grad = None
in C++? (I'd like to use it in place of .grad().zero_()
when the gradient is defined.)
Thank you!
Maybe t.mutable_grad() = torch::Tensor()?
AFAICT, None is equivalent to an undefined tensor on the Python side.