I got the following RuntimeError for the matrix exponential: element 0 of tensors does not require grad and does not have a grad_fn. I don’t know how to make it work. Does anyone know where the problem is?
import torch
from scipy.linalg import expm

def test(x):
    exp_x = torch.tensor(expm(x.detach().numpy()))
    return torch.sum(exp_x)

x = torch.eye(3, 3)
x = x.requires_grad_()
y = test(x)
y.backward()  # raises the RuntimeError
Since you are manually detaching your tensor from the computation graph, x won't get gradients anymore. I assume you are detaching x in order to call scipy's expm.
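A minimal sketch of why this fails: the detached copy (and any tensor rebuilt from its NumPy data) has no `grad_fn`, so nothing downstream can backpropagate, which is exactly what the error message complains about.

```python
import torch

x = torch.eye(3, requires_grad=True)

# detach() returns a copy that is cut out of the autograd graph
print(x.detach().requires_grad)   # False

# rebuilding a tensor from the detached NumPy data keeps it cut off,
# so the result of the whole pipeline carries no grad_fn
y = torch.tensor(x.detach().numpy()).sum()
print(y.requires_grad)            # False; y.backward() would raise the RuntimeError
```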
You could try to implement the Padé approximation in PyTorch or write an autograd.Function with a custom backward method.
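A sketch of the second option, assuming scipy is available: keep scipy's expm for the forward pass and use scipy.linalg.expm_frechet for the backward pass, since the vector-Jacobian product of expm at A applied to an incoming gradient G equals the Fréchet derivative of expm at Aᵀ in the direction G. The class name MatrixExp below is hypothetical.

```python
import numpy as np
import torch
from scipy.linalg import expm, expm_frechet

class MatrixExp(torch.autograd.Function):
    # Hypothetical custom Function: scipy forward, Frechet-derivative backward.

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        x_np = x.detach().cpu().numpy()
        return torch.from_numpy(expm(x_np)).to(x.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        x_np = x.detach().cpu().numpy()
        g_np = grad_output.detach().cpu().numpy()
        # vjp of expm at x applied to g = Frechet derivative of expm at x.T
        # in the direction g
        _, grad_np = expm_frechet(x_np.T, g_np)
        return torch.from_numpy(grad_np).to(x.dtype)

x = torch.eye(3, requires_grad=True)
y = MatrixExp.apply(x).sum()
y.backward()   # works: x.grad is populated
```

For x = I the gradient of sum(expm(x)) is e times the all-ones matrix, which is a quick sanity check. Note that recent PyTorch releases also provide a differentiable torch.matrix_exp, which would avoid the custom Function entirely.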
Thanks for your reply. I have used a different algorithm to do my work instead of using the matrix exponential.