Hello! I have the following code to take higher-order derivatives (simplified example):
import torch
from torch.autograd import grad
x = torch.tensor([3.], requires_grad=True)
y = x**4
for i in range(5):
    print(i, y)
    # Differentiate the current value w.r.t. x; create_graph=True keeps
    # the graph so the result can be differentiated again next iteration.
    grads = grad(y, x, create_graph=True)[0]
    y = grads.sum()
This is the output:
0 tensor([81.], grad_fn=<PowBackward0>)
1 tensor(108., grad_fn=<SumBackward0>)
2 tensor(108., grad_fn=<SumBackward0>)
3 tensor(72., grad_fn=<SumBackward0>)
4 tensor(24., grad_fn=<SumBackward0>)
It does what I want, except that it prints 108 twice. Why does it do that?
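For reference, here is the same sequence computed from the closed-form derivatives of y = x**4 (a plain-Python sanity check I added for comparison, no autograd involved):

# Closed-form derivatives of y = x**4, evaluated at x = 3:
# y = x**4, y' = 4x**3, y'' = 12x**2, y''' = 24x, y'''' = 24
x = 3.0
derivatives = [x**4, 4 * x**3, 12 * x**2, 24 * x, 24.0]
for i, value in enumerate(derivatives):
    print(i, value)  # 0 81.0, 1 108.0, 2 108.0, 3 72.0, 4 24.0

The numbers match my loop's output exactly, including the repeated 108.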