Calling .backward() multiple times per iteration (getting gradient information)

I’m trying to use autograd to get gradient information for a parameter that I need, but I can’t quite get the hang of it.
I have a network that I feed coordinate information and it returns an energy potential. I want to take the gradient of the energy potential with respect to the coordinates (the negative of this gradient is the force), and then compute an actual loss from the force.

I have made this small dummy setup to show how I’m currently doing it:

import torch
import torch.nn as nn
r = torch.randn(3,10)  # random coordinates (10 points in 3 dimensions)
d = torch.sum(r**2,dim=0, keepdim=True) + torch.sum(r**2,dim=0,keepdim=True).t() - 2 * r.t() @ r  # pairwise squared distances
d.requires_grad_(True)
y_true = torch.randn(10)

F = nn.Linear(10,10) #Dummy network
output = F(d)
E = torch.sum(output)
E.backward(retain_graph=True)
y = -d.grad # d.grad should now be dE/dd if I'm not mistaken, so y is minus that (the force)
ysum = torch.sum(y,dim=0)

loss = ysum - y_true

loss.backward() #Now I wish to do the standard loss backward propagation with an optimizer to update the network parameters
#optimizer.step()

The problem is that when I run the second .backward() I get the following error, which I don’t know how to deal with:

Traceback (most recent call last):
  File "/snap/pycharm-community/236/plugins/python-ce/helpers/pydev/pydevd.py", line 1483, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/snap/pycharm-community/236/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/tue/.config/JetBrains/PyCharmCE2021.1/scratches/scratch_7.py", line 17, in <module>
    loss.backward() #Now I wish to do the standard loss backward propagation with an optimizer to update the network parameters
  File "/home/tue/.local/lib/python3.8/site-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/tue/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 145, in backward
    Variable._execution_engine.run_backward(
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

If there is a smarter method to get the gradient information, I would also love to hear it; I haven’t really tried to do anything like this before.
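Digging into it a bit, my guess is that the gradient .backward() writes into d.grad is a plain tensor with no graph attached, so everything computed from it ends up with requires_grad=False, which is what the error complains about. A minimal check of that assumption (same dummy setup as above, names unchanged):

import torch
import torch.nn as nn

r = torch.randn(3,10)
d = torch.sum(r**2,dim=0, keepdim=True) + torch.sum(r**2,dim=0,keepdim=True).t() - 2 * r.t() @ r
d.requires_grad_(True)

F = nn.Linear(10,10)
E = torch.sum(F(d))
E.backward(retain_graph=True)

print(d.grad.requires_grad)                # False: d.grad carries no grad_fn
print((-d.grad).sum(dim=0).requires_grad)  # False: so the loss built from it has nothing to backpropagate through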

I think I may have fixed the issue:

import torch
import torch.nn as nn
from torch.autograd import grad
r = torch.randn(3,10)  # random coordinates (10 points in 3 dimensions)
d = torch.sum(r**2,dim=0, keepdim=True) + torch.sum(r**2,dim=0,keepdim=True).t() - 2 * r.t() @ r  # pairwise squared distances
d.requires_grad_(True)
y_true = torch.randn(10)

F = nn.Linear(10,10) #Dummy network
output = F(d)

E = torch.sum(output)
# E.backward(retain_graph=True)
y = -grad(E, d, create_graph=True)[0]  # create_graph=True keeps this gradient differentiable, so loss can backprop through it
ysum = torch.sum(y,dim=0)

loss = torch.sum(ysum - y_true)

loss.backward() #Now I wish to do the standard loss backward propagation with an optimizer to update the network parameters
#optimizer.step()
print("Done")

This seems to run. I haven’t confirmed that it actually does what I intend, but I think it does.
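One way to sanity-check it (just a sketch, reusing the same dummy setup): the force from grad(..., create_graph=True) should match what a plain, non-differentiable gradient call gives, and after loss.backward() the linear layer’s weight should have received a gradient.

import torch
import torch.nn as nn
from torch.autograd import grad

r = torch.randn(3,10)
d = torch.sum(r**2,dim=0, keepdim=True) + torch.sum(r**2,dim=0,keepdim=True).t() - 2 * r.t() @ r
d.requires_grad_(True)
y_true = torch.randn(10)

F = nn.Linear(10,10)
E = torch.sum(F(d))

y = -grad(E, d, create_graph=True)[0]        # differentiable force
y_plain = -grad(E, d, retain_graph=True)[0]  # same quantity, no graph attached
print(torch.allclose(y, y_plain))            # True: both give the same numbers

loss = torch.sum(torch.sum(y, dim=0) - y_true)
loss.backward()
print(F.weight.grad is not None)             # True: the weight got a gradient from the force-based loss

Note that in this linear dummy example the bias never shows up in dE/dd, so F.bias.grad stays None; with a real nonlinear network I would expect the other parameters to get gradients through the force as well.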