Implementation of gradient-enhanced neural network

Hi everyone! I would like to implement a gradient-enhanced neural network (GENN) using PyTorch.
I am quite new to PyTorch, so don't hesitate to point out obvious solutions to me.

The idea of the gradient-enhanced neural network is to train the network by adding a gradient error term to the loss function. This is possible when gradient information for the target function is available or cheap to compute. Here is a snippet from a paper I just read, [Scalable gradient–enhanced artificial neural networks for airfoil shape design in the subsonic and transonic regimes | Structural and Multidisciplinary Optimization], which I am interested in implementing in PyTorch.

[image: snippet from the paper showing the GENN loss formulation]
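If I understand it correctly, the training objective is something like

loss = MSE(f_pred, f) + λ · MSE(∂f_pred/∂x, df/dx)

where the second term penalizes the mismatch between the gradient of the network output with respect to its input and the known gradient df/dx (λ is just a weighting factor I am assuming here; the paper may weight the two terms differently).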

Here is the quick code that I wrote (only the training part):

for epoch in range(1000):
    training_loss = 0.0
    for batch_no, data in enumerate(trainLoader):
        x, f, df_dx = data["x"], data["f(x)"], data["df(x)/dx"]
        x.requires_grad = True
        f.requires_grad = True
        df_dx.requires_grad = True
        optimizer.zero_grad()
        pred = model(x)
        pred.retain_grad()
        pred.sum().backward()
        # print(x.grad)
        # print(df_dx)
        loss = loss_fn(pred, f) + loss_fn(x.grad, df_dx)
        loss.backward()
        optimizer.step()

        training_loss += loss.item()

    print(f'epoch = {epoch}, training_loss = {training_loss}')

torch.save(model, 'genn_model.pth')
torch.load('genn_model.pth')

The above code gave the following error:
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

I think the core idea for implementing GENN is to use the autograd engine to get the gradient of the network output with respect to its input, so that it can be compared against df/dx inside the loss; a sketch of what I am currently thinking is below.
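I guess the error comes from calling backward() twice on the same graph, and I also suspect that x.grad obtained from pred.sum().backward() is not itself differentiable, so the gradient term would not actually train the weights. Here is a minimal sketch of the training step as I now imagine it, assuming model, optimizer, loss_fn and trainLoader are defined as above; it uses torch.autograd.grad with create_graph=True and calls backward() only once per batch. I am not sure this is the right way to do it:

import torch

for epoch in range(1000):
    training_loss = 0.0
    for batch_no, data in enumerate(trainLoader):
        x, f, df_dx = data["x"], data["f(x)"], data["df(x)/dx"]
        x.requires_grad_(True)  # only the input needs a gradient
        optimizer.zero_grad()

        pred = model(x)
        # gradient of the output w.r.t. the input, kept in the graph
        # (create_graph=True) so it can contribute to the loss
        dpred_dx, = torch.autograd.grad(pred.sum(), x, create_graph=True)

        loss = loss_fn(pred, f) + loss_fn(dpred_dx, df_dx)
        loss.backward()  # single backward pass through both loss terms
        optimizer.step()

        training_loss += loss.item()

    print(f'epoch = {epoch}, training_loss = {training_loss}')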

I would appreciate it if anyone has any thoughts on this :))