Finding the derivative of a function in an equation to be solved by a neural network

Hi, I am using a PINN (Raissi et al.) to solve a set of equations.
The equations consist of functions and the derivatives of those functions, like this:
PDE1 = func1 + func2'
PDE2 = func1' + func3
I am wondering if I can use autograd to differentiate the functions, and at the same time use autograd to find the gradients of the network. The reason for wanting to use autograd for differentiating the functions is that I want the option to change the function: e.g. it could be func1 = x**2 in certain cases, but in other cases it could be func1 = x**3. I don't have a lot of experience with programming and PyTorch, so this might seem obvious to some, but I cannot seem to figure out how to do this.
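Conceptually, something like this is what I'm hoping for (a hypothetical sketch, I'm not sure if it's valid):

```python
import torch

# Hypothetical: a swappable function whose derivative autograd
# computes for me, alongside the network's own gradients.
def func1(x):
    return x**2  # could be swapped for x**3 in other cases

x = torch.linspace(0.0, 1.0, 5, requires_grad=True)
y = func1(x)
# derivative of func1 at the sample points
dy_dx = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y),
                            create_graph=True)[0]
```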

Hi @hilde, can you share an example of what you’re trying to do? Please include the input variables and the outputs you expect. This will help us answer your question. Thanks!

I am such a noob. I was able to solve the problems I had. It helped to think about what input and output to expect.

I’m glad you solved it! I find reframing my question in a minimal example very helpful too.
If you don’t mind sharing your solution, it might help someone in the future :slight_smile:

I used this:

I have a function:

```python
def eps1(self, x):
    return x**2
```

which I differentiate with:

```python
def eps1_grad(self, points):
    p = points.clone()
    p.requires_grad_(True)
    eps1xi = self.eps1(p)
    eps1_xi = autograd.grad(outputs=eps1xi, inputs=p,
                            grad_outputs=torch.ones_like(eps1xi),
                            create_graph=True)[0]
    return eps1_xi
```
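As a quick self-contained sanity check (the `Demo` class here is just a stand-in for my model): with eps1 = x**2, the gradient should come back as 2*x.

```python
import torch
from torch import autograd

class Demo:
    def eps1(self, x):
        return x**2

    def eps1_grad(self, points):
        p = points.clone()
        p.requires_grad_(True)
        eps1xi = self.eps1(p)
        return autograd.grad(outputs=eps1xi, inputs=p,
                             grad_outputs=torch.ones_like(eps1xi),
                             create_graph=True)[0]

pts = torch.tensor([[0.5], [1.0], [2.0]])
g = Demo().eps1_grad(pts)  # analytic derivative of x**2 is 2*x
```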

Then I use the derivative in:

```python
def loss_Kvu(self, points):
    p = points.clone()
    p.requires_grad_(True)
    KvuKvv = self.forward(p)
    Kvu = KvuKvv[:, [0]]
    Kvv = KvuKvv[:, [1]]
    Kvu_grad = autograd.grad(outputs=Kvu, inputs=p,
                             grad_outputs=torch.ones_like(Kvu),
                             create_graph=True)[0]
    self.Kvu_x = Kvu_grad[:, [0]]
    self.Kvu_xi = Kvu_grad[:, [1]]
    x = points[:, [0]]
    xi = points[:, [1]]
    eps1_xi = self.eps1_grad(xi)
    f1 = (self.eps2(x) * self.Kvu_x - self.eps1(xi) * self.Kvu_xi
          - eps1_xi * Kvu - self.c2(xi) * Kvv)
    loss_f1 = self.loss_function(f1, f1_hat)
    return loss_f1
```

Sadly it does not work when eps1 = np.sin(x) — only for functions that I can define without the help of NumPy.
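The likely cause is that autograd can only track torch operations: a call like np.sin converts the tensor to a NumPy array, which falls outside the computation graph (and typically raises an error on tensors that require grad). Using torch's own ops, e.g. torch.sin instead of np.sin, keeps everything differentiable — a minimal sketch:

```python
import torch

def eps1(x):
    return torch.sin(x)  # torch.sin instead of np.sin

x = torch.tensor([0.0, 1.0, 2.0], requires_grad=True)
y = eps1(x)
# autograd differentiates torch.sin; the result should be cos(x)
dy = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y),
                         create_graph=True)[0]
```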