Dear all, I’m working on a physics-informed neural network (PINN) to solve partial differential equations. In the program I need to compute second derivatives with autograd, but the resulting solution seems wrong. Here is my code for the autograd part:

``````python
    def predict(self, xf, tf):
        # Inputs must require gradients so autograd can differentiate w.r.t. them
        xf = xf.clone().requires_grad_(True)
        tf = tf.clone().requires_grad_(True)
        X = torch.cat((xf, tf), 1)
        uv = self.forward(X)
        u = uv[:, 0:1]  # keep the column dimension so shapes match xf/tf
        v = uv[:, 1:2]

        # First derivatives; create_graph=True keeps the graph so these
        # can be differentiated a second time
        ones = torch.ones_like(u)
        u_x = torch.autograd.grad(u, xf, grad_outputs=ones, create_graph=True)[0]
        u_t = torch.autograd.grad(u, tf, grad_outputs=ones, create_graph=True)[0]
        v_x = torch.autograd.grad(v, xf, grad_outputs=ones, create_graph=True)[0]
        v_t = torch.autograd.grad(v, tf, grad_outputs=ones, create_graph=True)[0]

        # Second derivatives: differentiate the first derivatives again
        u_xx = torch.autograd.grad(u_x, xf, grad_outputs=ones, create_graph=True)[0]
        v_xx = torch.autograd.grad(v_x, xf, grad_outputs=ones, create_graph=True)[0]

        # PDE residuals
        f_u = u_t + 0.5 * v_xx + (u**2 + v**2) * v
        f_v = v_t - 0.5 * u_xx - (u**2 + v**2) * u

        return u, v, u_x, v_x, f_u, f_v

    def loss_function(self):
        u0_pred, v0_pred, _, _, _, _ = self.predict(self.x0, self.t_x0)
        u_lb_pred, v_lb_pred, u_x_lb_pred, v_x_lb_pred, _, _ = self.predict(self.x_lb, self.t_lb)
        u_ub_pred, v_ub_pred, u_x_ub_pred, v_x_ub_pred, _, _ = self.predict(self.x_ub, self.t_ub)
        _, _, _, _, f_u_pred, f_v_pred = self.predict(self.x_f, self.t_f)

        lossmse = self.loss1(u0_pred, self.u0) + \
                  self.loss1(v0_pred, self.v0) + \
                  self.loss1(u_lb_pred, u_ub_pred) + \
                  self.loss1(v_lb_pred, v_ub_pred) + \
                  self.loss1(u_x_lb_pred, u_x_ub_pred) + \
                  self.loss1(v_x_lb_pred, v_x_ub_pred) + \
                  self.loss1(f_u_pred, self.f) + \
                  self.loss1(f_v_pred, self.f)
        return lossmse
``````
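
To check whether the double-differentiation pattern itself is the problem, it helps to test it in isolation on a function with a known second derivative, independent of the network. A minimal sketch (the variable names here are illustrative, not from my model):

```python
import torch

# u(x) = sin(x), so u_x = cos(x) and u_xx = -sin(x)
x = torch.linspace(0.0, 1.0, 50).reshape(-1, 1).requires_grad_(True)
u = torch.sin(x)

# create_graph=True keeps the graph of u_x so it can be differentiated again
ones = torch.ones_like(u)
u_x = torch.autograd.grad(u, x, grad_outputs=ones, create_graph=True)[0]
u_xx = torch.autograd.grad(u_x, x, grad_outputs=ones, create_graph=True)[0]

print(torch.allclose(u_x, torch.cos(x), atol=1e-6))    # prints True
print(torch.allclose(u_xx, -torch.sin(x), atol=1e-6))  # prints True
```

If this check passes but the PINN residuals are still off, the issue is more likely in how the inputs are prepared (e.g. `requires_grad` being set after `torch.cat`, so gradients flow to `X` but not to `xf`/`tf`) than in the second-derivative call itself.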