In PyTorch, I have a multi-dimensional solution stored column-wise in a tensor u, and for the training of the PINN I'm developing, I use this function to compute the time derivative column-wise:
def dt(self, u, t):
    # return batch_jacobian(u, t)
    N = u.shape[1]
    u_t = []  # initializing the list of time derivatives
    # for each column i, we compute the time derivative ui_t
    for i in range(N):
        ui_t = torch.autograd.grad(u[:, i], t,
                                   grad_outputs=torch.ones_like(u[:, i]),
                                   retain_graph=True,
                                   create_graph=True,
                                   allow_unused=True)[0]
        u_t.append(ui_t)  # time derivatives are stored in u_t
    u_t = torch.cat(u_t, dim=1)  # we concatenate all the derivatives
    return u_t
but it involves a for loop that I would like to remove.
In my research, I found a topic with the same issue, but I wasn't able to fix my code with the proposed solutions.
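For reference, here is a minimal, untested sketch of the kind of loop-free dt I am after. It relies on the experimental is_grads_batched argument of torch.autograd.grad, and the one-hot grad_outputs construction is my own guess:

def dt(self, u, t):
    N, B = u.shape[1], u.shape[0]
    # row i of the identity selects column i of u, for every sample
    eye = torch.eye(N, device=u.device, dtype=u.dtype)    # (N, N)
    grad_outputs = eye.unsqueeze(1).expand(N, B, N)       # (N, B, N)
    u_t = torch.autograd.grad(u, t,
                              grad_outputs=grad_outputs,
                              retain_graph=True,
                              create_graph=True,
                              is_grads_batched=True)[0]   # (N, B, 1)
    return u_t.squeeze(-1).transpose(0, 1)                # (B, N)

Since is_grads_batched is implemented with vmap and marked experimental, it may not support every operation, which could be why the proposed solutions did not work for me.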
Here is the definition of the functional that is used for the training of the neural network:
def functional_f(self, t, theta):
    # split theta into the fixed ODE parameters: mu (first column) and A (remaining columns)
    self.A = torch.tensor(theta[:, 1:], requires_grad=False).float().to(self.device)
    self.mu = torch.tensor(theta[:, 0], requires_grad=False).float().to(self.device)
    u = self.dnn(t)       # network output, one column per component of the solution
    u_t = self.dt(u, t)   # column-wise time derivatives
    ode = u_t - self.renorm * (self.mu + (self.A @ torch.exp(u).T).T)
    return ode
The ODE system to be solved is ∂t u = µ + A⋅exp(u), i.e. ∂t ui = µi + Σj Aij exp(uj) component-wise, where µ ∈ ℝ^N and A ∈ ℝ^(N×N) are fixed parameters stored in the matrix theta.
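As a sanity check on the shapes involved (with hypothetical sizes N = 3 and a batch of B = 5 time points), the right-hand side of the functional broadcasts as follows:

import torch

N, B = 3, 5                        # hypothetical sizes
theta = torch.randn(N, N + 1)      # [mu | A], as unpacked in functional_f
mu, A = theta[:, 0], theta[:, 1:]  # mu: (N,), A: (N, N)
u = torch.randn(B, N)              # stand-in for self.dnn(t)
rhs = mu + (A @ torch.exp(u).T).T  # mu of shape (N,) broadcasts over (B, N)
print(rhs.shape)                   # torch.Size([5, 3])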
How could I implement the function dt so that it no longer contains a loop?
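One alternative I am wondering about is forward-mode AD: since t has a single column, one Jacobian-vector product along ones_like(t) should yield du/dt for every column at once. A minimal sketch, assuming torch.func is available (PyTorch ≥ 2.0), that self.dnn supports forward-mode AD, and that each row of the batch depends only on its own time value (note the changed signature: it returns both u and u_t):

def dt(self, t):
    # a single forward-mode pass replaces the N backward passes
    u, u_t = torch.func.jvp(self.dnn, (t,), (torch.ones_like(t),))
    return u, u_t

I have not verified whether gradients still flow back through u_t to the network parameters during training with this version.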
I also tried the jit decorator, but without success either.
(NB: this is a duplicate of my Stack Overflow post, which has not received an answer yet.)