Hi, I’m new to PyTorch and neural networks and am having trouble devising a memory-efficient implementation of an iterative update. I want to implement the following pseudo-code:
optimizer = torch.optim.Adam(self.net_params_pinn, lr=adam_lr)
for n in range(max_epoch):
    loss, boundary_loss, saved_loss = self.Method()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if n % 100 == 0:
        # every 100 epochs, update z using the current network output
        self.z = self.z + rho * self.u_net
I am training a neural net that outputs a function self.u_net (trained with a PINN scheme that itself uses the function self.z), and I want to use self.u_net to compute the function self.z via the iterative relation above.
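To make the question concrete, here is a minimal, self-contained sketch of how I currently picture that final step, assuming self.u_net maps collocation points to values and self.z is a tensor of values on those same points (the network architecture, shapes, names, and rho value below are purely illustrative, not my actual code):

import torch
import torch.nn as nn

# Sketch under my own assumptions: u_net maps collocation points x -> u(x),
# and z is stored as a plain tensor of the same shape as u_net(x).
u_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
x = torch.linspace(0.0, 1.0, 100).unsqueeze(1)   # collocation points, shape (100, 1)
z = torch.zeros(100, 1)                          # initial z values on those points
rho = 0.1                                        # illustrative step size

with torch.no_grad():          # the z-update itself should not be tracked by autograd
    z = z + rho * u_net(x)     # evaluate the current network and accumulate into z

In particular, I am unsure whether wrapping the update in torch.no_grad() (or detaching the result) is the right way to stop the computation graph from growing across epochs when self.z is later fed back into the PINN loss.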
The issue is that I am not well versed enough to know how best to implement this final step. How can I go about doing this? Is there a way to make it memory- or computationally efficient?