My code requires autograd to be enabled even in eval mode, since I need the gradient of the output with respect to one of the input variables. However, in eval mode memory somehow accumulates across iterations. (In training mode this is not a problem, since loss.backward() and optimizer.step() are called, which frees the graph memory, together with optimizer.zero_grad().)
How do I free the memory in a similar fashion, but without actually running loss.backward()?
Here is a simplified version of my train/eval mode:
import torch
import torch.nn.functional as F
from torch.autograd import grad

def use_model(model, dataloader, train, optimizer, device, batch_size=1):
    aloss = 0.0
    if train:
        model.train()
    else:
        model.eval()
    for i, (Ri, Fi, Ei, zi) in enumerate(dataloader):
        Ri.requires_grad_(True)
        xn, xe, G = getIterData_MD17(Ri.squeeze(), device=device)
        if train:
            optimizer.zero_grad()
        xnOut, xeOut = model(xn, xe, G)
        E_pred = torch.sum(xnOut)
        # Force prediction: gradient of the energy w.r.t. the positions Ri.
        # create_graph=True keeps the graph alive so the force loss is differentiable.
        F_pred = -grad(E_pred, Ri, create_graph=True)[0].requires_grad_(True)
        loss = F.mse_loss(F_pred, Fi)
        if train:
            loss.backward()
            optimizer.step()
        aloss += loss.detach()
    return aloss
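For context, a minimal sketch of the behavior in question (using a stand-in `torch.nn.Linear` model, not the actual model above): with `create_graph=True`, the returned gradient is itself part of the autograd graph, so the graph is kept alive; with `create_graph=False`, autograd releases the graph as soon as `grad()` returns, so nothing accumulates between eval iterations.

```python
import torch
from torch.autograd import grad

# Hypothetical minimal setup, not the author's model.
x = torch.randn(4, 3, requires_grad=True)
model = torch.nn.Linear(3, 1)

model.eval()
y = model(x).sum()

# create_graph only needs to be True when the gradient itself will be
# backpropagated through later (as in the training branch above).
g = grad(y, x, create_graph=False)[0]

# The gradient is detached from any graph, so no graph memory is retained.
print(g.requires_grad)  # False
```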