I want to include the gradient of the network's output with respect to its input in the loss function. Here is my code:
```python
....
dydx = torch.zeros(Nminibatch, Nin * 2)
Y_net = torch.zeros(Nminibatch, Nout)
optimizer = optim.Adam(net.parameters(), lr=LR)
for epoch in range(Nepochs):
    print('starting epoch ' + str(epoch) + ', Learning rate = ' + str(LR))
    for batch_idx, (X, Y) in enumerate(loader):
        X.requires_grad = True
        Y_net = net(X)
        # loop over minibatch
        for idx in range(Y_net.size(0)):
            dydx[idx, :] = torch.autograd.grad(Y_net[idx, 0], X, create_graph=True)[0][idx, :]
        loss = loss_f(Y_net, Y) + (dydx[:, torch.arange(0, Nin)]).sum()
        optimizer.zero_grad()
        loss.backward(retain_graph=True)
        optimizer.step()
```
But I got this error message:
```
Traceback (most recent call last):
  File "<stdin>", line 14, in <module>
  File "/home/truhlard/ning0035/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/truhlard/ning0035/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation:
[torch.FloatTensor [50, 50]], which is output 0 of TBackward, is at version 2; expected version 1 instead.
Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in
question was changed in there or anywhere later. Good luck!
```
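For context, the in-place write `dydx[idx, :] = ...` into a preallocated tensor that participates in the graph is the kind of operation this error complains about. A minimal standalone sketch of the same per-sample-gradient idea that avoids it, by collecting the gradients in a list and stacking them (the network, sizes, and loss here are hypothetical stand-ins, not my actual setup):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
Nin, Nout, Nminibatch = 4, 1, 8

# stand-in network; any differentiable module works
net = nn.Sequential(nn.Linear(Nin, 16), nn.Tanh(), nn.Linear(16, Nout))

X = torch.randn(Nminibatch, Nin, requires_grad=True)
Y_net = net(X)

# collect per-sample gradients out-of-place, then stack,
# instead of assigning into a preallocated dydx tensor
grads = []
for idx in range(Y_net.size(0)):
    # grad() returns a tuple; [0] is dY[idx,0]/dX, shape (Nminibatch, Nin),
    # nonzero only in row idx
    g = torch.autograd.grad(Y_net[idx, 0], X, create_graph=True)[0]
    grads.append(g[idx, :])
dydx = torch.stack(grads)  # shape (Nminibatch, Nin), still in the graph

# stand-in loss combining the output and the input-gradient term
loss = Y_net.pow(2).mean() + dydx.sum()
loss.backward()  # backpropagates through the gradient term without the error
```

Since each `Y_net[idx, 0]` depends only on `X[idx]`, the same quantity can also be obtained in one call with `torch.autograd.grad(Y_net[:, 0].sum(), X, create_graph=True)[0]`, avoiding the Python loop.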
Does anyone know how to fix this?
Thanks a lot!