Sorry for my questions; I am fairly new to this topic. I am trying to learn a model that guides a gradient step in MAP restoration. My first question is simply: is it possible to backpropagate through a gradient step (with the Adam optimizer)?
My second question is: how do I do this?
The problem I am having is that after loss.backward(), the network parameters' gradients are of NoneType (i.e. zero), which should not be the case if we were allowed to backpropagate through the gradient step. How do I solve this?
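To make the first question concrete, here is a toy example of the kind of thing I mean. I use a plain, hand-rolled gradient step instead of Adam, since as far as I understand torch.optim updates tensors in place and its step is not tracked by autograd (all names here are made up just for illustration):

import torch

# toy "image" and toy "network parameter"
x = torch.randn(4, requires_grad=True)
theta = torch.randn(4, requires_grad=True)

inner = ((x - theta) ** 2).sum()                        # inner (prior) loss
g, = torch.autograd.grad(inner, x, create_graph=True)   # keep the graph for the outer backward
x_new = x - 0.1 * g                                     # one differentiable gradient step

outer = (x_new ** 2).sum()                              # outer loss on the updated image
outer.backward()                                        # flows back through the gradient step
print(theta.grad)                                       # not None, thanks to create_graph=True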
Thank you for the library! But I don’t think it solves my problem, as I want to update the gradient before the optimizer step with:
out = net(img_ano.data)
img_ano.grad = img_ano.grad + out
This does not seem possible with the library, since you have to pass the loss function together with the gradient step (optim.loss(loss_function)), while in my case I want to change the gradient I get after gfunc.backward(). Any suggestions?
inner_loss = l2_loss(img_ano,prior)
inner_loss.backward()
img_ano.grad += net(img_ano.data) # Change the gradient by adding the network output
img_ano = img_ano + step_size * img_ano.grad # Here I would like to take the corresponding step with the Adam optimizer instead (restore_optimizer.step())
loss = diceloss(img_ano,target_img)
loss.backward() # Gather the network parameters' gradients, i.e. backpropagate through the gradient step
net_optimizer.step() # Update network params
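In case it helps to clarify what I am after, this is roughly how I imagine the unrolled step would have to look so that the graph is kept and the network parameters actually receive gradients: torch.autograd.grad with create_graph=True instead of .backward() plus .grad, and the network output used directly in the update. It reuses the names from my snippet above, still takes a plain step rather than an Adam step, and I am not sure it is the right approach:

import torch

inner_loss = l2_loss(img_ano, prior)
# keep the graph so the outer loss can backpropagate through this gradient
grad_img, = torch.autograd.grad(inner_loss, img_ano, create_graph=True)
grad_img = grad_img + net(img_ano)           # guide the gradient with the network output
img_ano = img_ano + step_size * grad_img     # plain (non-Adam) restoration step, kept differentiable

loss = diceloss(img_ano, target_img)
loss.backward()                              # should now reach the network parameters
net_optimizer.step()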