JZArray
(Jz Array)
December 7, 2021, 11:55pm
1
Hello all! I have a question about autograd; please look at the following example.
out = net(input)
out = out.detach().numpy()
new_output = function_1(out)  # function_1 contains some numpy operations
new_output = torch.from_numpy(new_output).float()
new_output = Variable(new_output, requires_grad=True).to(device)
loss = loss_function(new_output, label)
loss.backward()
When I call `loss.backward()`, will the parameters in net also receive gradients and get updated?
Thanks
In short, no. The parameters of the model will not receive gradients, because you have `.detach()`ed the output and thereby cut it out of the computation graph before the loss was computed. (Note also that `backward()` only computes gradients; parameters are updated only when you call `optimizer.step()`.)
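To see this concretely, here is a minimal sketch (using a hypothetical one-layer net, since the original `net` is not shown) demonstrating that `.detach()` stops gradients from reaching the parameters:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the poster's net.
net = nn.Linear(4, 2)
x = torch.randn(3, 4)

out = net(x).detach()        # the graph is cut here
out.requires_grad_(True)     # grads can flow back only to `out`, not into net
loss = out.sum()
loss.backward()

print(net.weight.grad)       # None: the parameters received no gradient
print(out.grad is not None)  # True: the gradient stopped at the detached leaf
```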
s11
(SSSZZZ)
January 6, 2022, 6:08am
3
Hi, do you know how to solve this problem another way? Can I perform NumPy operations between the output of a network and the loss calculation?
Nope, to my understanding, if you want to backpropagate through them, they have to be torch operations. You should re-implement those functions using torch.
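As a sketch of what re-implementing in torch looks like: the contents of `function_1` are not shown in the post, so suppose (purely for illustration) it did a clip followed by a log in NumPy. Written with the equivalent torch ops, the graph stays intact and gradients reach the parameters:

```python
import torch
import torch.nn as nn

# Hypothetical torch version of function_1 (the original is not shown).
def function_1_torch(t):
    t = torch.clamp(t, min=1e-6)  # torch.clamp plays the role of np.clip
    return torch.log(t)           # torch.log plays the role of np.log

net = nn.Linear(4, 2)             # stand-in for the poster's net
x = torch.randn(3, 4)

out = net(x)                      # no .detach(), no .numpy()
new_output = function_1_torch(out)  # stays on the autograd graph
loss = new_output.sum()
loss.backward()

print(net.weight.grad is not None)  # True: gradients reach the parameters
```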
s11
(SSSZZZ)
January 6, 2022, 12:28pm
5
Ok. Thank you for your answer.
JZArray
(Jz Array)
January 6, 2022, 9:04pm
6
No, you can’t, but torch provides many operations analogous to NumPy’s, so such functions can usually be rewritten with torch ops.
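A few common correspondences, side by side (illustrative, not exhaustive; the torch versions are differentiable when applied to tensors with `requires_grad=True`):

```python
import numpy as np
import torch

a_np = np.array([1.0, -2.0, 3.0])
a_t = torch.tensor([1.0, -2.0, 3.0], requires_grad=True)

# np.clip        <-> torch.clamp
print(np.clip(a_np, 0.0, 2.0), torch.clamp(a_t, 0.0, 2.0))
# np.abs         <-> torch.abs
print(np.abs(a_np), torch.abs(a_t))
# np.mean        <-> torch.mean
print(a_np.mean(), a_t.mean())
# np.concatenate <-> torch.cat
print(np.concatenate([a_np, a_np]), torch.cat([a_t, a_t]))
```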