Hi Oktai15!
Thank you for your reply!
If I understood correctly, NoGradGuard should essentially emulate the behaviour of requires_grad=false, correct?
If so, no parameters should change when I run the following, right? That is, I should see the same bias before and after:
std::cout << "Biases before:\n" << policy->affine2->bias.data() << std::endl;
{
    torch::NoGradGuard no_grad;
    loss.backward();
    optimizer.step();
}
std::cout << "Biases after:\n" << policy->affine2->bias.data() << std::endl;
It turns out the biases still change. Do you know why?
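For reference, here is a minimal sketch of the same experiment in the Python API (which mirrors the C++ one); the variable names and the tiny one-parameter model are just placeholders for illustration. It reproduces what I'm seeing: even inside the guard, backward() still fills in gradients (the graph was recorded before the guard was entered), and step() happily applies them.

```python
import torch

# A single trainable parameter standing in for the bias of affine2.
w = torch.nn.Parameter(torch.ones(1))
opt = torch.optim.SGD([w], lr=0.1)

# The loss (and its graph) is built *outside* the guard.
loss = (2.0 * w).sum()

before = w.detach().clone()
with torch.no_grad():          # Python analogue of torch::NoGradGuard
    loss.backward()            # graph already exists, so grads are still computed
    opt.step()                 # step() just applies .grad; it ignores the guard
after = w.detach().clone()

print(before, after)           # the parameter changed despite the guard
```

So the guard only stops *new* operations from being recorded into the autograd graph; it doesn't prevent backward() on an existing graph or stop the optimizer from applying accumulated gradients.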