Partial derivative (torch.autograd.grad) within torch.no_grad()

Hello,

I have an HMC (Hamiltonian Monte Carlo) sampler, parametrised by neural networks.

After training, when performing sampling, I do not need to build a backward graph, which is why I would like to wrap my model in torch.no_grad() to speed up the procedure.

But at the same time, to perform a leapfrog step I have to compute the partial derivative of the log posterior, which depends only on the current coordinate (I can always manually set requires_grad=True to compute this derivative, but not within torch.no_grad()).

Is there a clever way to use torch.autograd.grad within torch.no_grad()?

Thank you!

You can use the torch.grad() context manager the same way you use torch.no_grad() to compute grads in that specific part.

Do you mean the decorator https://pytorch.org/docs/stable/autograd.html#torch.autograd.set_grad_enabled ?


Thank you, it works!

Oops, I thought it was torch.grad(), but yes, enable_grad() or set_grad_enabled(); either of them should work.
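
For anyone finding this later, here is a minimal sketch of the pattern: the sampler runs under torch.no_grad(), and torch.enable_grad() re-enables autograd locally just for the log-posterior derivative needed by the leapfrog step. The log_posterior function below is a toy standard-Gaussian example, not the original poster's model.

```python
import torch

def log_posterior(q):
    # Toy log posterior (standard Gaussian, up to a constant);
    # stands in for the neural-network-parametrised log posterior.
    return -0.5 * (q ** 2).sum()

def grad_log_posterior(q):
    # torch.enable_grad() re-enables autograd locally, even when the
    # caller is inside a torch.no_grad() block.
    with torch.enable_grad():
        q = q.detach().requires_grad_(True)
        logp = log_posterior(q)
        (grad,) = torch.autograd.grad(logp, q)
    return grad

# Sampling loop runs without building graphs, except in the helper above.
with torch.no_grad():
    q = torch.randn(3)
    g = grad_log_posterior(q)
    # For this Gaussian log posterior, the gradient is -q.
```

Everything outside the torch.enable_grad() block still avoids graph construction, so the rest of the HMC update keeps the no_grad speedup.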