Hello,
I have an HMC (Hamiltonian Monte Carlo) sampler that is parametrised by neural networks.
After training, when performing sampling, I do not need to build a backward graph for the model, which is why I would like to run the model under torch.no_grad() to speed up the procedure.
At the same time, to perform a leapfrog step I have to compute the partial derivative of the log posterior with respect to the current coordinate (I can always set requires_grad=True manually to compute this derivative, but that does not work inside torch.no_grad()).
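For concreteness, here is a minimal sketch of the situation; the names `log_posterior`, `q`, `p`, and `step_size` are just placeholders for my actual setup:

```python
import torch

def log_posterior(q):
    # placeholder for the actual (network-parametrised) log posterior
    return -0.5 * (q ** 2).sum()

def grad_log_posterior(q):
    # the gradient I need inside the leapfrog step
    q = q.detach().requires_grad_(True)
    return torch.autograd.grad(log_posterior(q), q)[0]

q = torch.randn(10)          # position
p = torch.randn(10)          # momentum
step_size = 0.1

with torch.no_grad():  # speeds up the model's forward passes
    # half-step for the momentum -- this fails, because under no_grad
    # the forward pass of log_posterior builds no graph, so
    # torch.autograd.grad raises an error along the lines of
    # "element 0 of tensors does not require grad and does not have a grad_fn"
    p = p + 0.5 * step_size * grad_log_posterior(q)
```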
Is there a clever way to use torch.autograd.grad within torch.no_grad()?
Thank you!