Mixing stochastic gradients with autograd

Hi there,

I am trying to implement Algorithm 1 of this paper, where I need to evaluate a gradient composed of two terms. The first, g_\theta^{mod}, is the gradient of the model's log-likelihood, which I can compute with the usual .backward(). The second, g_\theta^{ent}, is the gradient of the entropy, which is given by a stochastic estimate based on Hamiltonian Monte Carlo, so I cannot obtain it through a backward pass. How can I apply the combined gradient to my model?
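To make it concrete, here is a minimal sketch of what I am attempting. The model, loss, and the entropy-gradient placeholder are all stand-ins for the real ones (the actual g_\theta^{ent} would come from the HMC estimator, not from torch.randn_like). My current idea is to run .backward() for the log-likelihood term and then manually accumulate the stochastic entropy gradient into each parameter's .grad before the optimizer step, but I am not sure this is the right approach:

```python
import torch

# Toy stand-in for the real model (hypothetical).
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 4)
y = torch.randn(8, 1)

opt.zero_grad()

# g_theta^mod: differentiable log-likelihood term, handled by autograd.
# MSE here is just a placeholder for the actual negative log-likelihood.
loss_mod = torch.nn.functional.mse_loss(model(x), y)
loss_mod.backward()  # fills each param.grad with g_theta^mod

# g_theta^ent: stochastic estimate computed outside autograd.
# Random tensors stand in for the HMC-based entropy-gradient estimate.
with torch.no_grad():
    for p in model.parameters():
        g_ent = torch.randn_like(p)  # placeholder for the HMC estimate
        p.grad.add_(g_ent)           # accumulate into .grad before stepping

opt.step()  # applies the combined gradient g_theta^mod + g_theta^ent
```

Is manually adding into .grad under torch.no_grad() like this the intended way to mix an autograd-computed gradient with an externally estimated one, or is there a cleaner mechanism?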

Thanks in advance.