Workaround for "backward_input can only be called in training mode"

RuntimeError: backward_input can only be called in training mode

I have an evaluation method, annealed importance sampling, that uses the model and needs to compute gradients in a function like the one below, where U(z) calls a portion of the model. Since this is an evaluation method, I call model.eval() before running it. Is there a workaround for this error? And is this really the intended behavior? I would have expected eval mode to only prevent things like optimizer.step() updating the parameters, not to forbid computing gradients altogether.

    def grad_U(z):
        # grad w.r.t. outputs; mandatory in this case
        grad_outputs = torch.ones(B).type(mdtype)
        grad = torchgrad(U(z), z, grad_outputs=grad_outputs)[0]
        # clip by value (note: do NOT re-wrap in torch.tensor here,
        # that would detach the result from the graph)
        grad = torch.clamp(grad, -B * z_size * 100, B * z_size * 100)
        return grad
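For reference, one workaround I have seen suggested is to disable cuDNN locally while computing the gradient, assuming the error comes from cuDNN's RNN backward only supporting training mode. A minimal sketch with a stand-in nn.RNN (the model, shapes, and names here are hypothetical, not from my actual code):

```python
import torch
import torch.nn as nn

# Stand-in for the real model: a small RNN, since the error originates
# from cuDNN's RNN backward pass, which only supports training mode.
rnn = nn.RNN(input_size=4, hidden_size=4, batch_first=True)
rnn.eval()  # evaluation mode, as in the question

x = torch.randn(2, 3, 4, requires_grad=True)

# Workaround: disable cuDNN just for this region so PyTorch falls back to
# the native RNN implementation, which supports backward in eval mode.
with torch.backends.cudnn.flags(enabled=False):
    out, _ = rnn(x)
    grad = torch.autograd.grad(out.sum(), x)[0]

print(grad.shape)  # gradient w.r.t. the input; the model stays in eval mode
```

The context manager restores the cuDNN setting on exit, so the rest of the evaluation still benefits from cuDNN.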

Thanks in advance