Evidence lower bound (ELBO) loss function and its gradient in PyTorch

How can I implement the evidence lower bound (ELBO) loss function and its gradient in PyTorch? So far I have been using the KL divergence as follows:

import torch
from torch import nn, autograd

# KL divergence between two log-densities (log_target=True).
# size_average is deprecated; use reduction='sum' instead.
loss = nn.KLDivLoss(reduction='sum', log_target=True)
out  = loss(Gaussian.log_prob(x), Gaussian.log_prob(xtrue))  # x needs requires_grad=True
gr   = autograd.grad(outputs=[out], inputs=[x])[0]

How can I implement the ELBO and its gradient in the same way?
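For context, here is a minimal sketch of one common form of the ELBO, the VAE objective with a diagonal-Gaussian posterior q(z|x), a standard-normal prior p(z), and a Gaussian likelihood. The names `mu`, `log_var`, and the trivial "decoder" are placeholders I made up for illustration, not part of any particular API; the gradient comes from `autograd.grad` via the reparameterisation trick:

```python
import torch


def elbo(x, x_recon, mu, log_var):
    # Reconstruction term E_q[log p(x|z)]: Gaussian log-likelihood
    # up to an additive constant, i.e. negative squared error.
    recon = -0.5 * ((x - x_recon) ** 2).sum()
    # KL(q(z|x) || p(z)) in closed form for two diagonal Gaussians.
    kl = -0.5 * (1 + log_var - mu ** 2 - log_var.exp()).sum()
    return recon - kl


torch.manual_seed(0)
mu = torch.zeros(4, requires_grad=True)
log_var = torch.zeros(4, requires_grad=True)

# Reparameterisation trick: z = mu + sigma * eps, so gradients
# flow through mu and log_var rather than through the sampling.
eps = torch.randn(4)
z = mu + (0.5 * log_var).exp() * eps

x = torch.randn(4)
x_recon = z  # trivial stand-in "decoder" just for this sketch

# Maximising the ELBO == minimising its negative.
loss = -elbo(x, x_recon, mu, log_var)
grad_mu, grad_log_var = torch.autograd.grad(loss, [mu, log_var])
```

In a real model `x_recon` would come from a decoder network, and the same negative-ELBO loss would be passed to `loss.backward()` inside a training loop instead of calling `autograd.grad` directly.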