Custom Loss KL-divergence Error

That should work; just remember to zero the gradients in your training loop, since PyTorch accumulates them across backward passes.
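A minimal sketch of where the `zero_grad()` call belongs, assuming a standard PyTorch setup with `nn.KLDivLoss` (which expects log-probabilities as input and probabilities as target by default); the model, data, and hyperparameters here are placeholders, not your actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical minimal setup, just to show the loop structure.
model = nn.Linear(10, 5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.KLDivLoss(reduction="batchmean")  # input: log-probs, target: probs

x = torch.randn(8, 10)
target = torch.softmax(torch.randn(8, 5), dim=1)  # targets as probabilities

for epoch in range(3):
    optimizer.zero_grad()                          # clear grads from the previous step
    log_probs = F.log_softmax(model(x), dim=1)     # KLDivLoss wants log-probabilities
    loss = criterion(log_probs, target)
    loss.backward()                                # accumulates gradients into .grad
    optimizer.step()
```

Without the `zero_grad()` call, each `backward()` would add to the gradients left over from the previous iteration instead of computing fresh ones.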

Best regards

Thomas