Can I make the KL divergence loss go below 1e-8?

Hi!
I'm using a neural network to approximate a Gaussian distribution.
The network outputs the mean and the log standard deviation, and the loss is the KL divergence.
After many iterations the loss (KL divergence) is still above 1e-8, although in some iterations it is printed as 0.
I want the loss to be as close to 0 as possible.
Is there anything I can do to make the loss decrease further, say down to around 1e-16?
(Setting the print precision to 16 didn't help. -_-)
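For reference, here is a minimal sketch of the kind of setup described above; the helper name `gaussian_kl`, the batch shapes, and the standard-normal target are illustrative assumptions, not the original code:

```python
import torch

def gaussian_kl(mu, log_std, target_mu, target_log_std):
    # Closed-form KL( N(mu, sigma^2) || N(m, s^2) ) for diagonal Gaussians,
    # with sigma = exp(log_std). All arguments are tensors of the same shape.
    var = (2.0 * log_std).exp()
    target_var = (2.0 * target_log_std).exp()
    kl = (target_log_std - log_std
          + (var + (mu - target_mu) ** 2) / (2.0 * target_var)
          - 0.5)
    return kl.sum(dim=-1).mean()

# Toy usage against a standard normal target: the KL here is exactly 0.
mu = torch.zeros(8, 2, requires_grad=True)
log_std = torch.zeros(8, 2, requires_grad=True)
loss = gaussian_kl(mu, log_std, torch.zeros(8, 2), torch.zeros(8, 2))
loss.backward()
```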

If you use FloatTensor (single precision), 1e-8 is already below the relative precision of the format (machine epsilon is about 1.2e-7), so at that point most of the gradient computations are just noise. You will need DoubleTensor if you want more precision.
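For example, one way to run in double precision (a sketch with a toy linear network standing in for the real model; `torch.set_default_dtype` and `.double()` are the standard PyTorch calls for this):

```python
import torch
import torch.nn as nn

# Option 1: make float64 the default for newly created tensors and modules.
torch.set_default_dtype(torch.float64)

net = nn.Linear(4, 2)      # parameters are created as float64
x = torch.randn(8, 4)      # float64 input

# Option 2: convert an existing float32 model and its inputs instead.
# net = net.double()
# x = x.double()

mu, log_std = net(x).chunk(2, dim=-1)
print(mu.dtype)            # torch.float64
```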


Thanks! That helped. The loss decreased to about 1e-10 after I switched to DoubleTensor.