I’ve never used PyTorch’s KLDivLoss. What should input and target be? The docs say
the function expects the first argument, input, to be the output of the model (e.g. a neural network) and the second, target, to be the observations in the dataset
But what exactly are those? Probability densities under different distributions?
The model output should be log-probabilities (e.g. the output of F.log_softmax(logits, dim=-1)), while the targets are probabilities by default, or also log-probabilities if log_target=True is set.
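To make the convention concrete, here is a minimal pure-Python sketch of what KLDivLoss with reduction='batchmean' and the default log_target=False computes: the pointwise term is target * (log(target) - input), summed over all elements and divided by the batch size. The function name is mine, not part of the PyTorch API:

```python
import math

def kl_div_batchmean(log_probs, target_probs):
    """Sketch of nn.KLDivLoss(reduction='batchmean') semantics:
    input rows are log-probabilities, target rows are probabilities."""
    batch_size = len(log_probs)
    total = 0.0
    for log_p_row, q_row in zip(log_probs, target_probs):
        for log_p, q in zip(log_p_row, q_row):
            if q > 0:  # convention: 0 * log 0 contributes 0
                total += q * (math.log(q) - log_p)
    return total / batch_size

# input: log-probabilities from the model (as from F.log_softmax)
log_probs = [[math.log(0.7), math.log(0.2), math.log(0.1)]]
# target: plain probabilities (the default, log_target=False)
target = [[0.5, 0.3, 0.2]]
print(kl_div_batchmean(log_probs, target))
```

The loss is zero exactly when exp(input) equals target row by row, which is a quick sanity check for whether the two arguments are in the right slots.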
I’m using KLDivLoss with the batchmean reduction in a Variational Autoencoder example. In this example, mu and std are output parameters of my model, and eps is a sample from the standard normal distribution that I’m trying to approximate. In my experiments on MNIST, I’ve noticed that some of the sampled values land quite far from the standard-normal mean. Am I doing something wrong with the parameters of the loss function?
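For context on the quantity involved here: the KL term a VAE penalizes between a diagonal Gaussian posterior N(mu, std^2) and the standard normal prior N(0, 1) also has a well-known closed form. This sketch (plain Python, one latent dimension; the helper name is mine) shows that term, which is zero exactly when mu = 0 and std = 1:

```python
import math

def gaussian_kl_to_standard_normal(mu, std):
    """Closed-form KL(N(mu, std^2) || N(0, 1)) for one latent
    dimension: 0.5 * (mu^2 + std^2 - 1) - log(std).
    A VAE's KL term sums this over latent dimensions."""
    return 0.5 * (mu**2 + std**2 - 1.0) - math.log(std)

# Matches the prior exactly, so the KL term vanishes:
print(gaussian_kl_to_standard_normal(0.0, 1.0))  # 0.0
```

If the training objective is driving mu toward 0 and std toward 1, individual reparameterized samples mu + std * eps can still land a few units from 0, since eps itself is a standard normal draw.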