What does KLDivLoss expect as input?

I’ve never used PyTorch’s KLDivLoss. What should input and target be? The docs say:

this function expects the first argument, input, to be the output of the model (e.g. the neural network) and the second, target, to be the observations in the dataset

But what exactly should those be? Probability densities under different distributions?

The model output should represent log-probabilities (e.g. the output of F.log_softmax(logits)), while the targets are probabilities by default, or also log-probabilities if log_target=True is used.
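
For example, a minimal sketch (the tensor names here are just placeholders, not anything from your code):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    criterion = nn.KLDivLoss(reduction='batchmean')  # default: log_target=False

    logits = torch.randn(8, 10)         # hypothetical model outputs
    target_logits = torch.randn(8, 10)  # hypothetical target scores

    log_input = F.log_softmax(logits, dim=1)  # input: log-probabilities
    target = F.softmax(target_logits, dim=1)  # target: probabilities (default)

    loss = criterion(log_input, target)

    # equivalently, with log-space targets:
    criterion_log = nn.KLDivLoss(reduction='batchmean', log_target=True)
    loss_log = criterion_log(log_input, F.log_softmax(target_logits, dim=1))

Both calls compute the same value; log_target=True just tells the criterion that the target is already in log-space, which avoids exponentiating and re-logging it.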

I don’t know if I’m using it correctly.

    sigma = torch.exp(std)              # `std` holds log-standard-deviations
    z = mu + sigma * eps                # reparameterization: z ~ N(mu, sigma^2)

    kl_input = F.log_softmax(z, dim=0)    # log-softmax across dim 0
    kl_target = F.log_softmax(eps, dim=0)

    self.kl = self.kl_lossfn(kl_input, kl_target)

I’m using KLDivLoss with the batchmean reduction in a Variational Autoencoder example. Here, mu and std are output parameters of my model, and eps is a sample from the standard normal distribution that I’m trying to approximate. In my experiments with MNIST, I’ve noticed that some of the sampled values end up quite far from the standard normal’s mean. Am I doing something wrong with the parameters of the loss function?

The usage looks alright, assuming you created the criterion with log_target=True.
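
Something like this when constructing the loss (a sketch; the attribute name matches your snippet):

    import torch.nn as nn

    # both kl_input and kl_target in your snippet come from F.log_softmax,
    # i.e. they are log-probabilities, so the target must be flagged as log-space
    self.kl_lossfn = nn.KLDivLoss(reduction='batchmean', log_target=True)

Without log_target=True, the criterion would interpret the log-probabilities in kl_target as plain probabilities and silently compute the wrong value.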