I have an output of a NN that is supposed to follow a certain (normal, in this case) distribution. From what I understand, KLDivLoss measures the distance between two distributions, so I guess this is what I need to use. However, my outputs are not probabilities, and I think KLDivLoss needs (log-)probabilities as input. You could consider the output to be samples from a distribution, and I want this distribution to be Gaussian (or maybe something else later on).
So my question is: is there a function in PyTorch I can use to get the distribution (/probabilities) from the samples, so that I can use KLDivLoss and still backpropagate through it?
Using a histogram came to mind first, but from what I found, torch.histc is not able to backpropagate…
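This is a minimal snippet showing what I mean — trying to backpropagate through torch.histc raises a RuntimeError, because no gradient flows through the binning:

```python
import torch

# Pretend this is the network output we want to be Gaussian
x = torch.randn(1000, requires_grad=True)

try:
    # torch.histc is not differentiable, so this either fails
    # outright or produces a histogram with no grad_fn
    h = torch.histc(x, bins=20, min=-4.0, max=4.0)
    h.sum().backward()
except RuntimeError as e:
    print("histc does not support autograd:", e)
```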
Would anyone know a good approach to this?
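One idea I had (not sure if it's the right approach): instead of building a histogram, fit a Normal to the samples via their empirical mean and std, which are both differentiable, and use the closed-form KL divergence from torch.distributions against the target Normal. The function name `gaussian_fit_loss` and the target parameters are just my own placeholders:

```python
import torch
from torch.distributions import Normal, kl_divergence

def gaussian_fit_loss(samples, target_mu=0.0, target_sigma=1.0):
    # Fit a Normal to the samples via empirical mean/std; both ops
    # are differentiable, so gradients flow back to the network.
    fitted = Normal(samples.mean(), samples.std())
    target = Normal(torch.tensor(target_mu), torch.tensor(target_sigma))
    # Closed-form KL between two Normals, no histogram needed
    return kl_divergence(fitted, target)

# Stand-in for a network output that is *not* standard normal
out = (torch.randn(256) * 2.0 + 1.0).requires_grad_()
loss = gaussian_fit_loss(out)
loss.backward()
print(out.grad is not None)  # True
```

This only matches the first two moments, so it would not generalize to an arbitrary target distribution — but maybe it's enough for the Gaussian case?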