How to get the output probability distribution?

I would like to know if it’s possible to get the equivalent of scikit-learn’s predict_proba() (the function that returns class probabilities from a fitted model) from a neural network in PyTorch.

I basically need the probabilities to plot ROC and precision-recall curves. But since I’m using the train_model() function from this tutorial https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html , I’m not sure how to get the probability distribution.

import torch

output = net(image)
print(output) # raw logits from the network, not probabilities

sm = torch.nn.Softmax(dim=1) # dim=1 is the class dimension
probabilities = sm(output)
print(probabilities) # converted to probabilities that sum to 1

Does the code above make sense for getting the probabilities? I do obtain probability-looking values when I use this block, but I want to know whether it is theoretically sound or whether there is an error in the reasoning. I ask because the network was trained with a cross-entropy loss criterion, and I’m not sure if it makes sense to stack a softmax on top of its output.
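
For context, this is roughly how I plan to use those probabilities for the curves. It’s only a sketch: val_loader is a placeholder name for my validation DataLoader, and I’m assuming scikit-learn is available for the curve computations.

import torch
from sklearn.metrics import roc_curve, precision_recall_curve

net.eval()
all_probs, all_labels = [], []
with torch.no_grad():
    for images, labels in val_loader:  # placeholder validation loader
        logits = net(images)
        probs = torch.softmax(logits, dim=1)[:, 1]  # probability of class 1
        all_probs.append(probs.cpu())
        all_labels.append(labels.cpu())

y_score = torch.cat(all_probs).numpy()
y_true = torch.cat(all_labels).numpy()

fpr, tpr, _ = roc_curve(y_true, y_score)
precision, recall, _ = precision_recall_curve(y_true, y_score)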

Are you using binary classification? I think it would be a good idea to make the last layer something like nn.Linear(1000, 1) and then use a sigmoid instead of a softmax. Then the value you get is directly the probability of the positive class, and you can draw the ROC curve by sweeping different thresholds.
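
Something like this minimal sketch, where the 1000-dim features are just a placeholder for whatever your backbone outputs:

import torch
import torch.nn as nn

head = nn.Linear(1000, 1)                 # single logit for the positive class

features = torch.randn(4, 1000)           # stand-in for backbone features
logits = head(features)                   # shape (4, 1)
probs = torch.sigmoid(logits).squeeze(1)  # one probability per sample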

Yes, I’m using binary classification. But with the code I provided above I already get a probability distribution over my 2 classes, and my final layer is already nn.Linear(1024, 2); I just train the network with a cross-entropy criterion… My doubt is whether it makes sense to add a softmax on top of an output that was trained against a cross-entropy loss.
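
To make my setup concrete, here is a minimal sketch of what I have now (the 1024-dim features are a stand-in for my real backbone activations):

import torch
import torch.nn as nn

fc = nn.Linear(1024, 2)           # my final layer: two logits
criterion = nn.CrossEntropyLoss() # applies log-softmax internally

features = torch.randn(4, 1024)   # stand-in for penultimate activations
targets = torch.tensor([0, 1, 1, 0])

logits = fc(features)             # raw logits go straight into the loss
loss = criterion(logits, targets)

probs = torch.softmax(logits, dim=1)  # softmax only when I need probabilities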

If you have binary classification (not (dog or cat), but (dog or not-dog)), I don’t think softmax is the logical way. This post may help you. There are dedicated binary classification loss functions such as torch.nn.BCEWithLogitsLoss.
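
Here is a rough sketch of that approach; the feature size and names are placeholders, not your actual model:

import torch
import torch.nn as nn

head = nn.Linear(1024, 1)          # single logit per sample
criterion = nn.BCEWithLogitsLoss() # fuses sigmoid + binary cross-entropy

features = torch.randn(4, 1024)
targets = torch.tensor([0., 1., 1., 0.])  # float targets for BCE

logits = head(features).squeeze(1)        # shape (4,)
loss = criterion(logits, targets)

probs = torch.sigmoid(logits)             # sigmoid only at inference time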