I’m doing binary classification; however, I used categorical cross-entropy loss rather than binary cross-entropy to train my model (I believe this is okay, as the results appear normal).
During inference with a saved model, however, I would like to obtain class probabilities for the model’s outputs.
I believe the model outputs logits first. I then pass them through a softmax layer before selecting the top probability/predicted class.
Does this look correct?
outputs = model(inputs)  # obtain logits
soft_outputs = torch.nn.functional.softmax(outputs, dim=1)  # convert logits to probabilities
top_p, top_class = soft_outputs.topk(1, dim=1)  # select top probability and predicted class
Yes, your usage of softmax looks correct, but note that topk will return the same results on the logits or the probabilities, since softmax does not change the ordering of the logits.
Yes, torch.argmax on the logit outputs will return the same predicted class indices as when it’s applied to the probabilities (i.e. the softmax outputs). However, if you want to “see” the probabilities, then you can of course apply the additional softmax. Just make sure not to pass these probabilities to e.g. nn.CrossEntropyLoss during training, as this criterion expects logits.
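A minimal sketch demonstrating this, with made-up logits for a batch of 4 samples and 2 classes (the tensor values are illustrative, not from the original post):

```python
import torch
import torch.nn.functional as F

# Made-up logits: batch of 4 samples, 2 classes
logits = torch.tensor([[2.0, -1.0],
                       [0.5, 1.5],
                       [-0.3, -0.2],
                       [3.0, 0.1]])

probs = F.softmax(logits, dim=1)  # each row sums to 1

# argmax on logits and on probabilities agree, since softmax
# is monotonically increasing within each row
assert torch.equal(logits.argmax(dim=1), probs.argmax(dim=1))

# topk(1) likewise selects the same class either way
top_p, top_class = probs.topk(1, dim=1)
print(top_class.squeeze(1))  # tensor([0, 1, 1, 0])
```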
Yes, the size of the logits and probabilities corresponds to the number of classes, i.e. [batch_size, nb_classes, *].
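To illustrate the expected shapes, here is a sketch using a hypothetical two-class linear model (the model and sizes are assumptions for demonstration only):

```python
import torch
import torch.nn as nn

# Hypothetical setup: 4 samples, 10 input features, 2 classes
batch_size, in_features, nb_classes = 4, 10, 2
model = nn.Linear(in_features, nb_classes)

inputs = torch.randn(batch_size, in_features)
logits = model(inputs)
probs = torch.softmax(logits, dim=1)

print(logits.shape)  # torch.Size([4, 2]) -> [batch_size, nb_classes]
print(probs.shape)   # same shape; softmax only normalizes along dim=1
```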