I’m doing binary classification, but I used categorical cross-entropy loss rather than binary cross-entropy to train my model (I believe this is fine, as the results appear normal).

During inference with a saved model, however, I would like to obtain class probabilities for the model’s outputs.

I believe the model outputs logits first. I then pass them through a softmax layer before selecting the top probability/predicted class.

Does this look correct?

```
outputs = model(inputs)  # raw logits from the model
soft_outputs = torch.nn.functional.softmax(outputs, dim=1)  # convert logits to probabilities
top_p, top_class = soft_outputs.topk(1, dim=1)  # top probability and its predicted class
```

Yes, your usage of `softmax` looks correct, but note that the `topk` results will be the same whether you use the logits or the probabilities, since `softmax` does not change the order of the logits.
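A quick way to convince yourself of this: since `softmax` is monotonic along each row, `topk` picks the same class indices from the logits as from the probabilities. A minimal sketch (the tensor values here are arbitrary random logits, not from the original model):

```python
import torch

torch.manual_seed(0)

# Hypothetical logits for a batch of 4 samples and 2 classes
logits = torch.randn(4, 2)
probs = torch.softmax(logits, dim=1)

# topk on the logits and on the probabilities selects the same classes,
# because softmax preserves the ordering within each row
_, top_class_from_logits = logits.topk(1, dim=1)
top_p, top_class_from_probs = probs.topk(1, dim=1)

assert torch.equal(top_class_from_logits, top_class_from_probs)
```

The only difference is that `top_p` taken from the probabilities lies in `[0, 1]`, which is what you want for an interpretable confidence score.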


Ah ok, yes, I basically wanted to know the most likely predicted class and its associated probability score.

It appears that the above predicted class is the same as when I predict the class label like this:

```
_, pred = torch.max(outputs, 1)
```

From my understanding, the size of the logits is related to the probability score…

Yes, `torch.argmax` on the logit outputs will return the same predicted class indices as when it’s applied to the probabilities (i.e. the `softmax` outputs). However, if you want to “see” the probabilities, you can of course apply the additional `softmax`. Just make sure not to pass these probabilities to e.g. `nn.CrossEntropyLoss` during training, as this criterion expects logits.
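To illustrate the training-time caveat, a minimal sketch (the tensors and targets are made up for the example): pass the raw logits to `nn.CrossEntropyLoss`, which applies `log_softmax` internally, and compute `softmax` only when you want to inspect probabilities.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical raw logits straight from a 2-class model, plus dummy targets
logits = torch.randn(4, 2, requires_grad=True)
targets = torch.tensor([0, 1, 1, 0])

criterion = nn.CrossEntropyLoss()

# Correct: feed logits to the criterion (it applies log_softmax internally)
loss = criterion(logits, targets)
loss.backward()

# For inspection only: probabilities via softmax.
# Do NOT feed these back into nn.CrossEntropyLoss.
probs = torch.softmax(logits.detach(), dim=1)
```

Passing softmax outputs to `nn.CrossEntropyLoss` would apply `log_softmax` a second time, silently distorting the loss.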

Yes, the size of the logits and probabilities corresponds to the number of predicted classes, i.e. `[batch_size, nb_classes, *]`.
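For the plain classification case the trailing `*` dimensions are absent, so both tensors are simply `[batch_size, nb_classes]`. A small sketch with a hypothetical linear classifier (the layer sizes are assumptions for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical 2-class classifier over 8-dimensional inputs
model = nn.Linear(8, 2)
inputs = torch.randn(16, 8)  # batch_size = 16

outputs = model(inputs)                    # logits: [batch_size, nb_classes]
probs = torch.softmax(outputs, dim=1)      # same shape, rows sum to 1

# Both are torch.Size([16, 2])
```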
