Since your model already has a softmax layer at the end, you don’t have to use F.softmax
on top of it. The outputs of your model are already “probabilities” of the classes.
However, your training might not work, depending on your loss function.
For a classification use case you would most likely use a nn.LogSoftmax layer with nn.NLLLoss as the criterion, or raw logits (i.e. no non-linearity at the end of the model) with nn.CrossEntropyLoss.
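Both options compute the same loss, since nn.CrossEntropyLoss applies log_softmax internally before nn.NLLLoss. Here is a minimal sketch showing the equivalence; the batch size, number of classes, and targets are made up for the example:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Made-up example: batch of 4 samples, 3 classes
logits = torch.randn(4, 3)
targets = torch.tensor([0, 2, 1, 2])

# Option 1: nn.LogSoftmax as the last layer + nn.NLLLoss
log_probs = nn.LogSoftmax(dim=1)(logits)
loss1 = nn.NLLLoss()(log_probs, targets)

# Option 2: raw logits + nn.CrossEntropyLoss
loss2 = nn.CrossEntropyLoss()(logits, targets)

print(torch.allclose(loss1, loss2))  # True
```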
As you are currently using nn.Softmax, you would need to call torch.log on the output and feed it to nn.NLLLoss, which might be numerically unstable.
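To see why, here is a small made-up example: for sufficiently extreme logits the softmax output underflows to zero, so torch.log returns -inf, while log_softmax stays finite:

```python
import torch

# Made-up extreme logits to illustrate the instability
logits = torch.tensor([[200.0, 0.0]])

# softmax underflows to [1., 0.], so the log blows up
print(torch.log(torch.softmax(logits, dim=1)))  # tensor([[0., -inf]])

# log_softmax computes the same quantity in a stable way
print(torch.log_softmax(logits, dim=1))         # tensor([[0., -200.]])
```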
I would recommend using the raw logits + nn.CrossEntropyLoss for training, and if you really need to see the probabilities, just call F.softmax on the output, as described in the other post.
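In code it could look like this (a minimal sketch; the model, shapes, and optimizer are just placeholders for your setup):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder model: the last layer returns raw logits, no nn.Softmax
model = nn.Linear(10, 3)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(4, 10)
y = torch.tensor([0, 2, 1, 2])

# Training: feed the raw logits directly to the criterion
optimizer.zero_grad()
logits = model(x)
loss = criterion(logits, y)
loss.backward()
optimizer.step()

# Inference: apply F.softmax only when you need the probabilities
with torch.no_grad():
    probs = F.softmax(model(x), dim=1)
```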