There is no softmax layer in this code, but in the original paper the last layer is a softmax layer. Why is that?
If I use a pretrained VGG model to recognize a picture, do I need to add a softmax layer at the end of the model to get each class's probability?
torchvision models were trained using nn.CrossEntropyLoss, which combines nn.LogSoftmax and nn.NLLLoss internally.
That's why there is no softmax layer at the end of the model.
So during training you only need nn.CrossEntropyLoss on the raw logits. If you want class probabilities at inference time, apply torch.softmax to the model's output yourself.
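A minimal sketch of both points, using made-up logits in place of a real VGG forward pass (the tensor values here are arbitrary, just for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical raw logits for a batch of 2 samples over 4 classes
logits = torch.tensor([[1.0, 2.0, 0.5, -1.0],
                       [0.2, 0.1, 3.0, 0.0]])
target = torch.tensor([1, 2])

# nn.CrossEntropyLoss applied directly to raw logits ...
ce = nn.CrossEntropyLoss()(logits, target)

# ... matches nn.NLLLoss applied to log-softmaxed logits,
# which is why the model itself needs no softmax layer.
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), target)
assert torch.allclose(ce, nll)

# At inference time, apply softmax yourself to get probabilities
probs = F.softmax(logits, dim=1)
assert torch.allclose(probs.sum(dim=1), torch.ones(2))  # rows sum to 1
```

The same pattern applies to a pretrained VGG: feed the image through the model, then call `torch.softmax(output, dim=1)` on the returned logits to read off per-class probabilities.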