I tried fine-tuning the InceptionV3 model by following this page: https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html?highlight=fine%20tuning#
But I couldn't see an activation layer like softmax. Is this normal, or do torchvision's models simply not have an activation layer?
In Keras, I always put an activation layer at the end, so this seems strange to me.
Using an activation function there is optional, though I also consider adding an activation function after the last layer to be the natural way.
You can wrap the InceptionV3 model in an nn.Sequential together with nn.Softmax() to get normalized outputs.
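For example, here is a minimal sketch of that idea, assuming the fine-tuning setup from the tutorial (the class count and the random input are just placeholders for illustration):

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained Inception v3 with both classifier heads replaced, as in the tutorial
# (num_classes = 10 is only a placeholder)
num_classes = 10
model = models.inception_v3(pretrained=True)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)
model.fc = nn.Linear(model.fc.in_features, num_classes)
model.eval()  # in eval mode the model returns plain logits (no aux output)

# Append a softmax to get normalized probabilities at inference time
inference_model = nn.Sequential(model, nn.Softmax(dim=1))

x = torch.randn(1, 3, 299, 299)  # Inception v3 expects 299x299 inputs
with torch.no_grad():
    probs = inference_model(x)   # each row sums to 1
```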
Just as a side note in case you are trying to fine-tune the model: the usual loss functions for classification expect log probabilities (nn.NLLLoss) or raw logits (no non-linearity + nn.CrossEntropyLoss), so you shouldn't feed softmax outputs into the loss. An nn.Softmax layer is still fine for getting the normalized probabilities, as @kenmikanmi suggested.
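A small sketch of the difference, using random tensors as stand-ins for model outputs and labels:

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 10)            # raw model outputs for a batch of 4
targets = torch.randint(0, 10, (4,))   # made-up class labels

# Option 1: raw logits + nn.CrossEntropyLoss (log-softmax is applied internally)
loss_ce = nn.CrossEntropyLoss()(logits, targets)

# Option 2: log probabilities + nn.NLLLoss
log_probs = torch.log_softmax(logits, dim=1)
loss_nll = nn.NLLLoss()(log_probs, targets)

print(torch.allclose(loss_ce, loss_nll))  # True: the two formulations match

# nn.Softmax is only needed when you want normalized probabilities, e.g. for predictions
probs = torch.softmax(logits, dim=1)
```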
Now I understand how this model learns labels in PyTorch.
Thanks for the answer.