Adding an activation function to the last layer is optional, but I'd also consider it a natural choice here.
You can append nn.Softmax() to the sequential-formed InceptionV3 model to get normalized outputs.
Just as a side note in case you are trying to fine tune the model:
the usual loss functions for classification expect either log probabilities (nn.LogSoftmax + nn.NLLLoss) or raw logits (no final non-linearity + nn.CrossEntropyLoss).
The nn.Softmax layer is still fine to get the normalized probabilities as @kenmikanmi suggested.
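To make the distinction concrete, here is a minimal sketch. It uses a plain nn.Linear as a stand-in for the InceptionV3 backbone (the real model would come from torchvision.models.inception_v3) so the example stays small; the point is only where nn.Softmax belongs relative to the loss.

```python
import torch
import torch.nn as nn

# Stand-in for the InceptionV3 classifier head; in practice you'd use
# torchvision.models.inception_v3(...) here.
backbone = nn.Linear(8, 3)

# Inference: append nn.Softmax to get normalized probabilities.
inference_model = nn.Sequential(backbone, nn.Softmax(dim=1))

x = torch.randn(4, 8)
probs = inference_model(x)
print(probs.sum(dim=1))  # each row sums to 1

# Fine-tuning: pass raw logits to nn.CrossEntropyLoss, which applies
# log-softmax internally, so no activation on the last layer.
criterion = nn.CrossEntropyLoss()
logits = backbone(x)
targets = torch.tensor([0, 1, 2, 0])
loss = criterion(logits, targets)
print(loss.item())
```

Applying nn.Softmax before nn.CrossEntropyLoss would squash the logits twice and slow training down, which is why the activation is left off during fine-tuning.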