I read the documentation of torch.nn.functional.softmax, but I am not very clear on the usage of the dim argument.
My output is of the following dimensions :
(batchsize, num_classes, length, breadth, height) - so it is a 5D output
I am working on a multi-class semantic segmentation problem, so I want the pixel-wise softmax over all classes for each corresponding pixel in (length, breadth, height). Should I use dim = 1, i.e. call it as torch.nn.functional.softmax(output, dim = 1)?
Please advise.
Thanks a lot!

torch.nn.functional.softmax(output, dim = 1) will yield an output where the probabilities for each pixel sum to 1 in the class dimension (dim=1).
This probability tensor can be used as a sanity check or for visualization purposes.
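A minimal sketch of this, assuming hypothetical shapes (batch size 2, 4 classes, 8×8×8 volume):

```python
import torch
import torch.nn.functional as F

# Hypothetical 5D model output: (batchsize, num_classes, length, breadth, height)
output = torch.randn(2, 4, 8, 8, 8)

# Softmax over the class dimension (dim=1)
probs = F.softmax(output, dim=1)

# At every spatial location, the class probabilities now sum to 1
print(torch.allclose(probs.sum(dim=1), torch.ones(2, 8, 8, 8)))  # True
```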

However, note that e.g. nn.CrossEntropyLoss expects raw logits as the model's output, since it internally applies F.log_softmax(output, dim=1) followed by nn.NLLLoss, so you shouldn't apply softmax manually before the loss.
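To illustrate (using the same hypothetical shapes as above), the raw logits go straight into the criterion, and the result matches the manual log_softmax + nll_loss composition:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical 5D logits and per-voxel class-index target
output = torch.randn(2, 4, 8, 8, 8)          # raw logits, no softmax applied
target = torch.randint(0, 4, (2, 8, 8, 8))   # class indices in [0, num_classes)

criterion = nn.CrossEntropyLoss()
loss = criterion(output, target)             # pass raw logits directly

# Equivalent to what CrossEntropyLoss does internally
manual = F.nll_loss(F.log_softmax(output, dim=1), target)
print(torch.allclose(loss, manual))  # True
```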


Alright, thanks a lot!