Get probability from detected class

I have a network like this for image classification; for now I have 2 classes:

from torch.nn import (Module, Sequential, Conv2d, BatchNorm2d, PReLU, MaxPool2d,
                      Flatten, Linear, BatchNorm1d, ReLU, Dropout)

class ActionNet(Module):

    def __init__(self, num_class=4):
        super(ActionNet, self).__init__()

        self.cnn_layer = Sequential(
            #conv1
            Conv2d(in_channels=1, out_channels=32, kernel_size=1, bias=False),
            BatchNorm2d(32),
            PReLU(num_parameters=32),
            MaxPool2d(kernel_size=3),
            #conv2
            Conv2d(in_channels=32, out_channels=64, kernel_size=1, bias=False),
            BatchNorm2d(64),
            PReLU(num_parameters=64),
            MaxPool2d(kernel_size=3),
            #flatten
            Flatten(),
            Linear(576, 128),
            BatchNorm1d(128),
            ReLU(inplace=True),
            Dropout(0.5),
            Linear(128, num_class)   # final layer returns raw logits, one per class
        )

    def forward(self, x):
        x = self.cnn_layer(x)
        return x

Then, after training my network, I predict an image using this code:

def predict_image(image):
    input = torch.from_numpy(image)
    input = input.unsqueeze(1)                      # add the channel dimension
    input = input.to(device)
    output = model(input)                           # raw logits
    index = output.data.cpu().numpy().argmax()      # index of the highest logit
    return index

How do I get the probabilities of all classes for the predicted image?

Hi,

This can be achieved by simply using nn.Softmax(). You can add it to your model definition to get the index and the probabilities at the same time, right after the final Linear(128, num_class) layer in cnn_layer.
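For instance, a minimal sketch of that idea; model_with_probs is a name I made up, and it simply appends Softmax on top of your unchanged ActionNet instead of editing cnn_layer itself:

import torch.nn as nn

# hypothetical wrapper: ActionNet still produces logits, Softmax turns them into probabilities
model_with_probs = nn.Sequential(
    ActionNet(num_class=2),   # existing network, outputs raw logits
    nn.Softmax(dim=1),        # dim=1 is the class dimension of the (batch, num_class) output
)

# probs = model_with_probs(input)   # shape (batch, num_class), each row sums to 1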

Or, let's say you already have the outputs (logits) from model(input) inside predict_image and you are looking for the probabilities there. You can achieve that with the same trick:

import torch.nn as nn

softmax = nn.Softmax(dim=1)   # dim=1 is the class dimension
output = model(input)
probs = softmax(output)
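For example, your predict_image could return both the index and the per-class probabilities; this is just a sketch reusing your device, model, and single-image input handling:

def predict_image(image):
    input = torch.from_numpy(image)
    input = input.unsqueeze(1)                    # add the channel dimension
    input = input.to(device)
    output = model(input)                         # raw logits
    probs = torch.softmax(output, dim=1)          # probability for every class
    index = probs.argmax(dim=1).item()            # assumes a single image, like your argmax
    return index, probs.detach().cpu().numpy()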

Best

Is it OK to use softmax like that if I'm using the CrossEntropyLoss function?

No; actually, softmax normalizes the logits to [0, 1] in such a way that all values sum to 1, so the output can be interpreted as probabilities. And cross entropy does this internally.
For instance, the output of nn.Linear can look like this for 2 classes:

output = [[-2.30, 8.46],
          [-0.56, 3.24],
          ...]

And after applying softmax:

output = [[0.1, 0.9],
          [0.3, 0.7],
          ...]
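The probability values above are illustrative rather than the exact softmax of those logits; you can compute the real values with torch.softmax and check that each row sums to 1:

import torch

logits = torch.tensor([[-2.30, 8.46],
                       [-0.56, 3.24]])
probs = torch.softmax(logits, dim=1)
print(probs)             # every entry lies in [0, 1]
print(probs.sum(dim=1))  # each row sums to 1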

Edit: the explanation above was based on another library and is partly wrong.
Thanks @timesler

No, you should avoid using CrossEntropyLoss on softmax-transformed values. CrossEntropyLoss expects raw logit values and calculates LogSoftmax followed by NLLLoss internally.

This is described in the CrossEntropyLoss documentation.

Typically, it’s better practice to do the softmax calculation outside your model forward method so that you can pass the logits directly to the loss function.
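For example, a minimal sketch of that pattern; the 28x28 input size is inferred from the Linear(576, 128) layer in the question, and the optimizer and dummy batch are placeholders:

import torch
import torch.nn as nn

model = ActionNet(num_class=2)              # forward returns raw logits
criterion = nn.CrossEntropyLoss()           # applies LogSoftmax + NLLLoss internally
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(8, 1, 28, 28)          # dummy grayscale batch
labels = torch.randint(0, 2, (8,))          # integer class targets, not one-hot

# training step: feed the logits straight into the loss
logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# inference: apply softmax only when you actually want probabilities
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(images), dim=1)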
