Testing and Confidence score of Network trained with nn.CrossEntropyLoss()

I have trained a network with the following structure:

Intent_LSTM(
(attention): Attention()
(embedding): Embedding(34601, 400)
(lstm): LSTM(400, 512, num_layers=2, batch_first=True, dropout=0.5)
(dropout): Dropout(p=0.5, inplace=False)
(fc): Linear(in_features=512, out_features=3, bias=True)
)

Now here are my loss function and optimizer:

lr=0.001

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=lr)

Now I want to test this trained network and also get the confidence score of the classification.
Here is my current implementation of the test function:

import numpy as np
import torch

def test_func(loader, model_current, criterion, folder_name):
    train_on_gpu = torch.cuda.is_available()
    labels_CM = []
    predictions_CM = []
    print_and_write(folder_name,"Test result for Test Data")
    test_losses = []  # track loss

    correct = 0
    total = 0

    model_current.eval()
    # iterate over test data
    for inputs, labels in loader:

        if (train_on_gpu):
            inputs, labels = inputs.cuda(), labels.cuda()

        # get predicted outputs
        output = model_current(inputs)

        # calculate loss
        test_loss = criterion(output.squeeze(), labels)
        test_losses.append(test_loss.item())

        # convert probabilities to integer
        pred = torch.round(output.squeeze())

        pred = pred.argmax(dim=1, keepdim=True)
        predictions_CM += list(pred.cpu().numpy())

        labels = labels.unsqueeze(dim=1)
        labels_CM += list(labels.cpu().numpy())

        correct += (pred.cpu() == labels.cpu()).sum()

    print(folder_name,"Test loss: {:.3f}".format(np.mean(test_losses)))

    test_acc = correct.item() / len(loader.dataset)
    print(folder_name,"Test accuracy: {:.3f}".format(test_acc))

Now my questions are as follows.

  1. Here pred is just the output from the fully connected layer of my network, without softmax (as required by the loss function). Is this (pred = pred.argmax(dim=1, keepdim=True)) the right way to get the predictions? Or should I pass the output from the network through a softmax layer and then do argmax?

  2. How do I get the confidence score? Should I pass the output from the network through a softmax layer and take the maximum probability as the confidence of the predicted class? Something like the sketch below is what I have in mind.
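
This is only a rough sketch of what I mean for question 2, reusing the names from my test function above and assuming output is the raw logit tensor of shape (batch, 3) coming out of the fc layer:

import torch
import torch.nn.functional as F

with torch.no_grad():
    output = model_current(inputs)       # raw logits from the fc layer, shape (batch, 3)
    probs = F.softmax(output, dim=1)     # convert logits to class probabilities
    confidence, pred = probs.max(dim=1)  # highest probability and its class index
    # argmax on the raw logits picks the same class, since softmax preserves ordering

Would taking confidence like this (the maximum softmax probability) be a reasonable confidence score, or is there a better way?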