Measuring uncertainty using Entropy

I have implemented the MC Dropout method in PyTorch. The main idea of this method is to set the dropout layers of the model to train mode, so that a different dropout mask is sampled on each forward pass. Below is my implementation of MC Dropout, showing how the predictions from the T forward passes are stacked together:
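The dropout-mode switch described above can be sketched as a small helper (the name `enable_mc_dropout` is mine, not part of any PyTorch API):

```python
import torch.nn as nn

def enable_mc_dropout(model):
    # put the whole model in eval mode first (freezes BatchNorm statistics, etc.)
    model.eval()
    # then switch only the Dropout modules back to train mode,
    # so a fresh dropout mask is sampled on every forward pass
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()
```

This keeps layers such as BatchNorm deterministic while dropout remains stochastic, which is exactly what MC Dropout needs.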

def mcdropout_test(batch_size, model):

    # set non-dropout layers (e.g. BatchNorm) to eval mode
    model.eval()

    # set dropout layers back to train mode so masks are re-sampled
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()

    test_loss = 0
    correct = 0
    n_samples = 0
    T = 100  # number of stochastic forward passes
    for images, labels in testloader:
        images = images.to(device)  # device defined elsewhere, e.g. torch.device('cuda')
        labels = labels.to(device)
        with torch.no_grad():
          output_list = []
          # getting outputs for T forward passes
          for i in range(T):
            output_list.append(torch.unsqueeze(model(images), 0))
        # stacking the T outputs and averaging over the first dimension
        output_mean = torch.cat(output_list, 0).mean(0)
        test_loss += F.nll_loss(F.log_softmax(output_mean, dim=1), labels, reduction='sum').item()  # sum up batch loss
        _, predicted = torch.max(output_mean, 1)
        correct += (predicted == labels).sum().item()  # sum up correct predictions
        n_samples += labels.size(0)

    acc = 100.0 * correct / n_samples
    print(f'the MC Dropout test accuracy is: {acc} %')

To measure uncertainty for classification tasks we need to calculate entropy, and I am trying to implement this in the code above. Should I calculate the entropy after the T forward passes? How exactly can I do that, and how should I choose the samples or classes (high/low entropy)?
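One way to compute this, after the T forward passes: average the per-pass softmax probabilities, then take the entropy H = -Σ_c p̄_c log p̄_c of the averaged distribution, one value per sample. A minimal sketch, assuming the T outputs are stacked raw logits of shape (T, batch, classes) as in the function above (the helper name `predictive_entropy` is mine):

```python
import torch
import torch.nn.functional as F

def predictive_entropy(outputs):
    # outputs: (T, batch, classes) raw logits from T stochastic forward passes
    probs = F.softmax(outputs, dim=2)   # per-pass class probabilities
    mean_probs = probs.mean(dim=0)      # average distribution over the T passes
    # H = -sum_c p_c * log(p_c), one entropy value per sample; eps avoids log(0)
    entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=1)
    return entropy
```

A uniform prediction over C classes gives the maximum entropy log(C) (most uncertain), while a one-hot prediction gives entropy near zero (most confident), so the entropy values can be ranked directly to find uncertain samples.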
The code for the `testloader`:

  #downloading the test set
  testset = torchvision.datasets.CIFAR10(root='./data', train=False,download=True, transform=test_transform)

  #loading the test set
  testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=4)
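As for choosing high/low-entropy samples: given a 1-D tensor of per-sample entropy values, `torch.topk` can pick out the indices of the most and least uncertain samples. A sketch, assuming the entropies have already been collected over the test set (the helper name `split_by_entropy` is mine):

```python
import torch

def split_by_entropy(entropies, k):
    # entropies: 1-D tensor with one predictive-entropy value per sample
    high = torch.topk(entropies, k).indices    # k most uncertain samples
    low = torch.topk(-entropies, k).indices    # k most confident samples
    return high, low
```

The high-entropy indices are the samples the model is least sure about (useful e.g. for inspection or active learning), while the low-entropy ones are its most confident predictions.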