Are these two functions identical?

I’ve found this mcdropout_test() function:

def mcdropout_test(model):
    model.train()  # train mode keeps dropout active, so the T forward passes differ
    test_loss = 0
    correct = 0
    T = 100
    for data, labels in test_loader:
        if args.cuda:
            data, labels = data.cuda(), labels.cuda()
        data, labels = Variable(data, volatile=True), Variable(labels)
        output_list = []
        for i in xrange(T):  # T stochastic forward passes per batch
            output_list.append(torch.unsqueeze(model(data), 0))
        output_mean = torch.cat(output_list, 0).mean(0)
        test_loss += F.nll_loss(F.log_softmax(output_mean), labels, size_average=False).data[0]  # sum up batch loss
        pred = output_mean.data.max(1, keepdim=True)[1]  # get the index of the max log-probability
        correct += pred.eq(labels.data.view_as(pred)).cpu().sum()

    test_loss /= len(test_loader.dataset)
    print('\nMC Dropout Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

I made some changes to mcdropout_test() to get mcdropout(), and I’m not completely sure whether the two functions behave the same:

def mcdropout():
    model.train()  # train mode keeps dropout active, so the T forward passes differ
    test_loss = 0
    correct = 0
    T = 100

    with torch.no_grad():
        for images, labels in testloader:
            images = images.to(device)
            labels = labels.to(device)
            output_list = []
            for i in range(T):  # T stochastic forward passes per batch
                output_list.append(torch.unsqueeze(model(images), 0))
            output_mean = torch.cat(output_list, 0).mean(0)
            test_loss += F.nll_loss(F.log_softmax(output_mean, dim=1), labels, reduction='sum').item()  # sum up batch loss
            _, predicted = torch.max(output_mean, 1)  # get the index of the max log-probability
            correct += (predicted == labels).sum().item()

        test_loss /= len(testloader.dataset)
        print('\nMC Dropout Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
            test_loss, correct, len(testloader.dataset),
            100. * correct / len(testloader.dataset)))
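As a side note, the same T-pass loop is what gives MC dropout an uncertainty estimate, not just an averaged prediction. Here is a minimal sketch of that idea (the helper name mc_predict is illustrative, not from the thread, and it averages softmax probabilities instead of raw logits, which is a common MC dropout variant):

    import torch
    import torch.nn.functional as F

    def mc_predict(model, x, T=100):
        model.train()  # keep dropout active so each pass samples a new mask
        with torch.no_grad():
            # shape: (T, batch, classes)
            probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(T)])
        # predictive mean and per-class standard deviation (uncertainty)
        return probs.mean(dim=0), probs.std(dim=0)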

Hi, could you properly indent the code? Both functions are evaluating the model.

The model.train() call doesn’t train anything by itself, since the function never does a backward pass or an optimizer step; it only switches the module into training mode.
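That mode switch is still what makes the T forward passes differ, though: in train mode every dropout layer samples a fresh mask on each call. A quick toy check (the small Sequential net here is just an illustration, not from the thread):

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
    x = torch.ones(1, 4)

    net.train()              # dropout active: outputs vary between calls
    print(net(x) - net(x))   # generally non-zero

    net.eval()               # dropout disabled: outputs are deterministic
    print(net(x) - net(x))   # exactly zero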

The second one is the better implementation.

I have adjusted the indents.

So, both functions are quite the same, except for the with torch.no_grad(): block in the second one; other than that, everything is the same, and the second function is the much better implementation. The no_grad() context reduces memory usage and speeds up computation, but you won’t be able to backprop through anything computed inside it.
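A tiny self-contained demonstration of that:

    import torch

    x = torch.randn(3, requires_grad=True)
    with torch.no_grad():
        y = x * 2            # no autograd graph is recorded inside the block
    print(y.requires_grad)   # False: calling y.backward() would fail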
Also, I don’t see any use of this.