Getting the proper prediction and comparing it to the true value

Hello,

I am building a neural network for binary classification and I would like to check the predictions made during the testing phase, but I don’t seem to be getting the proper values.
What I want is not the loss over the whole batch, but the prediction for every test sample, so that I can compare it to the true value.
My testing loop looks like this:

net.eval()
test_loss = 0
correct = 0
for i_batch, sample_batched in enumerate(dataloader):
    data = Variable(sample_batched['tensor'].view(batch_size, -1, max_size * 20), requires_grad=False, volatile=True)
    if gpu_used >= 0:
        target = Variable(sample_batched['interaction'].cuda(gpu_used), requires_grad=False, volatile=True)
    else:
        target = Variable(sample_batched['interaction'], requires_grad=False, volatile=True)
    output = net(data)
    #test_loss += criterion(output, target).data[0]
    test_loss += F.nll_loss(output, target, size_average=False).data[0]  # size_average=False to sum the losses instead of averaging them
    pred = output.data.max(1, keepdim=True)[1]  # index of the max log-probability per sample

    correct += pred.eq(target.data.view_as(pred)).cpu().sum()  # to operate on them they need to be on the CPU again

I would appreciate it if anyone could give me some pointers on how to proceed.


The code looks fine, besides some minor unnecessary indexing.
What do you mean by “I don’t seem to be getting the proper values”? Could you explain a bit more about the problem?

Here is a small example, which basically computes what your code does:

import torch
import torch.nn.functional as F
from torch.autograd import Variable

batch_size = 10
n_classes = 5
output = F.log_softmax(Variable(torch.randn(batch_size, n_classes)), dim=1)
target = Variable(torch.LongTensor(batch_size).random_(n_classes))

# torch.max returns (max values, indices); the indices are the predicted classes
_, pred = torch.max(output, dim=1)

# element-wise comparison of the predictions with the targets
pred.eq(target)

Thank you for your quick reply :smiley:

What I mean is that I am trying to collect every prediction made throughout testing, so that at the end I have a list of

prediction : target

pairs. That would let me build an F-measure table from the results and get more information on the quality of the network’s answers.

I am also curious about what the unnecessary indexing is. I have been working through many iterations of my code and have probably left in things I no longer need, so I’d be happy to remove them.

Edit: I just thought that maybe using a test batch size of one would be a solution, wouldn’t it?

It’s probably a matter of user preference, but I would remove the keepdim=True and the .view_as, like in my code. :wink:
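
For reference, the simplified update inside your loop could look something like this (just a sketch, assuming output has shape (batch_size, n_classes) and that output and target live on the same device):

_, pred = torch.max(output.data, 1)          # indices of the predicted class per sample
correct += pred.eq(target.data).cpu().sum()  # count how many predictions match the targets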

Ah ok, I understand.
You could just store them in a list.

preds = []
targets = []
for i in range(10):
    output = F.log_softmax(Variable(torch.randn(batch_size, n_classes)), dim=1)
    target = Variable(torch.LongTensor(batch_size).random_(n_classes))

    _, pred = torch.max(output, dim=1)
    preds.append(pred.data)      # store this batch's predictions
    targets.append(target.data)  # and the corresponding targets

preds = torch.cat(preds)      # one tensor holding all predictions
targets = torch.cat(targets)  # one tensor holding all targets

Or you could of course cast them to numpy arrays, if you would like to calculate statistics using another framework.
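
For the F-measure you mentioned, a minimal sketch using scikit-learn (assuming preds and targets are the concatenated tensors from above and that scikit-learn is installed) could look like this:

from sklearn.metrics import f1_score, classification_report

y_pred = preds.cpu().numpy()
y_true = targets.cpu().numpy()

print(f1_score(y_true, y_pred))               # F1 score for the binary case
print(classification_report(y_true, y_pred))  # precision / recall / F1 per class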


Thanks, I don’t know why I didn’t think of storing them in a list. :sweat_smile:

Have a great day !

Thank you very much ptrblck, I will try to incorporate it in my work.

Heaps, heaps of thanks and stay blessed.

Cheers.


Hi,
can someone help me understand this line of code? Why do we use _, pred?

Thanks in advance

You can use pred to get the predicted classes from the output (logits or probabilities) of the model in a multi-class classification use case.

Thank you.
Why do we add the "_, "?

You can add it as a placeholder to indicate you don’t want to use this return value (the max. values) and only want to use the max. indices.
Alternatively, you could also directly use pred = torch.argmax(output, dim=1).
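A small sketch to illustrate the two return values (assuming output is a (batch_size, n_classes) tensor of logits or log-probabilities):

import torch

output = torch.randn(4, 3)               # e.g. 4 samples, 3 classes

values, pred = torch.max(output, dim=1)  # max. values and their indices
_, pred = torch.max(output, dim=1)       # same call, discarding the max. values
pred_argmax = torch.argmax(output, dim=1)

print(torch.equal(pred, pred_argmax))    # True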


Thank you so much for the help