TensorBoard example in PyTorch has a mistake

I can't help asking this.
Here we have:

with torch.no_grad():
    for data in testloader:
        images, labels = data
        output = net(images)
        class_probs_batch = [F.softmax(el, dim=0) for el in output]
        _, class_preds_batch = torch.max(output, 1)

        class_probs.append(class_probs_batch)
        class_preds.append(class_preds_batch)
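For context on what this loop accumulates: each element of `class_probs` is a list of per-sample probability vectors, and the tutorial later flattens the list of lists into one `(num_samples, num_classes)` tensor before building the PR curves. A minimal sketch of that flattening, using made-up batch shapes (4 samples, 3 classes) rather than the tutorial's real network:

```python
import torch
import torch.nn.functional as F

# Two hypothetical batches of raw network outputs (logits), 4 samples x 3 classes.
torch.manual_seed(0)
outputs = [torch.randn(4, 3), torch.randn(4, 3)]

class_probs = []
for output in outputs:
    # Same per-sample softmax as in the snippet above.
    class_probs.append([F.softmax(el, dim=0) for el in output])

# Flatten the list of lists into one (num_samples, num_classes) tensor,
# as the tutorial does before calling add_pr_curve_tensorboard.
test_probs = torch.cat([torch.stack(batch) for batch in class_probs])

print(test_probs.shape)  # torch.Size([8, 3])
```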

but shouldn’t we have instead:

with torch.no_grad():
    for data in testloader:
        images, labels = data
        output = net(images)
        class_probs_batch = [F.softmax(el, dim=0) for el in output]
        _, class_preds_batch = torch.max(output, 1)

        class_probs.append(class_probs_batch)
        #class_preds.append(class_preds_batch)
        class_preds.append(labels)

?

The class_preds will later be compared against the real labels in add_pr_curve_tensorboard:

tensorboard_preds = test_preds == class_index
tensorboard_probs = test_probs[:, class_index]
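To make the comparison concrete, here is a toy run of those two lines (the numbers are made up, not from the tutorial): `test_preds == class_index` marks which samples the model *predicted* as that class, and `test_probs[:, class_index]` pulls out the model's confidence for it:

```python
import torch

# Hypothetical toy values: 4 test samples, 3 classes.
test_probs = torch.tensor([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.3, 0.4],
    [0.6, 0.3, 0.1],
])
test_preds = test_probs.argmax(dim=1)  # predicted class per sample

class_index = 0
tensorboard_preds = test_preds == class_index   # which samples were *predicted* as class 0
tensorboard_probs = test_probs[:, class_index]  # model confidence for class 0

print(tensorboard_preds)  # tensor([ True, False, False,  True])
print(tensorboard_probs)  # tensor([0.7000, 0.1000, 0.3000, 0.6000])
```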

If you stored the labels instead, you would just be comparing the ground truth to itself, wouldn't you?

EDIT: I misunderstood the code; it should indeed compare the class_index to the targets.
The issue is tracked here.
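For anyone landing here later: `SummaryWriter.add_pr_curve(tag, labels, predictions)` documents its first tensor argument as ground-truth binary labels, which is why the targets belong in that slot. What the curve computes can be sketched by hand for a single threshold; the numbers below are toy values, not from the tutorial:

```python
import torch

# Toy binarized data for one class (hypothetical values).
targets = torch.tensor([1, 0, 1, 1, 0], dtype=torch.bool)  # ground truth: is this sample class k?
probs = torch.tensor([0.9, 0.6, 0.4, 0.8, 0.2])            # model confidence for class k

threshold = 0.5
predicted = probs >= threshold                 # binarize at this threshold

tp = (predicted & targets).sum().item()        # true positives
fp = (predicted & ~targets).sum().item()       # false positives
fn = (~predicted & targets).sum().item()       # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(precision, recall)  # 0.666... 0.666...
```

add_pr_curve repeats this for many thresholds (num_thresholds, 127 by default) to trace out the full curve.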
