Built-in 0-1 loss function?

Does PyTorch have a built-in 0-1 loss function?

Goodfellow et al., Deep Learning, p. 102:

The 0-1 loss on a particular example is 0 if it is correctly classified and 1 if it is not.
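In symbols (my notation, not the book's), the loss on a prediction $\hat{y}$ with label $y$ is the indicator

$$L_{0\text{-}1}(\hat{y}, y) = \mathbf{1}[\hat{y} \neq y].$$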

A custom 0-1 loss function could easily be implemented:

import torch

class ZeroOneLoss(torch.nn.Module):
    def __init__(self):
        super(ZeroOneLoss, self).__init__()

    def forward(self, input, target):
        # Tensor.equal() compares the tensors as a whole, so this yields a
        # single 0/1 for the batch rather than a per-example loss.
        if input.equal(target):
            return torch.tensor(0., requires_grad=True)
        else:
            return torch.tensor(1., requires_grad=True)
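As an aside, the per-example 0-1 loss is often computed simply as a (non-differentiable) evaluation metric; a minimal sketch with made-up tensor names:

import torch

logits = torch.randn(8, 3)               # fake batch: 8 examples, 3 classes
targets = torch.randint(0, 3, (8,))

preds = logits.argmax(dim=1)
zero_one = (preds != targets).float()    # per example: 1 if misclassified, 0 if correct
error_rate = zero_one.mean()             # fraction of the batch misclassified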

Hi Geremia!

No.

A 0-1 loss function would not be (usefully) differentiable. That is, after backpropagation,
the gradients computed for a model’s parameters would all be zero (hence giving the
optimizer no information about how to modify those parameters to reduce the loss).

This if-else construct is not differentiable, and setting requires_grad = True on the
return value doesn’t fix it: each branch creates a brand-new leaf tensor that is
disconnected from the computation graph, so backpropagating from it never reaches
the model’s parameters.
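For example, a quick check (a minimal sketch with a made-up Linear model, using the
same 0/1 logic as ZeroOneLoss above) shows that nothing flows back:

import torch

model = torch.nn.Linear(4, 2)                    # toy model, just for illustration
pred = model(torch.randn(1, 4)).argmax(dim=1)    # argmax already has no grad_fn
target = torch.tensor([1])

# same 0/1 logic as ZeroOneLoss above
if pred.equal(target):
    loss = torch.tensor(0., requires_grad=True)
else:
    loss = torch.tensor(1., requires_grad=True)

loss.backward()
print(model.weight.grad)                         # None: no gradient reached the model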

(You could write a soft loss that is close to zero when input and target are nearly
equal, but is close to one when input and target differ significantly.)
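For instance (just a sketch; the sharpness constant is arbitrary):

import torch

def soft_zero_one_loss(input, target, sharpness=10.0):
    # differentiable surrogate: 0 when input == target,
    # approaching 1 as input and target move apart
    return torch.mean(1.0 - torch.exp(-sharpness * (input - target) ** 2))

Because it is smooth, its gradients carry information about how far input is from
target, which is exactly what the hard 0-1 loss lacks.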

Best.

K. Frank

@KFrank Thanks. I was going to write an answer to my own question and say something similar: that this function is not differentiable, so is useless for backprop.