Weights in BCEWithLogitsLoss

  1. Yes, that’s the default interpretation. You could of course interpret the values as you wish and redefine the positive and negative classes, as well as metrics such as recall and precision.

  2. By definition, 0s would be the negative class. The pos_weight usage is shown in the formula in the docs, so you can re-interpret it if needed (see point 1). No, binary classification/segmentation with nn.BCEWithLogitsLoss expects a single output channel for the two binary classes. Multi-label classification/segmentation (where each sample/pixel can belong to zero, one, or more classes) expects an output channel per class.
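To illustrate the shape difference, here is a small sketch (shapes and the class count are arbitrary choices for the example): a binary setup uses one output channel, while a multi-label setup uses one channel per class, each treated as an independent binary problem.

```python
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()

# Binary segmentation: a single output channel; targets are 0. or 1.
logits_bin = torch.randn(4, 1, 8, 8)                       # (N, 1, H, W)
target_bin = torch.randint(0, 2, (4, 1, 8, 8)).float()
loss_bin = criterion(logits_bin, target_bin)

# Multi-label segmentation: one output channel per class; each pixel
# can be active in zero, one, or several channels independently.
num_classes = 3
logits_ml = torch.randn(4, num_classes, 8, 8)              # (N, C, H, W)
target_ml = torch.randint(0, 2, (4, num_classes, 8, 8)).float()
loss_ml = criterion(logits_ml, target_ml)

print(loss_bin, loss_ml)  # both are scalar mean-reduced losses
```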

  3. Yes, as seen here:

import torch
import torch.nn as nn

# Logits and a sparse binary target: 100 random positions set to 1
x = torch.randn(2, 1, 24, 24, requires_grad=True)
y = torch.zeros(2 * 1 * 24 * 24)
y[torch.randint(0, y.nelement(), (100,))] = 1.
y = y.view_as(x)
print('y: 0s: {}, 1s: {}, nelement: {}'.format(
    (y == 0.).sum(), y.sum(), y.nelement()))

# Unweighted loss
criterion = nn.BCEWithLogitsLoss()
loss = criterion(x, y)
print(loss)

# Weight the positive class by the negative/positive count ratio
criterion_weighted = nn.BCEWithLogitsLoss(pos_weight=(y == 0.).sum() / y.sum())
loss_weighted = criterion_weighted(x, y)
print(loss_weighted)
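If you want to convince yourself of what pos_weight does, you can recompute the loss manually from the formula in the docs, l = -[pos_weight * y * log(sigmoid(x)) + (1 - y) * log(1 - sigmoid(x))], mean-reduced. A small sketch (the tensor shapes and target positions are arbitrary for the example):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(2, 1, 4, 4)
y = torch.zeros_like(x)
y[0, 0, 0, 0] = 1.   # two positive pixels out of 32
y[1, 0, 2, 3] = 1.

pw = (y == 0.).sum() / y.sum()   # 30 negatives / 2 positives = 15.0
loss = nn.BCEWithLogitsLoss(pos_weight=pw)(x, y)

# Manual computation following the docs formula
p = torch.sigmoid(x)
manual = -(pw * y * torch.log(p) + (1 - y) * torch.log(1 - p)).mean()
print(torch.allclose(loss, manual))  # True (up to floating-point precision)
```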