[SOLVED] Class Weight for BCELoss

(Miguel Varela Ramos) #1

Hey there,

I’m trying to increase the weight of an undersampled class in a binary classification problem.

torch.nn.BCELoss has a weight attribute, but I don’t quite get it: the weight is a constructor parameter and is not updated depending on the batch of data being computed, so it doesn’t achieve what I need.

What is the correct way of simulating a class weight, similar to the way Keras does?

Cheers

3 Likes
(Miguel Varela Ramos) #2

Solved with a custom loss function:

import torch

def weighted_binary_cross_entropy(output, target, weights=None):
    # output: predicted probabilities in (0, 1), e.g. after a sigmoid
    # target: ground-truth labels (0 or 1), same shape as output
    # weights: optional pair [weight for class 0, weight for class 1]
    if weights is not None:
        assert len(weights) == 2

        loss = weights[1] * (target * torch.log(output)) + \
               weights[0] * ((1 - target) * torch.log(1 - output))
    else:
        loss = target * torch.log(output) + (1 - target) * torch.log(1 - output)

    return torch.neg(torch.mean(loss))
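
For reference, a minimal usage sketch (the model, inputs, labels and the weight values below are placeholders, not from my actual code):

# output must already be a probability in (0, 1), e.g. after a sigmoid
output = torch.sigmoid(model(inputs))       # model and inputs are placeholders
target = labels.float()                     # labels in {0, 1}, same shape as output
loss = weighted_binary_cross_entropy(output, target, weights=[1.0, 5.0])
loss.backward()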
7 Likes
(htt210) #3

According to the doc here
http://pytorch.org/docs/nn.html#bceloss
the weight parameter is a tensor of weights, one for each example in the batch, so it must have a size equal to the batch size. You can set the weight at the beginning of each batch, for example:
criterion = nn.BCELoss()
for batch in data:
    input, label, weight = batch
    criterion.weight = weight            # per-example weight tensor for this batch
    loss = criterion(predict, label)     # predict: the model's output probabilities, not shown
    ...
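
A rough sketch of how such a per-example weight tensor could be built from two class weights (the class-weight values, model and data_loader below are placeholders, not from the docs):

import torch
import torch.nn as nn

criterion = nn.BCELoss()
class_weights = torch.tensor([1.0, 5.0])   # [weight for class 0, weight for class 1], arbitrary values

for inputs, labels in data_loader:
    predict = torch.sigmoid(model(inputs))     # probabilities in (0, 1)
    # pick each example's weight according to its true class
    criterion.weight = torch.where(labels == 1, class_weights[1], class_weights[0])
    loss = criterion(predict, labels.float())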

8 Likes
(Miguel Varela Ramos) #4

That’s a neat solution as well… I ended up changing my loss function again to NLLLoss, which supports class weights, and it’s probably the easiest native solution. My custom loss was giving me NaNs towards the end of training and I have no idea why!
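
For anyone looking for that route, here is a rough sketch of NLLLoss with class weights, treating the binary problem as two classes (the layer sizes and weight values are arbitrary placeholders):

import torch
import torch.nn as nn

# NLLLoss expects log-probabilities, so the model ends in LogSoftmax over 2 classes
model = nn.Sequential(
    nn.Linear(20, 2),        # 20 input features, arbitrary
    nn.LogSoftmax(dim=1),
)
criterion = nn.NLLLoss(weight=torch.tensor([1.0, 5.0]))    # per-class weights

inputs = torch.randn(8, 20)              # dummy batch
targets = torch.randint(0, 2, (8,))      # class indices 0 or 1
loss = criterion(model(inputs), targets)
loss.backward()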

(htt210) #5

Did you apply LogSoftmax before computing the loss? NLLLoss takes log probabilities as input, not probabilities.

1 Like
(Miguel Varela Ramos) #6

Yes, I did change the softmax to a log-softmax. The custom loss, however, is working with a regular softmax, but I guess it could be related to the lack of an epsilon term to prevent the network from outputting a hard one or zero.

1 Like
(Qinqing Liu) #7

Thank you. This works for me.

(Alex Choy) #8

Referring to the C code, there is a safe_log function which returns log(1e-12) if the input is 0.
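
A rough Python equivalent of that guard, just for illustration (safe_log is the name used in the C source; the epsilon handling here is an assumption):

import torch

def safe_log(x, eps=1e-12):
    # clamping avoids log(0) = -inf; values below eps are also raised to eps
    return torch.log(torch.clamp(x, min=eps))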

(Chuong Nguyen) #9

That may be numerically unstable. Applying clamp may help:

output = torch.clamp(output, min=1e-8, max=1 - 1e-8)
loss = pos_weight * (target * torch.log(output)) + neg_weight * ((1 - target) * torch.log(1 - output))
2 Likes
(Miguel Varela Ramos) #10

Yes, that’s the easiest way… but you can simply use NLLLoss; it supports class weights now.

#11

There should be a “-” in the loss function.

#12

Sorry, I’m wrong; I missed the torch.neg. :sweat:

(Mohamed Ouftou) #13

Hello, thanks for your custom function. I used it, but I got this error:
The size of tensor a (32) must match the size of tensor b (2) at non-singleton dimension 1
Can you help with this problem?

(Alex Fann) #14

Just a quick question. When applying BCELoss with weights, do we need to normalize the weights by the batch size, or would raw weights be fine?

(David Ruhe) #15

Hi Miguel,

I’m wondering how you used NLLLoss for a binary classification problem?

(The Bloodthirster) #16

Thanks, I just don’t know how to pass the weight to NLLLoss.