I’m trying to increase the weight of an undersampled class in a binary classification problem.
torch.nn.BCELoss has a weight argument, but I don’t quite get it: the weight is a constructor parameter, fixed when the loss is created, and it is not updated to match each batch of data being computed, so it doesn’t achieve what I need.
What is the correct way of simulating a class weight, similar to the way Keras does?
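For reference, this is the construction I mean; the weight tensor is baked in when the criterion is created and applied element-wise to whatever batch comes in (sizes here are hypothetical):

```python
import torch
import torch.nn as nn

# the weight tensor is fixed at construction time; it rescales each batch
# element-wise, it is not a per-class weight looked up from the labels
weight = torch.ones(32)                       # hypothetical batch size of 32
criterion = nn.BCELoss(weight=weight)

pred = torch.rand(32)                         # probabilities in (0, 1)
target = torch.randint(0, 2, (32,)).float()
loss = criterion(pred, target)
```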
According to the doc here http://pytorch.org/docs/nn.html#bceloss
the weight parameter is a tensor of weights for each example in the batch, so it must have a size equal to the batch size. You can set the weight at the beginning of each batch, for example:

```python
criterion = nn.BCELoss()
for batch in data:
    input, label, weight = batch
    pred = model(input)            # model output: probabilities in (0, 1)
    criterion.weight = weight      # per-example weights for this batch
    loss = criterion(pred, label)
    ...
```
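One way to build that per-example weight tensor from the labels, as a minimal sketch assuming a fixed pair of class weights (the values and the make_weights helper are made up for illustration):

```python
import torch

# hypothetical class weights: index 0 -> negative class, index 1 -> positive class
class_weights = torch.tensor([1.0, 5.0])

def make_weights(label):
    # label is a 0/1 tensor; index into class_weights to get one weight per example
    return class_weights[label.long()]

label = torch.tensor([0., 1., 1., 0.])
weight = make_weights(label)  # tensor([1., 5., 5., 1.])
```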
That’s a neat solution as well… I ended up changing my loss function again to NLLLoss, which supports class weights, and it’s probably the easiest native solution. My custom loss was giving me NaNs towards the end of training and I have no idea why!
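For anyone landing here later, this is roughly what the NLLLoss route looks like; a minimal sketch with made-up weights and random stand-in data, not my exact training code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# hypothetical weights: give the undersampled class (index 1) five times the weight
class_weights = torch.tensor([1.0, 5.0])
criterion = nn.NLLLoss(weight=class_weights)

logits = torch.randn(8, 2)                 # stand-in for the model output
targets = torch.randint(0, 2, (8,))        # stand-in for class-index labels
log_probs = F.log_softmax(logits, dim=1)   # NLLLoss expects log-probabilities
loss = criterion(log_probs, targets)
```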
Yes, I did change the softmax to a log-softmax. The custom loss, however, works with a regular softmax, but I guess the NaNs could be related to the lack of an epsilon term to prevent the network from outputting a hard one or zero.
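This is roughly the kind of epsilon clamp I mean, in a hand-rolled weighted BCE; a sketch for illustration, not the exact custom function from this thread:

```python
import torch

def weighted_bce(pred, target, weight, eps=1e-7):
    # clamp predictions away from exactly 0 or 1 so log() never returns -inf
    pred = pred.clamp(eps, 1 - eps)
    loss = -weight * (target * pred.log() + (1 - target) * (1 - pred).log())
    return loss.mean()

pred = torch.rand(8)                        # probabilities from a sigmoid
target = torch.randint(0, 2, (8,)).float()
weight = torch.ones(8)                      # per-example weights
loss = weighted_bce(pred, target, weight)
```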
Hello, thanks for your custom function. I used it, but I got this error:

The size of tensor a (32) must match the size of tensor b (2) at non-singleton dimension 1

Can you help with this problem?
Can you explain how this works?
I am working on a classification problem on the CelebA dataset, in which there is a huge imbalance for some attributes. How can I rectify this issue with the code you mentioned above?
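In case it helps, for a multi-label setup like CelebA’s 40 binary attributes, one common option (different from the custom function in this thread) is nn.BCEWithLogitsLoss with a per-attribute pos_weight. A minimal sketch, assuming you can count positives and negatives per attribute over your training set (the counts here are random placeholders):

```python
import torch
import torch.nn as nn

num_attrs = 40  # CelebA has 40 binary attributes

# hypothetical per-attribute counts; in practice, count them over your training set
num_pos = torch.randint(1000, 100000, (num_attrs,)).float()
num_neg = torch.randint(1000, 100000, (num_attrs,)).float()

# pos_weight > 1 upweights the rarer positive label for that attribute
criterion = nn.BCEWithLogitsLoss(pos_weight=num_neg / num_pos)

logits = torch.randn(8, num_attrs)                     # raw model output, no sigmoid
targets = torch.randint(0, 2, (8, num_attrs)).float()
loss = criterion(logits, targets)
```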