Dealing with imbalanced datasets in PyTorch

Thus, I am not sure if F.binary_cross_entropy_with_logits is different from nn.BCEWithLogitsLoss and if that’s why my code is not running?!

As far as I know, both of these methods are mostly the same; the difference is in the way the weight is passed.
As far as I can tell, the docs say that the weight will be broadcast, but in my case these are the approaches that worked with F.binary_cross_entropy_with_logits (if I remember correctly); a small sketch follows the list:

  • Make your weights be WxH
  • Or make your weights be BxCxWxH
  • If you try BxWxH or CxWxH - I guess there will be an error
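A minimal sketch of the shapes I mean (sizes are made up; B = batch, C = channels, H/W = spatial dims):

import torch
import torch.nn.functional as F

B, C, H, W = 4, 3, 8, 8
logits = torch.randn(B, C, H, W)
target = torch.randint(0, 2, (B, C, H, W)).float()

# spatial-only weight (the "WxH" case) broadcasts over batch and channels
weight_hw = torch.ones(H, W)
loss1 = F.binary_cross_entropy_with_logits(logits, target, weight=weight_hw)

# full-shape weight (the "BxCxWxH" case) also works
weight_full = torch.ones(B, C, H, W)
loss2 = F.binary_cross_entropy_with_logits(logits, target, weight=weight_full)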

Thank you for your feedback. Could you please explain further what kind of loss weighting you did here? By that I mean, what were the weights that you used? And what is the main difference between F.binary_cross_entropy_with_logits with a weight argument and nn.BCEWithLogitsLoss with the weight / pos_weight arguments?

From what I see, by applying pos_weight in BCEWithLogitsLoss the total loss indeed gets higher, which is what was intended, but the results are the same, or actually even worse. Maybe the loss becomes harder to minimize?

If the overall loss increases, I would try to lower the learning rate to help the model converge.

I am using Adam so it should not make much difference :confused:

Probably, but it’s still worth a try :wink:

Hi,

I am trying to deal with imbalanced data. Based on what I read in the discussions above, I compute the frequency of each class as
nr_samples_of_label_i / total_number_of_samples

For instance, out of 250000 samples, one of the overrepresented classes contains 150000 samples:
150000 / 250000 = 0.6
One of the underrepresented classes contains 20000 samples:
20000 / 250000 = 0.08

So to reduce the impact of the overrepresented class, I multiply its loss by 1 - 0.6 = 0.4,
and to increase the impact of the underrepresented class, by 1 - 0.08 = 0.92.
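In code, the scheme I have in mind would look roughly like this (just a sketch; the third class count is made up so the totals add up to 250000):

import torch
import torch.nn as nn

counts = torch.tensor([150000., 20000., 80000.])   # samples per class (the 80000 is hypothetical)
freq = counts / counts.sum()                        # 0.6, 0.08, 0.32
class_weights = 1. - freq                           # 0.4 for the dominant class, 0.92 for the rare one

criterion = nn.CrossEntropyLoss(weight=class_weights)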

Is that an acceptable way of working?

Thanks

Let’s say I have to train an image classifier on a highly unbalanced dataset. I would then like to penalize the losses belonging to the dominating classes less, and vice versa!

Can you please show, with a few lines of code, how exactly the weight argument of nn.CrossEntropyLoss is passed?

Imagine we have a dataset in which we have three classes with the following number of examples:
classA: 900
classB: 90
classC: 10
now, how would you define your loss function and how would you pass the weight argument?

would it be like
loss_fn = nn.CrossEntropyLoss(weight=[900/1000, 90/1000, 10/1000]) ???


What about continuous data for regression tasks? Is there any way to handle an imbalanced dataset in that case?


You could still use weights to sample your data, but you would have to define what the imbalance means for your targets (e.g. do you have different clusters of neighboring numbers?) and then use this definition to create the weights.
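For example, a very rough sketch (the bin edges, targets, and dataset names are hypothetical placeholders): bin neighboring target values and sample each example with the inverse frequency of its bin:

import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

bin_edges = torch.tensor([10.0, 20.0, 30.0])           # made-up cluster boundaries
bin_idx = torch.bucketize(targets, bin_edges)          # targets: 1D tensor of all training targets
counts = torch.bincount(bin_idx, minlength=len(bin_edges) + 1).float().clamp(min=1.)

sample_weights = 1. / counts[bin_idx]                   # rarer bins get larger weights
sampler = WeightedRandomSampler(sample_weights, num_samples=len(sample_weights), replacement=True)
loader = DataLoader(dataset, batch_size=64, sampler=sampler)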


Hi, I’m also struggling with how to assign weights to my imbalanced data for a regression task.
In my case I’m building a model based on LSTMs to predict a float number that varies between 30.0 and 81.3.
For example, in the range from 30.6 to 39.6 I have 716056 samples in total, whereas in the range from 70.0 to 81.3 I have only 135010 samples.
I would like to use weights to counteract this data imbalance, but I don’t know how I should proceed in the case of a regression task where the target can be any number between 30 and 80.
Thank you very much in advance; any help would be highly appreciated.

Your idea of clustering the regression targets into a few clusters and assigning weights to these seems reasonable.
You could either do it manually (as seems to be the case now) or use something like k-means.
Once you have the clusters, you could count the samples similarly to a classification task, calculate the weights based on the number of samples in each cluster, and create a mapping between cluster and weight.
After creating the weights, you could write a function which accepts the current output batch with the regression predictions, as well as your cluster centers (the k-means dict), and returns a batch of weights, which can then be multiplied with the per-sample loss to create the final loss.
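Something along these lines could work (a rough sketch; the bin edges stand in for the cluster centers and train_targets is assumed to be a 1D tensor of all training targets):

import torch

bin_edges = torch.tensor([30.0, 40.0, 50.0, 60.0, 70.0, 81.3])   # hypothetical bins over the target range

# count training targets per bin and derive a weight per bin (inverse frequency)
bin_idx = torch.bucketize(train_targets, bin_edges[1:-1])
counts = torch.bincount(bin_idx, minlength=len(bin_edges) - 1).float().clamp(min=1.)
bin_weights = counts.sum() / (len(counts) * counts)

def weighted_mse(pred, target):
    # look up the weight of the bin each target falls into and scale the squared error
    idx = torch.bucketize(target, bin_edges[1:-1])
    w = bin_weights[idx]
    return (w * (pred - target) ** 2).mean()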


weight = torch.tensor([900/1000, 90/1000, 10/1000], dtype=torch.float, device='cuda:0')
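The tensor can then be passed to the criterion, e.g. (a minimal sketch; model, data, and target are placeholders):

import torch.nn as nn

criterion = nn.CrossEntropyLoss(weight=weight)

output = model(data)              # [batch_size, 3] logits, on the same device as weight
loss = criterion(output, target)  # target: [batch_size] tensor of class indices 0, 1, 2
loss.backward()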

Thank you very much for your answer. That makes sense.
I will need some time to implement it, I will get back to you when I have something to share.

Hi,
I am working on very imbalanced data. I have binary targets, 0 and 1, where the 1s make up just ~0.27% of my data.
I want to weight my loss (BCELoss) accordingly, so I made my class weights as follows:

import numpy as np
import torch
from sklearn.utils import class_weight

class_weights = class_weight.compute_class_weight('balanced', classes=np.unique(np.ravel(y_train, order='C')), y=np.ravel(y_train, order='C'))
class_weights = torch.tensor(class_weights, dtype=torch.float)

I am new to PyTorch. In Keras, I just need to pass the class weights to my fit function and it handles them for me.

However, in PyTorch, I am not able to figure out how to do it. I used:

torch.nn.functional.binary_cross_entropy(out, labels, class_weights)

and I am receiving the following error:


RuntimeError: output with shape [250, 1] doesn't match the broadcast shape [250, 2]

I have read this post and the others, but could not find a good answer on how to do this in PyTorch.
Am I missing something?
Any help would be really appreciated!!

The weight argument is used to weight each sample in the inputs, not the classes.
I think you might want to use the pos_weight argument in nn.BCEWithLogitsLoss instead to counter the imbalance.
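A minimal sketch of this suggestion (the model, inputs, and labels are placeholders; the ratio of roughly 365 negatives per positive follows from your ~0.27%):

import torch
import torch.nn as nn

# pos_weight = nr_negatives / nr_positives for the single output unit
pos_weight = torch.tensor([365.0])
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = model(x)                        # raw logits: no sigmoid in the model when using the with-logits loss
loss = criterion(logits, labels.float())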

Thanks, @ptrblck!
I have used pos_weight and did not get a good result. How can I use the weight argument? I used it in the way I explained in my previous question above. My last layer is a linear layer with a sigmoid:
self.classifier = nn.Sequential(nn.Linear(32, 12), nn.ReLU(True), nn.Dropout(0.3), nn.Linear(12, 1), nn.Sigmoid())
and I received the error mentioned above. Why do I get this error?

Also, could you explain a bit more what that means?

The weight argument is not used as a “class weight”, since nn.BCE(WithLogits)Loss allows for floating point targets. While you can interpret a 0 and 1 target as class0 and class1, respectively, you could also use e.g. 0.9 as the target value.
That is why you specify a weight for each sample in the batch, which weights the loss of that particular sample.
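To make this concrete, a small sketch of how a per-sample weight could be built from the labels of the current batch (using the out and labels from your snippet; the factor 365 just mirrors your imbalance ratio):

import torch.nn.functional as F

# labels: [batch_size, 1] with values 0. or 1.
sample_weight = labels * 364. + 1.       # positive samples -> weight 365, negative samples -> weight 1
loss = F.binary_cross_entropy(out, labels, weight=sample_weight)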

Thanks so much, @ptrblck for your support!!

Sorry for not being clear in asking my question. I meant: why does the error say
RuntimeError: output with shape [250, 1] doesn't match the broadcast shape [250, 2]?
As you explained, the weight is used by the loss to weight the outcome. The class weights that I put into my loss are [1, 365]. As I use binary cross-entropy loss, I thought that I have to use a sigmoid and one node in the last layer.
However, if I understood correctly, the error is saying that it expects 2 values per sample (250 is my batch size). Should I change it to two nodes and use softmax?

Also, do you recommend using WeightedRandomSampler to get balanced classes in each batch during training?

That’s the issue, as the weight argument is not a class weight, but a sample weight.

Yes, I think balancing the samples in each batch is a good approach.
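A small sketch of how the sampler could be set up from your y_train (train_dataset and the batch size of 250 are placeholders):

import numpy as np
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

targets = torch.as_tensor(np.ravel(y_train, order='C')).long()   # 0/1 targets
class_count = torch.bincount(targets).float()
sample_weights = (1. / class_count)[targets]                      # inverse class frequency per sample

sampler = WeightedRandomSampler(sample_weights, num_samples=len(sample_weights), replacement=True)
loader = DataLoader(train_dataset, batch_size=250, sampler=sampler)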
