I’m pretty new to PyTorch and neural networks in general. My project involves segmenting an image into two classes: my input is a 512x512x3 (RGB) image, and my output is a 512x512x2 tensor, where the first channel is a binary mask of positives for the first class and the second channel is a binary mask of positives for the second class.
The thing is, my dataset is pretty imbalanced. For example, in the first class there are about 11 positives for every 250 negatives. I’m using BCEWithLogitsLoss, and from browsing these forums I’ve seen that I should probably use the ‘pos_weight’ input. Here’s what the documentation for pos_weight says:
pos_weight – a weight of positive examples. Must be a vector with length equal to the number of classes.
Okay, I have two classes, so I made my vector, where the first entry is negatives/positives of class 1 and the second entry is the negatives/positives of class 2:
pw = torch.FloatTensor([21.956, 0.04554])
criterion = nn.BCEWithLogitsLoss(pos_weight=pw)
But here’s the error message I get:
The size of tensor a (2) must match the size of tensor b (512) at non-singleton dimension 3
The code runs fine if I don’t pass anything to pos_weight, so that’s definitely what’s causing the error. I also don’t understand why it’s expecting a vector of size 512 when I only have two classes!
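Here’s a minimal snippet that reproduces the error. The shapes are my assumption of what the batched tensors look like (batch of 1, channels-first, i.e. (1, 2, 512, 512)); the logits and targets are random stand-ins for my model output and labels:

```python
import torch
import torch.nn as nn

# Stand-ins for model output and target: (batch, classes, height, width)
logits = torch.randn(1, 2, 512, 512)
target = torch.randint(0, 2, (1, 2, 512, 512)).float()

# One pos_weight entry per class, as I understood the docs
pw = torch.FloatTensor([21.956, 0.04554])
criterion = nn.BCEWithLogitsLoss(pos_weight=pw)

try:
    loss = criterion(logits, target)
except RuntimeError as e:
    # Raises: "The size of tensor a (2) must match the size of
    # tensor b (512) at non-singleton dimension 3"
    print(e)
```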
Any help would be greatly appreciated.