Let’s say 1/14 of my samples are in class 0 and 13/14 are in class 1. How do I properly choose alpha (or the class weight) in BCE and focal loss?
Should it be 0.93 for class 0 and 0.07 for class 1, or 1.0 for class 0 and 0.07 for class 1, or something else?
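For reference, this is the binary focal-loss form I’m assuming (the Lin et al. formulation), where alpha weights the positive class and 1 − alpha the negative class. If alpha is set by inverse class frequency, the rare class 0 would get ≈ 13/14 and the frequent class 1 would get ≈ 1/14, i.e. alpha ≈ 0.07 when class 1 is the sigmoid’s positive class. The function name and defaults are my own sketch, not an established API:

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.07, gamma=2.0):
    """Binary focal loss sketch for a single sigmoid output.

    alpha weights the positive class (class 1), 1 - alpha the
    negative class (class 0). With 13/14 positives, inverse
    frequency gives alpha = 1/14 ~= 0.07 (an assumption, not
    a prescription).
    """
    # Per-sample BCE, kept unreduced so we can rescale each term.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    # Probability assigned to the true class of each sample.
    p_t = p * targets + (1 - p) * (1 - targets)
    # alpha for positives, (1 - alpha) for negatives.
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # (1 - p_t)^gamma down-weights already well-classified samples.
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```

With gamma = 0 and alpha = 0.5 this reduces to 0.5 × the plain mean BCE, which is a quick sanity check.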
I’m using BCEWithLogitsLoss with a batch size of 24 and a single output node. I’m defining the loss as:
weight = torch.FloatTensor([1.0, 0.25])
criterion = nn.BCEWithLogitsLoss(weight=weight)
and I’m getting this error:
RuntimeError: The size of tensor a (24) must match the size of tensor b (2) at non-singleton dimension 0
Do I need to re-create the weight tensor on each loss call, sized to match the batch?
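For context, here is a minimal reproduction of the error, plus the alternative I’m considering: as far as I understand, `weight` in BCEWithLogitsLoss rescales each *batch element* (so it must broadcast against shape `[24]`, not have one entry per class), while `pos_weight` rescales only the positive-class term, so a single scalar can encode the imbalance. The 1/13 value assumes class 1 is the positive class, with 13 positives per negative:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(24)                       # single output node, batch of 24
targets = torch.randint(0, 2, (24,)).float()

# What I have now: a 2-element `weight` fails, because `weight`
# multiplies the per-sample loss (shape [24]), not per-class terms.
criterion = nn.BCEWithLogitsLoss(weight=torch.tensor([1.0, 0.25]))
try:
    criterion(logits, targets)
except RuntimeError as e:
    print(e)                                   # 24 vs 2 size mismatch

# Alternative: a scalar pos_weight = n_neg / n_pos = (1/14) / (13/14) = 1/13
# down-weights the abundant positive class in one number.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(1.0 / 13.0))
loss = criterion(logits, targets)
print(loss.item())
```

This avoids re-creating any tensor per batch, since `pos_weight` is applied per target value rather than per batch position.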