Weighted binary cross entropy loss using pos_weight

Hi,
I have an unbalanced dataset, so I tried to use pos_weight in BCEWithLogitsLoss:

```python
torch.nn.BCEWithLogitsLoss(pos_weight=weights)(outputs, targets)
```
But I observed that the loss fluctuates very badly and the results are also poor. Shouldn't they at least be on par with the results I got without class weights? Can anyone tell me why this is happening and suggest another way of using class weights?
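
For reference, here's a minimal self-contained version of the setup (the shapes and the weight value are made up for illustration):

```python
import torch

# Hypothetical batch: 8 samples, one binary label each.
outputs = torch.randn(8, 1)                    # raw logits from the model
targets = torch.randint(0, 2, (8, 1)).float()  # binary ground-truth labels

weights = torch.tensor([10.0])                 # made-up pos_weight value
loss = torch.nn.BCEWithLogitsLoss(pos_weight=weights)(outputs, targets)
print(loss.item())
```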

What are the metrics that you're monitoring?
If you set the weight equal to #majority / #minority, then you should be checking average recall.
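
A sketch of that heuristic, assuming the positive class is the minority (the label counts are illustrative):

```python
import torch

# Hypothetical labels; class 1 is the rare positive class.
targets = torch.tensor([0., 0., 0., 0., 0., 0., 1., 1.])

n_pos = targets.sum()
n_neg = targets.numel() - n_pos
pos_weight = (n_neg / n_pos).reshape(1)  # #majority / #minority, here 3.0

criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)
```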

I'm using the F1 score (both micro and macro).
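
Computed roughly like this (a sketch using scikit-learn; the 0.5 threshold and the random data are my assumptions):

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical probabilities and labels, just for illustration.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=100)
probs = rng.random(100)
preds = (probs > 0.5).astype(int)   # threshold predicted probabilities

print(f1_score(labels, preds, average='micro'))
print(f1_score(labels, preds, average='macro'))
```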

The F1 score is computed from precision and recall.
By changing p_c (the pos_weight factor) you move these two metrics in opposite directions,
and in practice you gain some recall in exchange for much worse precision.
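
For reference, the standard definitions: a larger p_c pushes the model to predict the positive class more often, which raises both TP and FP, so recall rises while precision falls.

```latex
\text{precision} = \frac{TP}{TP + FP}, \qquad
\text{recall} = \frac{TP}{TP + FN}, \qquad
F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}
```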