I would like to customize BCE into the following form:
L = -Σ_i [ w_pos * a_i * log p(a_i | x_i) + w_neg * (1 - a_i) * log(1 - p(a_i | x_i)) ]
That is, I want to add two weight vectors to the regular BCE: the first applies to the positive case and the second to the negative case.
The PyTorch BCE loss is based on torch.binary_cross_entropy_with_logits, which accepts only pos_weight.
Is there any modification that can be done to implement this loss, besides the idea of re-writing it independently?
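If re-writing it turns out to be acceptable, a minimal sketch could look like the following. The function name `weighted_bce_with_logits` is hypothetical, and it assumes p(a_i | x_i) is produced from logits via a sigmoid; `logsigmoid(-x)` is used for log(1 - sigmoid(x)) to keep the computation numerically stable:

```python
import torch
import torch.nn.functional as F

def weighted_bce_with_logits(logits, targets, w_pos, w_neg):
    # Hypothetical helper: BCE with separate weights for the
    # positive and negative terms. targets (a_i) are 0/1 floats.
    log_p = F.logsigmoid(logits)      # log p       = log sigmoid(x)
    log_1mp = F.logsigmoid(-logits)   # log (1 - p) = log sigmoid(-x)
    loss = -(w_pos * targets * log_p + w_neg * (1 - targets) * log_1mp)
    return loss.mean()
```

This is only a sketch; reduction (`mean` here) and broadcasting of per-class weight vectors would need to match your use case.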
What do you want to accomplish with this?
Also, could you please specify the variables a_i and x_i you’re using?
This might be worth checking out to make sure that you actually need the second set of weights: About BCEWithLogitsLoss's pos_weight.
a_i is an indicator at index i: if the ground truth for the i-th coordinate is true, then a_i = 1.
As I see it, w_pos determines the cost/punishment for mis-predicting a positive label, and w_neg is the punishment for mis-predicting a negative label.
So a_i is your label?
So you want different classes to have different influence on the loss function; how is that different from using pos_weight?
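To make that question concrete: since w_neg scales the whole negative term, the two-weight loss equals w_neg times a standard BCE with pos_weight = w_pos / w_neg. In fact, binary_cross_entropy_with_logits already takes both a `weight` argument (which scales every term) and `pos_weight` (which scales only the positive term), so the two-weight form can be reproduced without a custom function. A sketch checking this equivalence, with arbitrary example values for w_pos and w_neg:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(8)
targets = (torch.rand(8) > 0.5).float()
w_pos, w_neg = 3.0, 0.5  # example values, chosen arbitrarily

# Manual two-weight BCE: -[w_pos * a * log p + w_neg * (1 - a) * log(1 - p)]
manual = -(w_pos * targets * F.logsigmoid(logits)
           + w_neg * (1 - targets) * F.logsigmoid(-logits)).mean()

# Built-in: `weight` multiplies every per-element loss, `pos_weight`
# multiplies only the positive term, so weight = w_neg together with
# pos_weight = w_pos / w_neg recovers the two-weight loss exactly.
builtin = F.binary_cross_entropy_with_logits(
    logits, targets,
    weight=torch.tensor(w_neg),
    pos_weight=torch.tensor(w_pos / w_neg),
)
assert torch.allclose(manual, builtin)
```

So the second set of weights only matters up to an overall scale of the loss, which affects the effective learning rate but not the relative weighting of the two classes.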