I’m working on my first deep learning project. When you define `BCEWithLogitsLoss`, you have the option to pass `pos_weight`, which controls how much importance is placed on positive labels in a multi-label classification problem with lots of negative ones.
I’m training with mini-batches and I want to set `pos_weight` following the common suggestion of `pos_weight = total_neg / total_pos`.
Should that ratio be computed across the whole dataset, or within each mini-batch? Does it matter if I keep redefining the loss function at every mini-batch iteration? Is any information lost by doing that?
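For context, here is roughly what I have in mind for the dataset-wide version (the target matrix and class count are made up for illustration):

```python
import torch
from torch import nn

# Hypothetical multi-label target matrix: (num_samples, num_classes),
# with 0/1 entries. In practice this would come from the full training set.
targets = torch.tensor([
    [1, 0, 0],
    [0, 0, 1],
    [0, 0, 0],
    [1, 1, 0],
], dtype=torch.float32)

total_pos = targets.sum(dim=0)             # positives per class
total_neg = targets.shape[0] - total_pos   # negatives per class
pos_weight = total_neg / total_pos.clamp(min=1)  # clamp guards against classes with zero positives

# Define the loss once, outside the training loop, using the dataset-wide ratio.
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
```

My alternative would be recomputing `total_pos`/`total_neg` from each mini-batch's targets and constructing a new `BCEWithLogitsLoss` inside the loop, which is what I'm unsure about.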