Loss Function Weight and Single Output

Let’s say we have a logistic regression model that takes num_features inputs and produces num_classes outputs. For binary classification, num_classes is typically equal to one.

Now, if I want to weight my loss function, I would do something like this:
F.binary_cross_entropy(probas, target, weight=torch.tensor([2]))

Where my weights were calculated by:

# n_samples / (n_classes * bincount)
(183473 + 47987) / (2 * np.array([183473, 47987]))
# -> [0.63074419 2.41213088]  (weights for classes 0 and 1)
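The weight calculation above can be sketched as follows; it mirrors the "balanced" class-weight heuristic (the same formula scikit-learn uses for class_weight='balanced'), with the counts taken from the snippet:

```python
import numpy as np

# Class counts from the question: 183473 samples of class 0, 47987 of class 1
counts = np.array([183473, 47987])
n_samples = counts.sum()
n_classes = len(counts)

# Balanced heuristic: n_samples / (n_classes * bincount).
# Rare classes get a weight above 1, frequent classes below 1.
weights = n_samples / (n_classes * counts)
print(weights)  # weight for class 0, then weight for class 1
```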

My question is: since the weight argument of the loss function only takes one value here, which class does it apply to, 0 or 1?

target, in our dataset, contains either zeros or ones, as in a traditional binary classification problem.

nn.BCEWithLogitsLoss allows you to pass a weight as well as a pos_weight argument.
The former tensor should have the shape [batch_size] and will be applied to each sample, while the latter should have the shape [nb_classes] and will be applied to the positive examples as described in the docs.
Based on your code snippet I assume you would like to use pos_weight.
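A minimal sketch of the pos_weight approach; the counts are assumed from the question, and a common choice for the single pos_weight value is n_negative / n_positive:

```python
import torch
import torch.nn as nn

# pos_weight scales the positive-class term of the loss. A common choice
# (assumed here, using the counts from the question) is n_negative / n_positive.
pos_weight = torch.tensor([183473 / 47987])  # ~3.82
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.zeros(4, 1)                       # raw model outputs, no sigmoid
target = torch.tensor([[0.], [1.], [0.], [1.]])  # binary targets
loss = criterion(logits, target)
print(loss.item())
```

Note that BCEWithLogitsLoss expects raw logits, not probabilities, so no sigmoid should be applied to the model output before the loss.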

Thanks so much. It’s silly that I overlooked this.