Weights in BCEWithLogitsLoss

@ptrblck @velikodniy I think the docs are ambiguous in how they define negative examples for calculating pos_weight. For a multi-hot encoding, is the number of negative examples a single integer counting the samples whose labels are all zero, or is it a per-class vector counting the zeros in each column of the multi-hot labels?

In other words:

  import numpy as np
  import torch

  def pos_weights(class_counts):
    # class_counts: number of positive samples per class
    # data: the dataset, assumed to be in the enclosing scope
    pos_weights = np.ones_like(class_counts, dtype=np.float64)
    neg_counts = [len(data) - pos_count for pos_count in class_counts]  # <-- HERE
    for cdx, (pos_count, neg_count) in enumerate(zip(class_counts, neg_counts)):
      pos_weights[cdx] = neg_count / (pos_count + 1e-5)

    return torch.as_tensor(pos_weights, dtype=torch.float)
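
For reference, a minimal sketch of what I mean by the per-class (column-wise) interpretation, with made-up counts, passed straight to BCEWithLogitsLoss:

  import torch
  import torch.nn as nn

  # Hypothetical: 3 classes over a 100-sample dataset.
  pos_counts = torch.tensor([10., 60., 30.])
  neg_counts = 100. - pos_counts                    # zeros per label column
  pos_weight = neg_counts / pos_counts              # shape: (num_classes,)

  criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
  logits = torch.randn(8, 3)                        # (batch, num_classes)
  targets = torch.randint(0, 2, (8, 3)).float()     # multi-hot targets
  loss = criterion(logits, targets)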

OR, are “negative examples” the samples where no class label is set at all, i.e.:


  def negative_count(data):
    # Count samples whose multi-hot label vector is all zeros.
    neg_count = 0
    for idx in range(len(data)):
      _, labels = data[idx]
      if sum(labels) == 0:
        neg_count += 1

    return neg_count
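
To make the difference concrete, here is a toy label matrix (made up) and what each interpretation would count:

  import torch

  # 4 samples, 3 classes, multi-hot labels.
  labels = torch.tensor([[1, 0, 1],
                         [0, 0, 0],
                         [1, 1, 0],
                         [0, 0, 0]])

  per_class_negatives = (labels == 0).sum(dim=0)      # tensor([2, 3, 3])
  all_zero_samples = (labels.sum(dim=1) == 0).sum()   # tensor(2)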