One-hot encoding multi-label predicting all zeros

Hi,
I’m trying to implement a multi-label classification model starting from resnet18.
My dataset has 188 classes and each element can belong to one, two, or three of them, so I’m using one-hot (multi-hot) encoded target vectors.
As a result, about 99% of the target entries are zeros, and my model ends up predicting all-zero vectors with no 1s at all.
Which metrics and loss functions can I use to measure my model correctly?
I’m trying to use

criterion = nn.BCEWithLogitsLoss(pos_weight = torch.ones([188]).to(device))

optimizer = optim.SGD(model.parameters(), lr=0.001)

Am I doing this right? It doesn’t seem to solve my problem.
Thanks for your help.

BCE stands for Binary Cross Entropy and is for binary classification only. Try using cross-entropy loss instead.

You are right that nn.BCEWithLogitsLoss is used for binary classification use cases, but it can also be used for multi-label classification, while nn.CrossEntropyLoss is used for multi-class classification (each sample has exactly one label, which isn’t the case here).
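Note that pos_weight=torch.ones([188]) is a no-op, since a weight of 1 leaves the loss unchanged. With only 1-3 positives out of 188 labels, you likely want pos_weight > 1 to upweight the rare positive terms. A minimal sketch (the specific weight value and shapes are just assumptions for illustration):

```python
import torch
import torch.nn as nn

num_classes = 188
batch_size = 4

# Raw model outputs (logits, no sigmoid applied) and multi-hot targets.
logits = torch.randn(batch_size, num_classes)
targets = torch.zeros(batch_size, num_classes)
targets[0, [3, 17]] = 1.0   # sample 0 belongs to classes 3 and 17
targets[1, 42] = 1.0        # sample 1 belongs to class 42

# A common heuristic is pos_weight ~= (num negatives) / (num positives) per class;
# here a single global value is used as a placeholder -- tune it for your data.
pos_weight = torch.full([num_classes], 90.0)
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
loss = criterion(logits, targets)

# At inference, apply sigmoid and threshold to get multi-label predictions.
preds = (torch.sigmoid(logits) > 0.5).float()
```

For metrics, overall accuracy is misleading here (predicting all zeros already scores ~99%); per-label precision/recall/F1 on the thresholded predictions is more informative.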

@Krys try to overfit a small dataset (e.g. just 10 samples) and make sure your model is able to do so by playing around with some hyperparameters. Once this works, you can try to scale the use case up again.
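The overfitting sanity check above can be sketched as follows. A tiny linear model stands in for resnet18 and the class count is reduced so the snippet runs instantly; the idea transfers directly to the real setup (the optimizer, learning rate, and step count are assumptions to tweak):

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)
num_classes = 8   # reduced from 188 for a quick sketch

# Stand-in model; in the real use case this would be resnet18 with a 188-unit head.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, num_classes))

# 10 fixed samples, each with 2 positive labels (multi-hot targets).
x = torch.randn(10, 16)
y = torch.zeros(10, num_classes)
for i in range(10):
    y[i, torch.randperm(num_classes)[:2]] = 1.0

criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-2)  # Adam often overfits tiny sets faster than plain SGD

for step in range(500):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

# If the model cannot drive the loss near zero on 10 samples,
# something is wrong with the model, targets, or training loop.
preds = (torch.sigmoid(model(x)) > 0.5).float()
accuracy = (preds == y).float().mean()
```

If the loss plateaus with all-zero predictions even here, that points at the loss setup (e.g. missing pos_weight) or the targets rather than at dataset size.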