I’m working on training a model where I have to compute the loss only for specific classes: not every ground-truth mask (target) contains all of the classes, so I want to compute the loss only for the classes that are actually present in the target. To do this, I build a vector holding the indices of the channels that are present in the target, and when I use the CrossEntropyLoss function, I pass my predictions and targets as:
loss += loss_fn(pred[instance, class_present, :, :], targets[instance, class_present, :, :])
Here, instance is the index within the batch and class_present is a vector containing the indices of the classes present in targets. My problem is that this gives a very high training loss (e.g. 5000.5…). Is my approach correct? Can anyone help me find the problem? Thanks!
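To make the question concrete, here is a minimal, self-contained reproduction of what I’m doing. The shapes, the one-hot targets, and the way class_present is computed are my assumptions about the setup; it relies on CrossEntropyLoss accepting probability-style (same-shape float) targets, which PyTorch supports since 1.10:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N, C, H, W = 2, 5, 4, 4

# Raw logits from the network and one-hot per-channel target masks
pred = torch.randn(N, C, H, W)
labels = torch.randint(0, 3, (N, H, W))        # only classes 0-2 ever appear
targets = torch.zeros(N, C, H, W)
targets.scatter_(1, labels.unsqueeze(1), 1.0)  # one-hot along the class dim

loss_fn = nn.CrossEntropyLoss()
loss = 0.0
for instance in range(N):
    # Indices of the channels that actually contain foreground in this target
    class_present = targets[instance].sum(dim=(1, 2)).nonzero(as_tuple=True)[0]
    loss = loss + loss_fn(
        pred[instance, class_present].unsqueeze(0),     # (1, C_present, H, W) logits
        targets[instance, class_present].unsqueeze(0),  # (1, C_present, H, W) probabilities
    )
print(float(loss))
```

Note that because the loop sums one loss term per instance, the total grows with batch size, which may partly explain large values.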
Edit: Can I use the weight parameter of the CrossEntropyLoss function to specify the above?
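For what it’s worth, here is a sketch of what I mean by the weight idea: zeroing out the per-class weights of absent classes. Note this assumes the target is a class-index map (N, H, W) rather than per-channel masks, which would be a change from my current setup:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N, C, H, W = 1, 5, 4, 4
pred = torch.randn(N, C, H, W)                 # logits
target = torch.randint(0, 3, (N, H, W))        # class indices; classes 3-4 absent

# weight is a per-class rescaling factor: 1 for present classes, 0 for absent ones,
# so absent classes contribute nothing to the loss
weight = torch.zeros(C)
weight[target.unique()] = 1.0

loss_fn = nn.CrossEntropyLoss(weight=weight)
loss = loss_fn(pred, target)
```

With reduction='mean', the average is also weighted, so zero-weight classes are excluded from the denominator as well.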