It works when there are no ignore labels. When there is an ignore label such as lb_ignore=255, I would like the one-hot vector at that position to be all zeros. How can I do this?
For example, for label=[0,1,2] the one-hot encoding is: [[1,0,0], [0,1,0], [0,0,1]]. If instead label=[0,1,255], where 255 is the ignored label, the output should be: [[1,0,0], [0,1,0], [0,0,0]].
Is there a way to do this?
The label tensor might not be a 1-D tensor; it can be a tensor of shape N,C,H,W.
One possible approach would be to map lb_ignore to an extra class index, create the one-hot tensor with nb_classes+1 classes (so the ignored entries land in the additional last channel), and then remove this additional channel using slicing.
Probably not the best approach, but it should work.
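A minimal sketch of this idea, assuming `F.one_hot` and a LongTensor of labels (the names `num_classes` and `lb_ignore` follow the thread; the exact values are just for illustration):

```python
import torch
import torch.nn.functional as F

num_classes = 3
lb_ignore = 255
labels = torch.tensor([0, 1, 255])

# Map ignored labels to an extra class index (num_classes),
# one-hot encode with num_classes + 1 classes, then slice off
# the extra channel so ignored positions become all zeros.
mapped = labels.clone()
mapped[mapped == lb_ignore] = num_classes
one_hot = F.one_hot(mapped, num_classes + 1)[..., :num_classes]
print(one_hot)
# tensor([[1, 0, 0],
#         [0, 1, 0],
#         [0, 0, 0]])
```

Since `F.one_hot` appends the class dimension last, the same code works unchanged for labels of shape N,H,W (you may need a `permute` afterwards if you want the class dimension at position 1).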
Thanks for the labels.clone() hint! Here is a version that ignores all labels larger than num_classes by mapping them to 0 and then reusing the mask to zero out their encoding:
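A sketch of that masking variant, under the same assumptions as above (`F.one_hot`, LongTensor labels; the variable names are illustrative):

```python
import torch
import torch.nn.functional as F

num_classes = 3
labels = torch.tensor([0, 1, 255])

# Mark every out-of-range label, temporarily map it to class 0
# so one_hot does not raise, then zero out those rows again.
mask = labels >= num_classes
lb = labels.clone()
lb[mask] = 0
one_hot = F.one_hot(lb, num_classes)
one_hot[mask] = 0
print(one_hot)
# tensor([[1, 0, 0],
#         [0, 1, 0],
#         [0, 0, 0]])
```

Compared to the extra-channel trick, this avoids allocating the additional class dimension, at the cost of one boolean mask and an extra indexing pass.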