How to make a one-hot encoding with an ignore label for semantic segmentation?

Hello all, I want to make a one-hot encoding with an ignore label for semantic segmentation. My labels have 22 values: 0 to 20, plus the value 255, which is the ignore label. I want to convert the labels to a one-hot encoding without considering the ignore label.

import torch

def make_one_hot(labels, num_classes):
    '''
    Converts an integer label torch.autograd.Variable to a one-hot Variable.

    Parameters
    ----------
    labels : torch.autograd.Variable of torch.cuda.LongTensor
        N x 1 x H x W, where N is the batch size.
        Each value is an integer representing the correct classification.

    Returns
    -------
    target : torch.autograd.Variable of torch.cuda.FloatTensor
        N x C x H x W, where C is the number of classes. One-hot encoded.
    '''
    # allocate an all-zero tensor with one channel per class ...
    one_hot = torch.cuda.FloatTensor(labels.size(0), num_classes, labels.size(2), labels.size(3)).zero_()
    # ... and write a 1 into the channel given by each pixel's label
    target = one_hot.scatter_(1, labels.data, 1)
    return target

How can I make the one-hot encoding handle the ignore label? Thanks so much!


Would you like to ignore the specific label in your loss function?
If so, which loss function would you like to use?

I’m asking, because the usual loss functions don’t take one-hot encoded targets.
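For example, nn.CrossEntropyLoss (and nn.NLLLoss) expect the target as a map of class indices, not as a one-hot tensor. A minimal sketch with made-up shapes:

import torch
import torch.nn as nn

logits = torch.randn(2, 21, 8, 8)         # [N, C, H, W] raw model output
target = torch.randint(0, 21, (2, 8, 8))  # [N, H, W] class index per pixel, not one-hot
loss = nn.CrossEntropyLoss()(logits, target)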

Hi, I am using nn.BCEWithLogitsLoss() to train a GAN for semantic segmentation. Do you have any suggestions on how to deal with it?

Do you only have two classes in your segmentation target or can one pixel belong to more than one class?

No, each pixel belongs to exactly one class, but the image includes multiple classes. I am implementing an adversarial loss.

Ok, then you shouldn’t use one-hot encoding, but keep the target segmentation map as [batch_size, height, width], where each pixel position holds a class label.
I’ve created a small code snippet to illustrate what I mean:

import torch
import torch.nn as nn
import torch.nn.functional as F

batch_size = 1
n_classes = 5
h, w = 24, 24

# the model output: one channel of logits per class
x = torch.randn(batch_size, n_classes, h, w)
output = F.log_softmax(x, dim=1)
# the target holds one class index per pixel, shape [batch_size, h, w]
target = torch.empty(batch_size, h, w, dtype=torch.long).random_(n_classes)

criterion = nn.NLLLoss()
loss = criterion(output, target)
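As a side note, nn.CrossEntropyLoss combines the log_softmax and the NLLLoss, so the same loss could also be computed directly from the raw logits:

criterion = nn.CrossEntropyLoss()
loss = criterion(x, target)  # x are the raw logits, no F.log_softmax needed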

Great, but where is the ignored class? My target has labels from 0 to 5 (label 5 is the ignore class).

Oh, sorry, I forgot about this issue.
Just add it to the criterion:

criterion = nn.NLLLoss(ignore_index=5)
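Under the hood, ignore_index simply excludes those pixels from the loss and its reduction. A rough sketch of the equivalent manual masking (reusing output, batch_size, h, w, n_classes and F from the snippet above, and assuming the target now contains the value 5 for ignored pixels; this only illustrates the behaviour, it is not the actual implementation):

target = torch.empty(batch_size, h, w, dtype=torch.long).random_(n_classes + 1)  # values 0..5, 5 = ignore
loss = criterion(output, target)

mask = target != 5                                                   # keep only the valid pixels
per_pixel = F.nll_loss(output, target.clamp(max=n_classes - 1), reduction='none')
manual_loss = per_pixel[mask].mean()                                 # matches loss above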

Thanks for your solution, it worked. But I am not sure your implementation works the same as BCEWithLogitsLoss? It looks like the CrossEntropyLoss function to me, and my expected loss function is BCEWithLogitsLoss.

My implementation doesn’t work like BCEWithLogitsLoss, and I’m not sure you need it.
In my approach, the probabilities are calculated over all classes and sum to one:

print(torch.exp(output[0, :, 0, 0]))
> tensor([ 0.2759,  0.3390,  0.0344,  0.3208,  0.0299])
print(torch.exp(output[0, :, 0, 0]).sum())
> tensor(1.0)

If you use BCEWithLogitsLoss, a sigmoid will (internally) be applied to your output:

output = torch.sigmoid(x)  # each class probability is independent of the others
print(output[0, :, 0, 0])
> tensor([ 0.7093,  0.7499,  0.2333,  0.7394,  0.2091])

Do you need this kind of probabilities?
Using this approach, you could predict all classes above a threshold of e.g. 0.5 to be present at that pixel position.
In my example, classes 0, 1 and 3 would be present at the first pixel position.
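In code, that prediction could look like this (a small sketch reusing x from above):

preds = torch.sigmoid(x) > 0.5  # boolean tensor of shape [batch_size, n_classes, h, w]
print(preds[0, :, 0, 0])        # here classes 0, 1 and 3 would be True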

Thanks for the explanation. Sorry @ptrblck, maybe we have misunderstood each other. In my question, given targets (labels) that contain an extra ignore class on top of the valid classes, I want to convert the targets/labels to a one-hot encoding and skip the ignore class during the conversion. I can do it with the second solution make_one_hot_v2, but it is more time-consuming than the first way.

import torch
import numpy as np

def make_one_hot(labels):
    # scatter_-based version: fails if the labels contain the ignore class (value == num_classes)
    labels.unsqueeze_(1)
    one_hot = torch.cuda.FloatTensor(labels.size(0), num_classes, labels.size(2), labels.size(3)).zero_()
    target = one_hot.scatter_(1, labels.data, 1)
    return target

def make_one_hot_v2(labels):
    # loop-based version: the ignore class is skipped because only valid class ids are compared
    labels = labels.data.cpu().numpy()
    one_hot = np.zeros((labels.shape[0], num_classes, labels.shape[1], labels.shape[2]), dtype=labels.dtype)
    for class_id in range(num_classes):
        one_hot[:, class_id, ...] = (labels == class_id)
    return torch.cuda.FloatTensor(one_hot)

num_classes = 5
h, w = 24, 24
batch_size = 1
# random_(num_classes + 1) draws values in [0, num_classes]; the value num_classes is the ignore class
labels = torch.empty(batch_size, h, w, dtype=torch.long).random_(num_classes + 1)
labels = labels.cuda()
make_one_hot(labels)     # raises the device-side assert below

labels = torch.empty(batch_size, h, w, dtype=torch.long).random_(num_classes + 1)
labels = labels.cuda()
make_one_hot_v2(labels)  # works, but is slower

If I use make_one_hot_v2, there is no error because I handle the ignore class with the loop; however, it may be slow. So my preferred way is the first function make_one_hot, but it raises an error because of the ignore class:

RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1524590031827/work/aten/src/THC/generic/THCTensorCopy.c:21

Do you have any suggestions to fix the first way?

Could you create a sample tensor of labels?
Is it a tensor of dimensions [batch_size, height, width]?

Could you run the code completely on the CPU?
This might give a better error message.
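For example, a minimal CPU reproduction of the failing scatter_ (assuming the labels contain the ignore value num_classes) fails with a plain RuntimeError pointing at the out-of-range index instead of the device-side assert:

import torch

num_classes = 5
labels = torch.full((1, 1, 2, 2), num_classes, dtype=torch.long)  # every pixel set to the ignore value
one_hot = torch.zeros(1, num_classes, 2, 2)
one_hot.scatter_(1, labels, 1)  # fails: num_classes is out of range for dimension 1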

Yes, the targets/labels have size BxHxW, but I extend them by one dim to Bx1xHxW using unsqueeze_. The error happens at scatter_: one_hot has size B x num_classes x H x W, but the labels contain num_classes + 1 different values (because of the ignore class), so the ignore class has to be skipped during the copy. I have not found a solution yet.

How would you like to ignore the class in your one-hot encoded tensor?
Do you want to remove it completely?
This code should just remove the unwanted class channel:

import torch

batch_size = 10
n_classes = 5
h, w = 24, 24
# labels take values 0..n_classes-1; here the last class (n_classes-1) is treated as the ignore class
labels = torch.empty(batch_size, 1, h, w, dtype=torch.long).random_(n_classes)
one_hot = torch.zeros(batch_size, n_classes, h, w)

# scatter into all n_classes channels, then drop the channel of the ignore class
one_hot.scatter_(1, labels, 1)
one_hot = one_hot[:, :n_classes - 1]
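The same trick should also carry over to the original setup, where 255 marks the ignore label: remap 255 to a temporary extra class before the scatter and drop that channel afterwards (a rough sketch, untested):

import torch

batch_size, h, w = 10, 24, 24
n_classes = 21                               # valid classes 0..20
ignore_label = 255

labels = torch.randint(0, n_classes, (batch_size, 1, h, w))
labels[:, :, :5] = ignore_label              # pretend some pixels are ignored

labels[labels == ignore_label] = n_classes   # remap 255 to a temporary extra class
one_hot = torch.zeros(batch_size, n_classes + 1, h, w)
one_hot.scatter_(1, labels, 1)
one_hot = one_hot[:, :n_classes]             # drop the extra channel; ignored pixels stay all-zero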

Great solution, thanks so much!