Memory-efficient one-hot tensor


I am currently working on a Convolutional Neural Network architecture that takes semantic segmentation maps as input. The segmentation is fed in as a one-hot tensor of size:

BatchSize x NumberClasses x H x W

The code I'm currently using to create this is the following:

    def make_one_hot(labels, C=151):
        """Converts an integer label torch image to a one-hot tensor of probabilities.

        Parameters
        ----------
        labels : torch.cuda.LongTensor N x 1 x H x W, where N is batch size.
            Each value is an integer representing a semantic label.
        C : integer.
            The number of classes in labels.

        Returns
        -------
        target : torch.cuda.FloatTensor N x C x H x W, where C is class number. One-hot encoded.
        """
        # Allocate the dense target with C channels, matching the docstring
        # (the original allocated C+1, one channel more than documented).
        one_hot = torch.cuda.FloatTensor(labels.size(0), C, labels.size(2), labels.size(3)).zero_()

        # Scatter 1s along the class dimension at each pixel's label index
        target = one_hot.scatter_(1, labels, 1)

        return target
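For illustration, here is a minimal CPU sketch of the same scatter_-based encoding (toy shapes chosen for the example; the original operates on CUDA tensors, and `torch.zeros` stands in for `torch.cuda.FloatTensor(...).zero_()`):

```python
import torch

# Toy label map: N=1 batch, 1 channel, H=2, W=2, with class indices in [0, C)
labels = torch.tensor([[[[0, 2], [1, 0]]]])
C = 3

# Dense zero tensor of shape N x C x H x W
one_hot = torch.zeros(labels.size(0), C, labels.size(2), labels.size(3))

# Write a 1 along the class dimension (dim=1) at each pixel's label index
target = one_hot.scatter_(1, labels, 1.0)
```

Each pixel contributes exactly one nonzero entry, so the dense result stores N x C x H x W floats to represent N x H x W integers, which is where the memory cost comes from.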

I was wondering if there is a more memory-efficient way to handle this kind of tensor. For instance, is it possible to create a sparse tensor from it? And if so, is it then possible to use a sparse tensor as input to a CNN?



Unfortunately, we only have limited support for sparse tensors at the moment; see here for more info.
This is the best you can do memory-wise using dense tensors.
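One practical way to reduce peak memory, sketched below under the assumption that the integer label maps fit in memory: keep the labels as an N x H x W integer tensor and expand only the current batch to one-hot on the fly with `torch.nn.functional.one_hot` (available in PyTorch >= 1.1), rather than storing dense one-hot tensors for the whole dataset:

```python
import torch
import torch.nn.functional as F

# Integer label map for one batch: N x H x W (toy sizes for illustration)
labels = torch.randint(0, 151, (2, 4, 4))

# Expand to one-hot; F.one_hot appends the class dimension last: N x H x W x C
one_hot = F.one_hot(labels, num_classes=151)

# Move the class dimension to position 1 (N x C x H x W) and cast for the CNN
one_hot = one_hot.permute(0, 3, 1, 2).float()
```

The transient batch tensor is the only dense N x C x H x W allocation, and it can be created directly on the GPU from CUDA labels.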

I'll check that information! Thank you!