Modifying semantic-segmentation pretrained model for 10 classes instead of 50

I have a pretrained model that I use for semantic segmentation. It predicts one of 50 classes for each pixel.
I want to further train it on my own dataset, but for predicting only 1 of 10 possible classes (a subset of the 50 classes). The other 40 classes don’t appear in the new dataset.

Obviously, I could just ignore the other classes, but that way I would lose much of the accuracy already captured by the pretrained model.

I want to modify the last layer of the model, and change the saved pretrained model to fit it.

The last layers of the model are:

num_classes = 50
self.last_layer = nn.Sequential(
    nn.Conv2d(2048, 512, kernel_size=3, stride=1, padding=1, bias=False),
    nn.ReLU(inplace=True),
    nn.Conv2d(512, num_classes, kernel_size=1))

The forward pass does this:

x = .......
x = self.last_layer(x)
x = nn.functional.log_softmax(x, dim=1)

And the loss is evaluated like this:

loss = my_criterion(x, gt)  # my_criterion = nn.NLLLoss(ignore_index=-1)

The weights are loaded like this:

model.load_state_dict(torch.load(weights_path, map_location=lambda s, l: s), strict=False)
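For reference, a minimal sketch of how the checkpoint could be adapted to a 10-class head (not tested against your checkpoint; it assumes the mismatched parameters are stored under the keys last_layer.2.weight and last_layer.2.bias, which follows from the Sequential definition above). Note that strict=False only ignores missing or unexpected keys, it does not handle shape mismatches, so the old classifier weights have to be dropped before loading:

# Sketch: load the pretrained weights, but skip the 50-class classifier.
# Assumes the 1x1 conv is the third module of the Sequential above,
# i.e. its parameters live under "last_layer.2.*".
state_dict = torch.load(weights_path, map_location=lambda s, l: s)
filtered = {k: v for k, v in state_dict.items()
            if not k.startswith('last_layer.2.')}
model.load_state_dict(filtered, strict=False)
# The new 10-class conv is then left randomly initialized and trained on the new data.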

I really don’t want to retrain the whole modified model.

I understand that another possibility is to keep the model as-is and, before running the softmax, zero out the predictions for the other classes. But that feels like keeping parts of the model I don't need, and it would make training more difficult.
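For completeness, the variant I describe could look roughly like this (a sketch only; kept_classes is a hypothetical list of the 10 original class indices, and the ground truth would have to be remapped to 0..9 accordingly). Instead of zeroing logits, one can select the 10 relevant channels before the log_softmax, so the softmax is computed only over those classes:

# Sketch of the "keep the 50-class head, restrict predictions" variant.
# kept_classes: hypothetical indices of the 10 classes present in the new dataset.
kept_classes = torch.tensor([0, 3, 7, 12, 15, 21, 28, 33, 41, 47])  # example only

x = self.last_layer(x)                    # shape: [N, 50, H, W]
x = x[:, kept_classes]                    # shape: [N, 10, H, W]
x = nn.functional.log_softmax(x, dim=1)

# gt must then be remapped from the original 50-class ids to 0..9
# (e.g. via a lookup table) before computing nn.NLLLoss.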

Your use case sounds like a perfect fit for transfer learning.
You would have to replace your last layer(s) and retrain them or the whole model.
How "far" back into the model you retrain depends on the similarity between your new and old data and on how much new data you have.
If you have very little data, I would freeze most of the model and just retrain the final classifier. Otherwise you could be braver and retrain more. :wink:
Have a look at the transfer learning tutorial for a good introduction.
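A minimal sketch of the "freeze most of the model, retrain the final classifier" approach, using the layer names from your question (everything else is illustrative):

import torch
import torch.nn as nn

num_new_classes = 10

# Freeze all pretrained parameters.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final 1x1 conv (index 2 of the Sequential shown above)
# with a fresh 10-class classifier; its parameters require grad by default.
model.last_layer[2] = nn.Conv2d(512, num_new_classes, kernel_size=1)

# Optimize only the parameters that are still trainable.
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()),
    lr=1e-3, momentum=0.9)

If you have more data and want to retrain deeper, you can simply skip the freezing loop (or unfreeze selected layers) and use a smaller learning rate for the pretrained parameters.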