Multiclass Semantic Segmentation (UNet++)

I am trying to use Unet++ for semantic segmentation.
input shape is: [N, H, W], e.g. [16, 256, 256]
target shape is: [N, 1, H, W], e.g. [16, 1, 256, 256]
and output shape is: [N, 1, H, W], e.g. [16, 1, 256, 256]
loss function is: BCEWithLogitsLoss
I have trained my model with this architecture, and it works fine for 2 classes.

Now I am trying to use this trained model for 3 classes. I have already converted my mask pixel values to the corresponding class values. What I don't understand is how I could use this same model for 3 classes, since in the model architecture out_channels is fixed to 1.

I have tried with
class_number = 3
model.final1.out_channels = class_number
model.final2.out_channels = class_number
model.final3.out_channels = class_number
model.final4.out_channels = class_number
But it seems like it still generates output of shape [16, 1, 256, 256], whereas I hoped to get [16, 3, 256, 256].

I am new to PyTorch, so your suggestions would be highly appreciated :slight_smile:

Since the last convolution layer has only a single output channel, you won't be able to use it for 3 classes. Assigning to the out_channels attribute only changes that stored value; the layer's weight tensor keeps its original single-channel shape, which is why the output shape didn't change.
Instead you would have to assign a new nn.Conv2d layer with out_channels=3 and retrain this layer for your new use case.
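A minimal sketch of that replacement, assuming the final heads are 1x1 convolutions with 32 input channels (the value from the snippet later in this thread; check your own model) and standing in a tiny dummy module for the full UNet++:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the trained UNet++ -- only the final
# heads matter here; your real model keeps its encoder/decoder.
class DummyNestedUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.final1 = nn.Conv2d(32, 1, kernel_size=1)
        self.final2 = nn.Conv2d(32, 1, kernel_size=1)
        self.final3 = nn.Conv2d(32, 1, kernel_size=1)
        self.final4 = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        return self.final4(x)

model = DummyNestedUNet()
class_number = 3

# Swap each head for a fresh 3-channel conv. The new layers are
# randomly initialized, so they need to be (re)trained.
for name in ["final1", "final2", "final3", "final4"]:
    old = getattr(model, name)
    setattr(model, name, nn.Conv2d(old.in_channels, class_number, kernel_size=1))

out = model(torch.randn(16, 32, 256, 256))
print(out.shape)  # torch.Size([16, 3, 256, 256])
```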

Thank you so much. It worked :smiley:

What I have done is:

import torch.nn as nn

class AddnewConvLayer(nn.Module):
    def __init__(self):
        super().__init__()  # required so nn.Module can register submodules
        # 32 input channels match the final decoder feature maps; 3 output classes
        self.convolution_layer = nn.Conv2d(32, 3, kernel_size=1)

    def forward(self, x):
        return self.convolution_layer(x)

model.final1 = AddnewConvLayer()
model.final2 = AddnewConvLayer()
model.final3 = AddnewConvLayer()
model.final4 = AddnewConvLayer()
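Note that with 3 output channels, BCEWithLogitsLoss no longer matches the single-channel target layout. A common choice for multiclass segmentation (an assumption here, not something stated in the thread) is nn.CrossEntropyLoss, which takes raw [N, C, H, W] logits and a [N, H, W] target of class indices:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# Dummy tensors with the shapes from this thread.
logits = torch.randn(16, 3, 256, 256)          # [N, C, H, W] model output
target = torch.randint(0, 3, (16, 256, 256))   # [N, H, W] class indices 0..2

loss = criterion(logits, target)  # scalar loss, ready for loss.backward()
```

This also means the target no longer needs its singleton channel dimension; a [N, 1, H, W] mask can be squeezed with target.squeeze(1).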