Hi!

I am trying to use UNet++ for semantic segmentation.

input shape: [N, H, W], e.g. [16, 256, 256]

target shape: [N, 1, H, W], e.g. [16, 1, 256, 256]

output shape: [N, 1, H, W], e.g. [16, 1, 256, 256]

loss function: BCEWithLogitsLoss
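For context, here is a minimal sketch of my current 2-class setup, with dummy tensors standing in for my data and the model output:

```python
import torch
import torch.nn as nn

# BCEWithLogitsLoss takes the raw logits and a float target of the
# same shape, [N, 1, H, W] in my case (dummy tensors for illustration).
criterion = nn.BCEWithLogitsLoss()
output = torch.randn(16, 1, 256, 256)                    # model logits
target = torch.randint(0, 2, (16, 1, 256, 256)).float()  # binary mask
loss = criterion(output, target)
print(loss.dim())  # 0 -> the loss is a scalar
```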

I have trained my model with this architecture, and it works fine for 2 classes.

Now I am trying to use this **trained** model for **3 classes**. I have already converted my mask pixel values to the corresponding class indices. What I don't understand is how to use the same model for 3 classes, since in the model architecture `out_channels = 1` is fixed.

I have tried:

```python
class_number = 3

model.final1.out_channels = class_number
model.final2.out_channels = class_number
model.final3.out_channels = class_number
model.final4.out_channels = class_number
```

But it seems like it still generates output of shape [16, 1, 256, 256], where I hoped to get [16, 3, 256, 256].
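I suspect I need to replace the head layers themselves rather than just the attribute. Here is a sketch of what I mean, using a toy stand-in for the model (I'm assuming `final1`…`final4` are plain `nn.Conv2d` layers, as in the common NestedUNet implementation, and I know the new heads would start untrained):

```python
import torch
import torch.nn as nn

class_number = 3

# Toy stand-in for the trained model: in UNet++ the final1..final4
# heads are (I assume) plain nn.Conv2d layers with out_channels=1.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.final1 = nn.Conv2d(32, 1, kernel_size=1)
        self.final2 = nn.Conv2d(32, 1, kernel_size=1)
        self.final3 = nn.Conv2d(32, 1, kernel_size=1)
        self.final4 = nn.Conv2d(32, 1, kernel_size=1)

model = ToyModel()

# Assigning to .out_channels only changes the attribute, not the
# weight tensor, so the layer still emits 1 channel. Instead,
# replace each head with a fresh Conv2d (freshly initialized):
for name in ["final1", "final2", "final3", "final4"]:
    old = getattr(model, name)
    setattr(model, name, nn.Conv2d(old.in_channels, class_number,
                                   kernel_size=old.kernel_size))

features = torch.randn(16, 32, 256, 256)
print(model.final4(features).shape)  # torch.Size([16, 3, 256, 256])
```

Is this the right approach, or is there a better way to reuse the trained weights?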

I am new to PyTorch, so your suggestions would be highly appreciated.