Implement a model similar to the UNet

What size is your input data? And in which layer is the error occurring?

It’s 320 * 240. It’s occurring in x_up2 = self.up2(x_up1, x4).


Hmm… If we divide 240 by 2 at each max-pool, after five pools we get 240/32 = 7.5. I guess this is what’s causing the error, since it turns into a size mismatch. Can you choose a different input size? Otherwise I’m not sure what to do, but googling might help :slight_smile: I’ll tune in tomorrow as I’m heading to bed.
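To make that arithmetic concrete, here is a rough sketch (assuming five 2x2 pooling stages, which is where the 240/32 above comes from, and stride-2 transposed convs on the way up) of how the odd sizes appear and why an up-sampled feature map can end up one pixel smaller than its skip connection:

h = 240
sizes = [h]
for _ in range(5):       # five max-pool stages assumed
    h = h // 2           # MaxPool2d(2) floors odd sizes
    sizes.append(h)
print(sizes)             # [240, 120, 60, 30, 15, 7]

# a stride-2 ConvTranspose2d doubles the size on the way back up:
print(sizes[-1] * 2, "vs skip size", sizes[-2])   # 14 vs 15 -> size mismatch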

I’m getting a different error though: Got 14 and 15 in dimension 3.

I can’t change the size of the images. Thank you :slight_smile:

If resizing the input can’t be done, the only way I can think of is to add padding. Note that this probably won’t work if your input size is different from the one you had.

I created this layer in Unet -> self.pad = nn.ZeroPad2d((1, 0, 0, 0))

It’s used in the forward function to alter x_up1, right before self.up2 -> x_up1 = self.pad(x_up1)
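Here is a standalone toy example of what that does (the channel count and spatial sizes are just illustrative, picked to match the 14 vs 15 mismatch above):

import torch
import torch.nn as nn

# the up-sampled tensor is one pixel narrower than the skip connection,
# so pad one zero column on the left before it meets the skip in up2
pad = nn.ZeroPad2d((1, 0, 0, 0))        # (left, right, top, bottom)

x_up1 = torch.randn(1, 192, 30, 14)     # up-sampled feature map, width 14
x4 = torch.randn(1, 192, 30, 15)        # skip connection, width 15

x_up1 = pad(x_up1)                      # width 14 -> 15
print(x_up1.size())                     # torch.Size([1, 192, 30, 15])
x = torch.cat((x_up1, x4), dim=1)       # the concatenation inside up2 now works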

Let me know if you figure out a better way :smiley:!


@Oli thanks for letting me know. I will try your way as well.

I did resize my images to 256 * 256, and it works. The results aren’t bad, but the original U-Net model is still performing better than this U-Net 1.
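For reference, the resize itself is nothing fancy, roughly something like this with torchvision (just a sketch, assuming PIL images as input; the masks need their own transform):

from torchvision import transforms

# resize the input images to 256 x 256 before feeding them to the U-Net
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])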

I might post here later about the model discussed earlier in this thread (U-Net 2).


@Oli regarding the first implementation (U-Net 2), I’m having trouble getting the self.conv_trans1 layers to match the attached pic. (Please note that in the pic the kernel size value corresponds to the number of feature maps, i.e. the dimensionality of the layer’s output.)

Your model is fine for the down layers and also for all layers in the up conv (such as the first and second conv) except self.conv_trans1. I changed the UpConv to the code below, but it still isn’t identical to the attached pic (U-Net 2).
For example, the output of the up1.conv_trans1 layer is 768 while it should be 384.

class UpConv(nn.Module):
    def __init__(self, in_channels, in_channels_skip, out_channels, kernel_size, padding, stride):
        super(UpConv, self).__init__()
        self.act = nn.ReLU()
        # transposed conv that doubles the spatial size (kernel 2, stride 2)
        self.conv_trans1 = nn.ConvTranspose2d(in_channels, out_channels, kernel_size=2, padding=0, stride=2)
        self.b3 = nn.BatchNorm2d(out_channels, eps=1e-05, momentum=0.1, affine=True)
        self.conv3 = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, padding=padding, stride=stride)
        self.b4 = nn.BatchNorm2d(out_channels, eps=1e-05, momentum=0.1, affine=True)
        self.conv4 = nn.Conv2d(out_channels, out_channels, kernel_size=kernel_size, padding=padding, stride=stride)
        self.b5 = nn.BatchNorm2d(out_channels, eps=1e-05, momentum=0.1, affine=True)

    def forward(self, x, x_skip):
        # concatenate the skip connection along the channel dim before upsampling
        x = torch.cat((x, x_skip), dim=1)
        x = self.conv_trans1(x)
        x = self.act(self.b3(x))
        x = self.conv3(x)
        x = self.act(self.b4(x))
        x = self.conv4(x)
        x = self.act(self.b5(x))
        return x

Here is the printed model for the up layers:

up1.conv_trans1.weight          torch.Size([768, 384, 2, 2])
up1.conv_trans1.bias          torch.Size([384])
up1.b3.weight          torch.Size([384])
up1.b3.bias          torch.Size([384])
up1.b3.running_mean          torch.Size([384])
up1.b3.running_var          torch.Size([384])
up1.b3.num_batches_tracked          torch.Size([])
up1.conv3.weight          torch.Size([384, 768, 3, 3])
up1.conv3.bias          torch.Size([384])
up1.b4.weight          torch.Size([384])
up1.b4.bias          torch.Size([384])
up1.b4.running_mean          torch.Size([384])
up1.b4.running_var          torch.Size([384])
up1.b4.num_batches_tracked          torch.Size([])
up1.conv4.weight          torch.Size([384, 384, 3, 3])
up1.conv4.bias          torch.Size([384])
up1.b5.weight          torch.Size([384])
up1.b5.bias          torch.Size([384])
up1.b5.running_mean          torch.Size([384])
up1.b5.running_var          torch.Size([384])
up1.b5.num_batches_tracked          torch.Size([])
up2.conv_trans1.weight          torch.Size([384, 192, 2, 2])
up2.conv_trans1.bias          torch.Size([192])
up2.b3.weight          torch.Size([192])
up2.b3.bias          torch.Size([192])
up2.b3.running_mean          torch.Size([192])
up2.b3.running_var          torch.Size([192])
up2.b3.num_batches_tracked          torch.Size([])
up2.conv3.weight          torch.Size([192, 384, 3, 3])
up2.conv3.bias          torch.Size([192])
up2.b4.weight          torch.Size([192])
up2.b4.bias          torch.Size([192])
up2.b4.running_mean          torch.Size([192])
up2.b4.running_var          torch.Size([192])
up2.b4.num_batches_tracked          torch.Size([])
up2.conv4.weight          torch.Size([192, 192, 3, 3])
up2.conv4.bias          torch.Size([192])
up2.b5.weight          torch.Size([192])
up2.b5.bias          torch.Size([192])
up2.b5.running_mean          torch.Size([192])
up2.b5.running_var          torch.Size([192])
up2.b5.num_batches_tracked          torch.Size([])
up3.conv_trans1.weight          torch.Size([192, 96, 2, 2])
up3.conv_trans1.bias          torch.Size([96])
up3.b3.weight          torch.Size([96])
up3.b3.bias          torch.Size([96])
up3.b3.running_mean          torch.Size([96])
up3.b3.running_var          torch.Size([96])
up3.b3.num_batches_tracked          torch.Size([])
up3.conv3.weight          torch.Size([96, 192, 3, 3])
up3.conv3.bias          torch.Size([96])
up3.b4.weight          torch.Size([96])
up3.b4.bias          torch.Size([96])
up3.b4.running_mean          torch.Size([96])
up3.b4.running_var          torch.Size([96])
up3.b4.num_batches_tracked          torch.Size([])
up3.conv4.weight          torch.Size([96, 96, 3, 3])
up3.conv4.bias          torch.Size([96])
up3.b5.weight          torch.Size([96])
up3.b5.bias          torch.Size([96])
up3.b5.running_mean          torch.Size([96])
up3.b5.running_var          torch.Size([96])
up3.b5.num_batches_tracked          torch.Size([])
up4.conv_trans1.weight          torch.Size([96, 48, 2, 2])
up4.conv_trans1.bias          torch.Size([48])
up4.b3.weight          torch.Size([48])
up4.b3.bias          torch.Size([48])
up4.b3.running_mean          torch.Size([48])
up4.b3.running_var          torch.Size([48])
up4.b3.num_batches_tracked          torch.Size([])
up4.conv3.weight          torch.Size([48, 96, 3, 3])
up4.conv3.bias          torch.Size([48])
up4.b4.weight          torch.Size([48])
up4.b4.bias          torch.Size([48])
up4.b4.running_mean          torch.Size([48])
up4.b4.running_var          torch.Size([48])
up4.b4.num_batches_tracked          torch.Size([])
up4.conv4.weight          torch.Size([48, 48, 3, 3])
up4.conv4.bias          torch.Size([48])
up4.b5.weight          torch.Size([48])
up4.b5.bias          torch.Size([48])
up4.b5.running_mean          torch.Size([48])
up4.b5.running_var          torch.Size([48])
up4.b5.num_batches_tracked          torch.Size([])
out.weight          torch.Size([2, 48, 3, 3])
out.bias          torch.Size([2])

Could you please point me in the right direction? Thank you in advance. At the moment the model will run with input sizes of 256 * 256 and 320 * 240.

Sorry Neda, I’m not sure. You can print-debug this stuff to find out for yourself; print(tensor_name.size()) will get you far. The ConvTranspose2d weight matrix might be in a different shape compared to a normal conv layer.

I suggest you read up on it in the docs and print the sizes of the in and out channels when the layer is created.
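For instance, something along these lines (a quick standalone check, with the channel numbers taken from your printout) shows the difference in weight layout:

import torch.nn as nn

conv = nn.Conv2d(768, 384, kernel_size=3)
trans = nn.ConvTranspose2d(768, 384, kernel_size=2, stride=2)

# Conv2d stores its weight as (out_channels, in_channels, kH, kW),
# while ConvTranspose2d stores it as (in_channels, out_channels, kH, kW)
print(conv.weight.size())    # torch.Size([384, 768, 3, 3])
print(trans.weight.size())   # torch.Size([768, 384, 2, 2])

So a weight of [768, 384, 2, 2] on a ConvTranspose2d still means 768 channels in and 384 channels out.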