Hello, just a conceptual question. I see that when we use conv2d it is usually followed by batchnorm and an activation. Is this also true for conv2d transpose? If yes, is it for the same reason as with a regular conv2d?
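For reference, the conv2d pattern I mean is roughly this (channel sizes are just placeholders):

conv_block = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)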
Goal: I’m trying to stack the following layers:
self.up1 = nn.ConvTranspose2d(in_channels=out3, out_channels=out2, kernel_size=3, padding=1,
                              stride=2, output_padding=1)
self.up2 = nn.ConvTranspose2d(in_channels=out2, out_channels=out1, kernel_size=3, padding=1,
                              stride=2, output_padding=1)
self.up3 = nn.ConvTranspose2d(in_channels=out1, out_channels=n_out_channels, kernel_size=3, padding=1,
                              stride=2, output_padding=1)
So I’m just wondering whether I need to insert both batchnorm and ReLU between those layers.
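In other words, is something like the sketch below the right idea? I’m assuming BatchNorm2d and ReLU go after up1 and up2, and that the last layer is left without them since the output usually needs a different (or no) activation:

import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, out1, out2, out3, n_out_channels):
        super().__init__()
        # Each upsampling step: ConvTranspose2d -> BatchNorm2d -> ReLU,
        # mirroring the usual Conv2d -> BatchNorm2d -> ReLU pattern.
        self.up1 = nn.ConvTranspose2d(in_channels=out3, out_channels=out2, kernel_size=3,
                                      padding=1, stride=2, output_padding=1)
        self.bn1 = nn.BatchNorm2d(out2)
        self.up2 = nn.ConvTranspose2d(in_channels=out2, out_channels=out1, kernel_size=3,
                                      padding=1, stride=2, output_padding=1)
        self.bn2 = nn.BatchNorm2d(out1)
        # Final layer: no batchnorm/ReLU, so the output range isn't clipped.
        self.up3 = nn.ConvTranspose2d(in_channels=out1, out_channels=n_out_channels, kernel_size=3,
                                      padding=1, stride=2, output_padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.up1(x)))
        x = self.relu(self.bn2(self.up2(x)))
        return self.up3(x)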