I’m using a pretrained UNet model whose first encoder block has the following architecture:
UNet(
  (encoder1): Sequential(
    (enc1conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (enc1norm1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (enc1relu1): ReLU(inplace=True)
    (enc1conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (enc1norm2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (enc1relu2): ReLU(inplace=True)
  )
  ...
)
The model expects an input that has been normalized with min-max normalization. Instead of normalizing the image beforehand (for example with torchvision.transforms), I want to add a batch norm or layer norm layer at the beginning of the network that does the same work for me, so that I can feed the image in as-is.
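To make the idea concrete, here is roughly what I have in mind. This is only a sketch: `backbone` below is a small stand-in for the first encoder block, not my actual pretrained UNet, and I’m not sure `nn.BatchNorm2d(3)` is the right layer to prepend.

```python
import torch
import torch.nn as nn

# Stand-in for the pretrained UNet's first encoder block (not the real model).
backbone = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),
)

# Prepend a normalization layer so raw, unnormalized images can be fed directly.
# Is BatchNorm2d over the 3 input channels the right choice here?
model = nn.Sequential(
    nn.BatchNorm2d(3),
    backbone,
)

x = torch.rand(4, 3, 64, 64) * 255.0  # raw image-range input, no normalization
out = model(x)
print(out.shape)  # torch.Size([4, 32, 64, 64])
```

Is this the right way to attach such a layer to a pretrained model, and would batch norm here actually behave like the min-max normalization the model was trained with?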
Sorry if this question has some flaws; I’m new to PyTorch.