I am designing an encoder-decoder network in which the number of convolutional layers is four and the number of transposed convolutional layers is four. My input size is 8 x 1 x 256 x 256: 8 is the batch size, 1 is the number of channels (grayscale image), and H = 256, W = 256. The kernel size is 3 x 3, the stride is 2 x 2, and the padding is 1. However, I have to fix the number of filters at 96 in each layer, and I don't understand how to do that.
The first convolutional layer will have in_channels=1 (since the image is grayscale) and out_channels=96.
The second convolutional layer will have in_channels=96, and out_channels=?
I have four convolutional layers, and each layer should have 96 filters. I don't understand how this can be done; my current understanding is sketched below.
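If I understand it correctly, "96 filters per layer" just means out_channels=96 everywhere, and in_channels of each layer must equal the out_channels of the layer before it. A minimal sketch of the encoder side (the nn.Sequential wrapper and the ReLUs are my own choices, not a fixed requirement):

```python
import torch
import torch.nn as nn

# Four stride-2 convolutions, each with 96 filters.
# Only the first layer has in_channels=1 (grayscale input); every later
# layer consumes the 96 feature maps produced by the previous layer.
# Spatial size per layer: out = floor((in + 2*padding - kernel) / stride) + 1,
# which halves H and W each time: 256 -> 128 -> 64 -> 32 -> 16.
encoder = nn.Sequential(
    nn.Conv2d(1,  96, kernel_size=3, stride=2, padding=1),  # 256 -> 128
    nn.ReLU(inplace=True),
    nn.Conv2d(96, 96, kernel_size=3, stride=2, padding=1),  # 128 -> 64
    nn.ReLU(inplace=True),
    nn.Conv2d(96, 96, kernel_size=3, stride=2, padding=1),  # 64 -> 32
    nn.ReLU(inplace=True),
    nn.Conv2d(96, 96, kernel_size=3, stride=2, padding=1),  # 32 -> 16
    nn.ReLU(inplace=True),
)

x = torch.randn(8, 1, 256, 256)   # batch of 8 grayscale 256x256 images
print(encoder(x).shape)           # torch.Size([8, 96, 16, 16])
```

So the answer to "out_channels = ?" would simply be 96 for every layer after the first.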
Inside the forward pass of my model, the deepest block is applied as:

```python
b = self.b(p4)   # bottleneck block applied to the fourth pooled output p4
```

and the test driver (with the imports it uses) is:

```python
import cv2
from torchvision import transforms

if __name__ == "__main__":
    # inputs = torch.randn([1, 1, 256, 256])
    image = cv2.imread(r'C:\Users\Idrees Bhat\Desktop\Research\Insha\Dataset\o_gray\1.png', 0)  # 0 = grayscale
    convert_tensor = transforms.ToTensor()
    inputs = convert_tensor(image)   # [1, 256, 256]: C, H, W -- no batch dimension
    inputs = inputs.unsqueeze(0)     # [1, 1, 256, 256]: add batch dimension N = 1
    # print(inputs.shape)
    # print(type(inputs))
    model = build_unet()
    y = model(inputs)
    print(y.shape)
```
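As a standalone shape check (the zero array here is a hypothetical stand-in for the loaded image): transforms.ToTensor() adds the channel dimension and rescales to [0, 1], but the batch dimension has to be added by hand.

```python
import numpy as np
from torchvision import transforms

gray = np.zeros((256, 256), dtype=np.uint8)   # stand-in for the 256x256 grayscale image
t = transforms.ToTensor()(gray)
print(t.shape)                # torch.Size([1, 256, 256])    -- C, H, W only
print(t.unsqueeze(0).shape)   # torch.Size([1, 1, 256, 256]) -- matches the commented-out randn input
```

(cv2.imread also returns None if the path is wrong, which is worth checking before the conversion.)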
Error: `RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 31 but got size 32 for tensor number 1 in the list.`

Note: the image is grayscale and its size is [256, 256]. I want a fixed number of filters per layer, i.e. 96 filters in every layer.
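If my arithmetic is right, this error comes from the transposed convolutions. With kernel_size=3, stride=2, padding=1, nn.ConvTranspose2d produces out = (in - 1)*2 - 2*1 + 3 = 2*in - 1, so the 16 x 16 bottleneck map upsamples to 31 x 31, while the 32 x 32 encoder feature map it is concatenated with has size 32, hence "Expected size 31 but got size 32". Adding output_padding=1 makes the output exactly 2*in. A sketch of the decoder side under that assumption (again 96 filters everywhere; the ReLUs are my own choice):

```python
import torch
import torch.nn as nn

# ConvTranspose2d output size: (in - 1)*stride - 2*padding + kernel_size + output_padding
#   output_padding=0: 16 -> (16 - 1)*2 - 2 + 3     = 31   (mismatches the 32x32 skip tensor)
#   output_padding=1: 16 -> (16 - 1)*2 - 2 + 3 + 1 = 32   (matches)
decoder = nn.Sequential(
    nn.ConvTranspose2d(96, 96, kernel_size=3, stride=2, padding=1, output_padding=1),  # 16  -> 32
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(96, 96, kernel_size=3, stride=2, padding=1, output_padding=1),  # 32  -> 64
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(96, 96, kernel_size=3, stride=2, padding=1, output_padding=1),  # 64  -> 128
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(96, 96, kernel_size=3, stride=2, padding=1, output_padding=1),  # 128 -> 256
    nn.ReLU(inplace=True),
)

x = torch.randn(8, 96, 16, 16)    # e.g. the encoder sketch's output
print(decoder(x).shape)           # torch.Size([8, 96, 256, 256])
```

One caveat: in an actual U-Net, each torch.cat with a skip tensor doubles the channel count, so the convolution that consumes the concatenated tensor needs in_channels = 96 + 96 = 192 even though its out_channels stays 96.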