Get the output size of an arbitrary layer in an encoder-decoder network

Hi.

I am using a resnet34 model to create an encoder-decoder network. My problem is that when I reach the decoder side, the output shape does not match the input shape. I am defining my network like this:

import torch.nn as nn
from torchvision import models

m = models.resnet34(pretrained=False).cuda()
m = nn.Sequential(*list(m.children())[:-3])  # keep everything up to layer3 (256 output channels)
conv = nn.Conv2d(256, code_sz, kernel_size=(2, 2), stride=2).cuda()
m.add_module('CodeIn', conv)
conv = nn.Conv2d(32, 8, kernel_size=(2, 2), stride=2).cuda()  # assumes code_sz == 32 so the channels line up
m.add_module('Conv_2', conv)
conv = nn.Conv2d(8, 2, kernel_size=(2, 2), stride=2).cuda()
m.add_module('Conv_3', conv)

This is the decoder:

add_layer(m,1,2,'CodeOut_1',scale=2)
add_layer(m,2,8,'CodeOut_2')
add_layer(m,8,32,'CodeOut_3')
add_layer(m,32,256,'CodeOut_4',scale=2)
add_layer(m,256,128,'Upsample0')
add_layer(m,128,64,'Upsample1')
add_layer(m,64,32,'Upsample2')
add_layer(m,32,3,'Upsample3',act='sig', scale=2)

Here add_layer just adds a new decoder layer. One of the arguments I can pass to it is output_size, but I don't know how to compute that value, or how to assign it dynamically during training, because my input images also vary in size. What should I do?
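In case it helps frame the question: if add_layer wraps nn.ConvTranspose2d (an assumption on my part, since I haven't shown its body), then the transposed convolution's forward already accepts an output_size argument that resolves the ambiguity between the possible output sizes for a given stride:

```python
import torch
import torch.nn as nn

# Hypothetical upsampling step; assumes the decoder uses ConvTranspose2d under the hood.
up = nn.ConvTranspose2d(8, 4, kernel_size=2, stride=2)

x = torch.randn(1, 8, 13, 13)
y = up(x, output_size=(27, 27))  # request the odd target instead of the default 26
print(tuple(y.shape))  # (1, 4, 27, 27)
```

The requested size must be within stride - 1 of the default output size, so this only chooses among the valid alternatives; it does not do arbitrary resizing.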

Is there some way I can use m._modules to get the size?
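One idea I had, in case someone can confirm it is the right direction: registering forward hooks records every layer's output size for whatever input comes in, without poking at m._modules directly. A toy sketch with a small stand-in encoder (the same hooks would attach to the real model):

```python
import torch
import torch.nn as nn

# Toy stand-in for the encoder; replace with the real model to trace it.
enc = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=2, stride=2),
    nn.Conv2d(8, 16, kernel_size=2, stride=2),
)

sizes = {}

def make_hook(name):
    def hook(module, inputs, output):
        sizes[name] = tuple(output.shape[-2:])  # record spatial size only
    return hook

for name, layer in enc.named_children():
    layer.register_forward_hook(make_hook(name))

enc(torch.randn(1, 3, 64, 48))  # any input size works
print(sizes)  # {'0': (32, 24), '1': (16, 12)}
```

The sizes dict then holds the current spatial size of every layer, so it could feed the output_size argument on the decoder side even when input images vary.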