I am trying to recreate a GAN paper, originally implemented in Keras, in PyTorch. The models in the paper are flexible in the number of convolutional layers they have, depending on the resolution of the training images; the number of filters in each convolutional layer also depends on the resolution.
Now, because PyTorch requires the number of input channels to be specified when a layer is created, I've run into a bottleneck: the input channels differ when upsampling images of different resolutions.
Is there any way around this without having to switch to Keras?
Here is the bottleneck step implemented in Keras:
while crt_res != resolution:
    cnn.add(UpSampling2D(size=(2, 2)))
    if crt_res < resolution / 2:
        cnn.add(Conv2D(
            256, (5, 5), padding='same',
            activation='relu', kernel_initializer='glorot_normal', use_bias=False)
        )
    else:
        cnn.add(Conv2D(128, (5, 5), padding='same',
                       activation='relu', kernel_initializer='glorot_normal', use_bias=False))
    crt_res = crt_res * 2
    assert crt_res <= resolution, \
        "Error: final resolution [{}] must equal i*2^n. Initial resolution i is [{}]. n must be a natural number.".format(resolution, init_resolution)

cnn.add(Conv2D(channels, (2, 2), padding='same',
               activation='tanh', kernel_initializer='glorot_normal', use_bias=False))
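One way around this, without any PyTorch-specific tricks, is to note that the channel counts in the Keras loop are fully determined by `crt_res` and `resolution`, so you can track them in an ordinary Python variable while building the layers in `__init__` (or a builder function) and pass the running count as `in_channels` to each `nn.Conv2d`. A minimal sketch, assuming the loop above; the function name, `init_resolution`, and `in_channels` arguments are illustrative, not from the paper:

```python
import torch
import torch.nn as nn

def build_upsampler(init_resolution, resolution, in_channels, out_image_channels):
    """Mirror the Keras loop, carrying the current channel count forward
    so each Conv2d gets the correct in_channels."""
    layers = []
    crt_res = init_resolution
    crt_channels = in_channels  # channels entering the next conv
    while crt_res != resolution:
        layers.append(nn.Upsample(scale_factor=2))  # ~ UpSampling2D(size=(2, 2))
        out_channels = 256 if crt_res < resolution / 2 else 128
        conv = nn.Conv2d(crt_channels, out_channels,
                         kernel_size=5, padding=2, bias=False)
        nn.init.xavier_normal_(conv.weight)  # Keras 'glorot_normal'
        layers += [conv, nn.ReLU()]
        crt_channels = out_channels  # next layer's in_channels
        crt_res *= 2
        assert crt_res <= resolution, \
            "Error: final resolution [{}] must equal i*2^n. Initial resolution i is [{}].".format(
                resolution, init_resolution)
    final = nn.Conv2d(crt_channels, out_image_channels,
                      kernel_size=2, padding='same', bias=False)
    nn.init.xavier_normal_(final.weight)
    layers += [final, nn.Tanh()]
    return nn.Sequential(*layers)
```

Usage: `build_upsampler(8, 32, 64, 3)` maps a `(N, 64, 8, 8)` tensor to `(N, 3, 32, 32)`. Alternatively, recent PyTorch versions offer `nn.LazyConv2d`, which infers `in_channels` from the first forward pass, so you can skip the bookkeeping entirely at the cost of the module shapes only materializing after a dummy forward.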