Specifying input size for layers

I am trying to recreate, in PyTorch, a GAN paper that was implemented using Keras. The models in the paper are flexible in the number of convolutional layers they contain, depending on the resolution of the training images; the number of filters in each convolutional layer also depends on the resolution.
Now, because PyTorch requires the input channels to be specified when creating a layer, I've run into a bottleneck: the input channels will be different when upsampling images of different resolutions.

Is there any way around this without having to switch to Keras?

Here is the bottleneck step implemented in Keras:

    while crt_res != resolution:
        cnn.add(UpSampling2D(size=(2, 2)))
        if crt_res < resolution / 2:
            cnn.add(Conv2D(256, (5, 5), padding='same',
                           activation='relu', kernel_initializer='glorot_normal',
                           use_bias=False))
        else:
            cnn.add(Conv2D(128, (5, 5), padding='same',
                           activation='relu', kernel_initializer='glorot_normal',
                           use_bias=False))

        crt_res = crt_res * 2
        assert crt_res <= resolution, \
            "Error: final resolution [{}] must equal i*2^n. Initial resolution i is [{}]. n must be a natural number.".format(resolution, init_resolution)

    cnn.add(Conv2D(channels, (2, 2), padding='same',
                   activation='tanh', kernel_initializer='glorot_normal',
                   use_bias=False))

The number of input channels (and output channels) will not change with different resolutions.
What kind of “bottleneck” are you seeing? Do you get any errors regarding the channel size?

Let me try to explain with an example.

Suppose we set resolution=64 and init_resolution=8. Before entering the while loop, the last output has shape (128, 8, 8). These are the further steps inside the while loop:

  1. Upsampling to (128, 16, 16), crt_res=8
  2. Conv layer with input channels=128 and output channels=256
  3. Upsampling to (256, 32, 32), crt_res=16
  4. Conv layer with input channels=256 and output channels=256
  5. Upsampling to (256, 64, 64), crt_res=32
  6. Conv layer with input channels=256 and output channels=128

Now, the Conv layers in steps 2. and 4. are being added by the same line of code, but they need different input channels. How can this be done in PyTorch without defining the resolution beforehand?
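For what it's worth, the channel progression in the steps above never actually requires knowing the resolution of the feature map, only the previous layer's output channels. A minimal sketch of that bookkeeping (variable names are my own, not from the paper):

```python
# Reproduce the (in_channels, out_channels) pairs from the example above,
# assuming resolution=64 and init_resolution=8.
resolution, init_resolution = 64, 8
crt_res = init_resolution
in_ch = 128          # channels of the feature map entering the loop
channel_pairs = []   # one (in_channels, out_channels) pair per conv layer
while crt_res != resolution:
    out_ch = 256 if crt_res < resolution / 2 else 128
    channel_pairs.append((in_ch, out_ch))
    in_ch = out_ch   # the next conv consumes what this one produced
    crt_res *= 2

print(channel_pairs)  # [(128, 256), (256, 256), (256, 128)]
```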

I think I still misunderstand the problem, but the definition of a conv layer does not take the resolution as an input.
If you mean the actual if condition in your Keras code, you could apply the same condition in PyTorch and add the conv layers e.g. to an nn.ModuleList (or use this condition in the forward).
Could you point me to the line of code in your Keras snippet, which seems to create problems?
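Building on that suggestion, here is one possible sketch of the Keras loop translated to PyTorch, tracking in_channels by hand while the layers are built in __init__. The class name, argument names, and defaults are my assumptions, not the paper's exact architecture, and the final padding='same' requires PyTorch >= 1.9:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Hypothetical sketch: the Keras while loop moved into __init__,
    with in_channels tracked manually between iterations."""
    def __init__(self, resolution, init_resolution=8,
                 init_channels=128, out_channels=3):
        super().__init__()
        layers = []
        crt_res = init_resolution
        in_ch = init_channels
        while crt_res != resolution:
            layers.append(nn.Upsample(scale_factor=2))  # nearest, like UpSampling2D
            out_ch = 256 if crt_res < resolution / 2 else 128
            layers.append(nn.Conv2d(in_ch, out_ch, kernel_size=5,
                                    padding=2, bias=False))
            layers.append(nn.ReLU())
            in_ch = out_ch  # next conv's in_channels = this conv's out_channels
            crt_res *= 2
            assert crt_res <= resolution, \
                "resolution must be init_resolution * 2**n"
        # final (2, 2) conv, as in the Keras snippet
        layers.append(nn.Conv2d(in_ch, out_channels, kernel_size=2,
                                padding='same', bias=False))
        layers.append(nn.Tanh())
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)
```

With resolution=64, Generator(resolution=64) applied to a (1, 128, 8, 8) tensor yields a (1, 3, 64, 64) output. Since all layer sizes are decided in __init__, no condition is needed in forward; alternatively the layers could live in an nn.ModuleList and be applied with the same if inside forward.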