Suppose I want the output of a deconvolution layer to be exactly twice the size (i.e., the scale) of its input; the parameters stride, padding, etc. then depend on the input size.
So, is there a way to pass in expressions rather than constant values for the arguments?
Or is there any other way to make the output exactly twice the input? Any help would be much appreciated.
For nn.Conv2d, I think no. You could, however, compute the appropriate padding yourself from the input size and kernel_size.
For example, here is how it is done in the case of nn.ConvTranspose2d:
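(The code that originally followed this reply is not included here; as a sketch, the standard PyTorch output-size formula for `nn.ConvTranspose2d` with `dilation=1` is `out = (in - 1)*stride - 2*padding + kernel_size + output_padding`, so picking `stride=2, kernel_size=4, padding=1` — or an odd kernel plus `output_padding=1` — doubles any input size. The helper name below is illustrative, not from the thread.)

```python
# Output size of nn.ConvTranspose2d with dilation=1 (per the PyTorch docs):
#   out = (in - 1)*stride - 2*padding + kernel_size + output_padding
def deconv_out_size(in_size, kernel_size, stride=1, padding=0, output_padding=0):
    return (in_size - 1) * stride - 2 * padding + kernel_size + output_padding

# stride=2, kernel_size=4, padding=1 doubles any input size:
#   (n - 1)*2 - 2*1 + 4 = 2n
for n in (5, 7, 16, 33):
    assert deconv_out_size(n, kernel_size=4, stride=2, padding=1) == 2 * n

# With an odd kernel (e.g. 3), output_padding=1 recovers the exact 2x size:
#   (n - 1)*2 - 2*1 + 3 + 1 = 2n
for n in (5, 7, 16, 33):
    assert deconv_out_size(n, kernel_size=3, stride=2, padding=1, output_padding=1) == 2 * n

print("all sizes doubled")
```

The key point is that these parameter combinations double the output independent of the input size, so no expressions need to be passed as arguments — a fixed `nn.ConvTranspose2d(c_in, c_out, kernel_size=4, stride=2, padding=1)` upsamples by exactly 2x for any spatial size.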