Autoencoder for different-sized inputs

Hello,

I am working on a project where I want a single network to take a 4x4, 7x7, or 14x14 image and reconstruct it as a 28x28 image (working with MNIST). My current architecture would be as follows:

Blow the smaller image up to the target dimensions, encode it back down to the original input dimensions (so the network can learn the mapping), and then decode it from there to the desired 28x28 size. But as far as I know, I would have to build a separate network for each of these cases.
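
For a single input size, a rough sketch of what I mean (the layer sizes, the class name, and the bilinear interpolation are just placeholders, not my actual code):

```python
import torch.nn as nn
import torch.nn.functional as F

class UpscaleAutoencoder(nn.Module):
    """Upsample the small input to 28x28, encode it back down, then decode to 28x28."""
    def __init__(self, low_res=7):
        super().__init__()
        # the encoder squeezes the upsampled image back down to the original
        # low-res size, which is why each input size seems to need its own network
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, low_res * low_res),
            nn.ReLU(),
        )
        # the decoder reconstructs the full-resolution 28x28 image
        self.decoder = nn.Sequential(
            nn.Linear(low_res * low_res, 28 * 28),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (N, 1, low_res, low_res) -> blow it up to the target size first
        x = F.interpolate(x, size=(28, 28), mode="bilinear", align_corners=False)
        z = self.encoder(x)
        out = self.decoder(z)
        return out.view(-1, 1, 28, 28)
```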

Is there any way to do this dynamically in PyTorch?

If you are thinking about how to resize the images, you can just do that in your dataset class. IIUC, the resizing doesn’t need to be backprop-able.
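
For example, something along these lines (untested sketch; the LowResMNIST wrapper and the bilinear interpolation are just illustrative choices):

```python
import torch.nn.functional as F
from torch.utils.data import Dataset
from torchvision import datasets, transforms

class LowResMNIST(Dataset):
    """Input: image downsampled to low_res and blown back up to 28x28. Target: original image."""
    def __init__(self, root, low_res=7, train=True):
        self.mnist = datasets.MNIST(root, train=train, download=True,
                                    transform=transforms.ToTensor())
        self.low_res = low_res

    def __len__(self):
        return len(self.mnist)

    def __getitem__(self, idx):
        img, _ = self.mnist[idx]  # img: (1, 28, 28)
        # downsample to the low-res size, then upsample back to 28x28 here,
        # outside the model, since no gradients need to flow through the resize
        small = F.interpolate(img.unsqueeze(0), size=(self.low_res, self.low_res),
                              mode="bilinear", align_corners=False)
        blurry = F.interpolate(small, size=(28, 28),
                               mode="bilinear", align_corners=False).squeeze(0)
        return blurry, img
```

With something like that, the same fixed 28x28-in / 28x28-out autoencoder can be trained for 4x4, 7x7, or 14x14 inputs just by changing the low_res argument of the dataset.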