How to generate a result of the desired size?

I have a model which was trained on images of size (512, 512).
Now I have an image of dimension (x, y), where x, y are not equal to 512, 512, which means I’ll either need to pad it or crop it to make it (512, 512).
What I want is to generate an image of size (x, y) at the end.
Any ideas on how to do that?

PS - I’m talking about taking crops from the main image, passing them through the model, and finally stitching them back together.

Thank you in advance!

I guess what you’re asking for is generative adversarial networks (GANs) for image reconstruction. You can find a tutorial on face generation here:
https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html
As for crops, you can use torchvision.transforms to do that.
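
For example (a quick sketch; CenterCrop is a standard torchvision transform, and in recent torchvision versions it zero-pads images that are smaller than the target size, so it covers both the pad and the crop case):

import torch
from torchvision import transforms

to_512 = transforms.CenterCrop(512)   # pads with zeros if the image is smaller than 512 in any dimension

img = torch.randn(3, 600, 700)        # dummy CHW image of arbitrary size
print(to_512(img).shape)              # torch.Size([3, 512, 512])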

No no! I’m not looking for GANs!
Suppose I have two images of size (40, 40) and I want to stitch them together.
That’s what I’m looking for!

Oh I see, I guess this will help you:

import torch
import matplotlib.pyplot as plt

image1 = torch.randn(size=(40, 40))
image2 = torch.randn(size=(40, 40))

# the stitched image is as tall as one input and as wide as both together
height = image1.size(0)
width = image1.size(1) + image2.size(1)

image3 = torch.zeros(size=(height, width))

# place image1 in the left half and image2 in the right half
image3[:, :image1.size(1)] = image1
image3[:, image1.size(1):] = image2

plt.imshow(image3)
plt.show()
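
By the way, torch.cat does the same stitching in one call (reusing image1 and image2 from above; dim=1 concatenates along the width, dim=0 along the height):

image3 = torch.cat([image1, image2], dim=1)   # shape (40, 80), side by side
# torch.cat([image1, image2], dim=0) would give (80, 40), stacked vertically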

Or you can check the make_grid function in torchvision.utils.
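
And since your original question was about the full workflow (crop the big image into 512x512 tiles, run each tile through the model, stitch the outputs back to size (x, y)), here is a rough sketch of that. It assumes the model maps a (1, C, 512, 512) input to an output with the same spatial size; model, the tile size and the dummy shapes below are just placeholders:

import torch
import torch.nn.functional as F

def tiled_inference(model, image, tile=512):
    # image: (C, H, W) tensor with arbitrary H, W
    C, H, W = image.shape
    # pad on the bottom/right so H and W become multiples of the tile size
    pad_h = (tile - H % tile) % tile
    pad_w = (tile - W % tile) % tile
    padded = F.pad(image, (0, pad_w, 0, pad_h))   # pads last dim (W) first, then H
    out = torch.zeros_like(padded)
    with torch.no_grad():
        for top in range(0, padded.shape[1], tile):
            for left in range(0, padded.shape[2], tile):
                patch = padded[:, top:top + tile, left:left + tile]
                pred = model(patch.unsqueeze(0)).squeeze(0)   # (C, tile, tile)
                out[:, top:top + tile, left:left + tile] = pred
    # crop the padding away so the result is (C, H, W) again
    return out[:, :H, :W]

# dummy "model" and image just to check that the shapes work out
model = torch.nn.Identity()
image = torch.randn(3, 700, 900)
result = tiled_inference(model, image)
print(result.shape)   # torch.Size([3, 700, 900])

Note that hard tile boundaries can leave visible seams; if that happens, a common trick is to use overlapping tiles and blend (e.g. average) the predictions in the overlap regions.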