Resize image data as a part of preprocessing

I have images of size (320, 576, 3) (3 indicating RGB) and their respective masks of size (640, 1176) (grayscale). I need to bring them to a common size for further processing, but I am unable to figure out how to achieve it.
This also has to be performed on the complete dataset of 417 images. Can someone suggest a way?

This might do the trick

import torchvision.transforms as transforms
compose_img = transforms.Compose([transforms.Resize((x, y))])  # (x, y) is the common (height, width) of your choice
img = compose_img(img)

You can use cv2 to resize the images.
im = cv2.resize(image, (width, height))  # note: cv2.resize takes (width, height), not (height, width)