In-place resizing with interpolation options

Hi, I am working on a deployment server where I want to resize a batch of images to a fixed size. The problem is that I don’t want to create a new tensor when doing the interpolation/resizing, since that requires a memory allocation and the result then has to be copied back into the preallocated ‘images’ tensor.

My current code:

images = torch.empty((len(original_images), 3, input_size[0], input_size[1]), device=device, dtype=dtype)

for i, image in enumerate(original_images):
    images[i] = F.interpolate(image, size=input_size, mode='bilinear', align_corners=False)
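
For context, here is a self-contained sketch of that loop; the shapes, device, dtype, and the unsqueeze/squeeze calls are assumptions for illustration (bilinear interpolation expects a 4D input):

import torch
import torch.nn.functional as F

device, dtype = torch.device('cpu'), torch.float32
input_size = (224, 224)
# assumed: a list of differently sized (C, H, W) images
original_images = [torch.randn(3, 480, 640), torch.randn(3, 300, 400)]

# preallocate the output batch once...
images = torch.empty((len(original_images), 3, input_size[0], input_size[1]), device=device, dtype=dtype)

for i, image in enumerate(original_images):
    # ...but F.interpolate still allocates a temporary tensor, which is then
    # copied into the preallocated slice images[i]
    images[i] = F.interpolate(image.unsqueeze(0), size=input_size,
                              mode='bilinear', align_corners=False).squeeze(0)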

Is there something like:

F.interpolate(input_tensor=image, output_tensor=images[i], size=input_size, mode='bilinear', align_corners=False)

I found a few functions which don’t support any interpolation mode:

torch._copy_from_and_resize
torch.resize_as_

I think writing optimal, allocation-conscious code is important for server performance, but the options here seem limited.

Do you have a possible solution or workaround?

Thank you!

Why don’t you pass all images to F.interpolate at once, as it also accepts batched inputs:

import torch
import torch.nn.functional as F

# single image with a batch dimension
image = torch.randn(1, 3, 500, 500)
input_size = (224, 224)
out = F.interpolate(image, size=input_size, mode='bilinear', align_corners=False)
print(out.shape)
# torch.Size([1, 3, 224, 224])

# batch of 16 images
image = torch.randn(16, 3, 500, 500)
input_size = (224, 224)
out = F.interpolate(image, size=input_size, mode='bilinear', align_corners=False)
print(out.shape)
# torch.Size([16, 3, 224, 224])

@ptrblck Sorry for the lack of explanation in the question. original_images is a list of images of different sizes.

Idea for a “workaround”: if your images aren’t of completely different sizes, you can pad them all to the size of the biggest image and then interpolate with bilinear mode. With bilinear interpolation not too many artifacts should remain (maybe slightly blurry edges); the rest of the code would then be as @ptrblck suggested.
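
A minimal sketch of that padding idea, assuming original_images is a list of 3D (C, H, W) tensors of different sizes (the example shapes and the bottom/right padding are my assumptions):

import torch
import torch.nn.functional as F

# assumed: differently sized (C, H, W) images
original_images = [torch.randn(3, 480, 640), torch.randn(3, 500, 500), torch.randn(3, 300, 420)]
input_size = (224, 224)

# pad every image (bottom/right) to the size of the largest one so they can be stacked
max_h = max(img.shape[1] for img in original_images)
max_w = max(img.shape[2] for img in original_images)
padded = torch.stack([
    F.pad(img, (0, max_w - img.shape[2], 0, max_h - img.shape[1]))
    for img in original_images
])

# resize the whole batch at once, as suggested above
images = F.interpolate(padded, size=input_size, mode='bilinear', align_corners=False)
print(images.shape)
# torch.Size([3, 3, 224, 224])

Note that this still allocates the padded batch as a new tensor.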

@SchulzKilian Hi, thank you for your message. However, I should point out that my question is not about how to resize images with batching. I’m looking for a solution that doesn’t allocate any new memory/tensors. The solution you suggest allocates new tensors (the paddings) or expands the existing tensors, and then resizes with batching.