How do I resize ImageNet image to 224 x 224?

Hi, I’m going to train VGG16 on ImageNet data.
The original VGG network takes a 224 x 224 input, but the original ImageNet images come in varying sizes.
So, what is the standard way to resize ImageNet images to 224 x 224?

This might help ->
https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html#load-data

from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),        # scale the shorter side to 256 px, keeping aspect ratio
    transforms.CenterCrop(224),    # crop the central 224 x 224 patch
    transforms.ToTensor(),         # PIL image -> float tensor in [0, 1], shape [C, H, W]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet per-channel means
                         std=[0.229, 0.224, 0.225]),  # ImageNet per-channel stds
])

Thank you for your reply!