Get a dataloader of resized images from a single dataset

Hi, I would like to train a net with images of different resolutions (from CIFAR10). My strategy was the following:

  • download the CIFAR10 dataset with resolution = 224, with

import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose([transforms.Resize(size=(224, 224)), transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))])

trainset = torchvision.datasets.CIFAR10(root=path, train=True, transform=transform, download=True)
  • then, from this trainset, get a dataloader that yields images at the desired resolution, by downsampling.

I don’t know how to do this… As I’m going to work with different resolutions, I do not want to download the CIFAR10 dataset every time.

If you are using torchvision.datasets.CIFAR10, the dataset will be downloaded once and stored in the location passed via root.
Note that you are not downloading the CIFAR10 dataset at a resolution of 224x224; you are resizing each image to this resolution.
CIFAR10 contains RGB images at a resolution of 32x32.
If you want to apply different resolutions, I would recommend creating different datasets, passing the transformation with the desired shape (a rough sketch follows below).
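As a minimal illustration of that suggestion (the make_loader helper, the "./data" root path, and the resolution list are placeholders, not from the original posts), each dataset only differs in its Resize target, while the files on disk are downloaded once and reused:

import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

def make_loader(root, size, batch_size=64):
    # Only the Resize target changes between datasets; with download=True the
    # data is fetched once and reused from root on subsequent calls.
    transform = transforms.Compose([
        transforms.Resize(size=(size, size)),
        transforms.ToTensor(),
        transforms.Normalize((0.5,), (0.5,)),
    ])
    dataset = torchvision.datasets.CIFAR10(root=root, train=True,
                                           transform=transform, download=True)
    return DataLoader(dataset, batch_size=batch_size, shuffle=True)

# One loader per resolution, all backed by the same files on disk.
loaders = {size: make_loader("./data", size) for size in (32, 64, 224)}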


Finally I changed my method: every time I call net(image), I make sure to interpolate the image to the desired size with torch.nn.functional.interpolate. This way I only have one dataset (rough sketch below).
Thank you!
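For reference, a minimal sketch of this on-the-fly resizing (the batch shape, the target_size value, and the final net(...) call are placeholders, not from the original posts):

import torch
import torch.nn.functional as F

# Placeholder batch standing in for images from the single CIFAR10 loader:
# (N, C, H, W) = (8, 3, 224, 224).
images = torch.randn(8, 3, 224, 224)

target_size = 64  # the resolution to train at in this run
resized = F.interpolate(images, size=(target_size, target_size),
                        mode='bilinear', align_corners=False)
print(resized.shape)  # torch.Size([8, 3, 64, 64])
# resized would then be fed to the network, e.g. out = net(resized)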