Resize 96*96*3 images in the STL10 dataset to 48*48*3

I used the following code, but it doesn't work.

import torchvision

dataset0 = torchvision.datasets.STL10('./Drive/training/stl10', split='train+unlabeled', download=True,
                                      transform=torchvision.transforms.Compose([
                                          torchvision.transforms.ToTensor(),
                                          torchvision.transforms.Resize(64),
                                      ]))

I also tried:

torchvision.transforms.Resize(int(64))

I see that Resize also has an interpolation parameter:

torchvision.transforms.Resize(size, interpolation=2)

interpolation (int, optional) – Desired interpolation enum defined by filters. Default is PIL.Image.BILINEAR. If input is Tensor, only PIL.Image.NEAREST, PIL.Image.BILINEAR and PIL.Image.BICUBIC are supported.

Can anyone give me some suggestions?
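Based on that documentation, the parameter can presumably be passed as a PIL resampling constant; an untested sketch (newer torchvision versions use torchvision.transforms.InterpolationMode instead):

from PIL import Image
import torchvision

# Sketch: request bicubic resampling instead of the default bilinear.
resize = torchvision.transforms.Resize(64, interpolation=Image.BICUBIC)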

You need to put the resize before the ToTensor() transform.
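For example, a minimal sketch of that ordering, reusing the path and arguments from the question (size 48 to match the 48*48*3 target in the title; use 64 if that was what you intended):

import torchvision
import torchvision.transforms as transforms

dataset0 = torchvision.datasets.STL10(
    './Drive/training/stl10', split='train+unlabeled', download=True,
    transform=transforms.Compose([
        transforms.Resize(48),   # resize the 96x96 PIL image first
        transforms.ToTensor(),   # then convert it to a 3x48x48 tensor
    ]))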

Thank you for your explanation.