Unexpectedly getting 32 channels when resizing a 3D tensor?

I’m trying to resize a tensor with the Resize transform from the torchvision.transforms module, but, as the title suggests, I get an output with 32 channels when I apply Resize to a 3D tensor. I’ve added the code snippet below. The input and output shapes are (32, 32, 3) and (32, 75, 75), respectively. Could you point out what I’m missing?

import torch
from torchvision import datasets, models, transforms as T

tensor_x = torch.Tensor(X)  # converting the NumPy array X of shape (32, 32, 3) to a tensor
resize_fn = T.Resize((75, 75))
out = resize_fn(tensor_x)

Also, I’d like to ask why the transforms I’ve provided are not applied to the dataset below. My aim is to resize the CIFAR-10 samples from (32, 32, 3) to (75, 75, 3), but I still get the original shape, (32, 32, 3), for the samples.

from torchvision import datasets, models, transforms as T

m_transform = T.Compose([
    T.Resize((75, 75)),
    T.ToTensor()
])

train_set = datasets.cifar.CIFAR10('/PyTorch-Datasets', train=True, download=False, transform=m_transform)
test_set = datasets.cifar.CIFAR10('/PyTorch-Datasets', train=False, download=False, transform=m_transform)

Resize expects a channels-first tensor, so it should have the shape [3, 32, 32] in your case. Resize operates on the last two dimensions, so with a (32, 32, 3) input it treats (32, 3) as height and width and the leading 32 as channels, which is exactly why you end up with (32, 75, 75):

tensor_x = torch.randn(3, 32, 32)
resize_fn = T.Resize((75, 75))
out = resize_fn(tensor_x)