Could someone tell me what’s wrong here? I suspect the Resize transform isn’t upscaling the images properly, but I’m not 100% sure.
import torch
import torchvision
dataset = 'FashionMNIST'
datapath = './data'
ds = getattr(torchvision.datasets, dataset)
transforms = torchvision.transforms.Compose([
torchvision.transforms.Resize((32, 32)),
torchvision.transforms.ToTensor(),
])
train_set = ds(root=datapath, train=True, download=True, transform=transforms)
train_set.data.unsqueeze_(1)
train_set.data = train_set.data.repeat(1, 3, 1, 1)
dummy_data = torch.utils.data.DataLoader(train_set, batch_size=24, shuffle=True, num_workers=4, pin_memory=True)
x, y = next(iter(dummy_data))
I keep getting the following error:
Original Traceback (most recent call last):
File "/home/kirk/miniconda3/envs/torch/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/home/kirk/miniconda3/envs/torch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/kirk/miniconda3/envs/torch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/kirk/miniconda3/envs/torch/lib/python3.6/site-packages/torchvision/datasets/mnist.py", line 92, in __getitem__
img = Image.fromarray(img.numpy(), mode='L')
File "/home/kirk/miniconda3/envs/torch/lib/python3.6/site-packages/PIL/Image.py", line 2661, in fromarray
raise ValueError("Too many dimensions: %d > %d." % (ndim, ndmax))
ValueError: Too many dimensions: 3 > 2.