transform_list = [transforms.ToTensor(),
transforms.Normalize(0.5, 0.5)]
I am getting an error:
File "train_2.py", line 171, in <module>
train(epoch)
File "train_2.py", line 90, in train
for iteration, batch in enumerate(training_data_loader, 1):
File "/home/iiitd/.local/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 201, in __next__
return self._process_next_batch(batch)
File "/home/iiitd/.local/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 221, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
TypeError: Traceback (most recent call last):
File "/home/iiitd/.local/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 40, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/iiitd/soumyadeep/pix2pix-pytorch_2/dataset.py", line 29, in __getitem__
input = self.transform(input)
File "/usr/local/lib/python2.7/dist-packages/torchvision/transforms.py", line 34, in __call__
img = t(img)
File "/usr/local/lib/python2.7/dist-packages/torchvision/transforms.py", line 155, in __call__
for t, m, s in zip(tensor, self.mean, self.std):
TypeError: zip argument #2 must support iteration
How should I write the transform for grayscale images?
I am loading grayscale images using ImageFolder with transforms.Normalize((0.5,), (0.5,)), but I am still receiving the images as 3-channel images.