Error using transforms

Hi, while using transforms for an image dataset like this:

train_transform = transforms.Compose([
    transforms.Resize((205, 205)),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomCrop((200, 200)),
    transforms.ToTensor()])

I got this error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-12-7a9e35561311> in <module>
     47    # print(PCP(sample1.unsqueeze(0), sample2.unsqueeze(0)))
     48 
---> 49     plotSamples(LSP_dataset)
     50 
     51     plt.tight_layout()

<ipython-input-3-2b3f96239780> in plotSamples(dataset)
      3     for i in range(25):
      4         ax = axes[i // 5, i % 5]
----> 5         sample = dataset[i]
      6         image = sample['image']
      7         joints = sample['joints']

<ipython-input-1-ae4f8f0687a1> in __getitem__(self, idx)
     68         sample = {'image': image, 'joints': joints}
     69         if self.transform:
---> 70             sample = self.transform(sample)
     71 
     72         return sample

~\anaconda3\lib\site-packages\torchvision\transforms\transforms.py in __call__(self, img)
     58     def __call__(self, img):
     59         for t in self.transforms:
---> 60             img = t(img)
     61         return img
     62 

~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

~\anaconda3\lib\site-packages\torchvision\transforms\transforms.py in forward(self, img)
    271             PIL Image or Tensor: Rescaled image.
    272         """
--> 273         return F.resize(img, self.size, self.interpolation)
    274 
    275     def __repr__(self):

~\anaconda3\lib\site-packages\torchvision\transforms\functional.py in resize(img, size, interpolation)
    373     if not isinstance(img, torch.Tensor):
    374         pil_interpolation = pil_modes_mapping[interpolation]
--> 375         return F_pil.resize(img, size=size, interpolation=pil_interpolation)
    376 
    377     return F_t.resize(img, size=size, interpolation=interpolation.value)

~\anaconda3\lib\site-packages\torchvision\transforms\functional_pil.py in resize(img, size, interpolation)
    207 def resize(img, size, interpolation=Image.BILINEAR):
    208     if not _is_pil_image(img):
--> 209         raise TypeError('img should be PIL Image. Got {}'.format(type(img)))
    210     if not (isinstance(size, int) or (isinstance(size, Sequence) and len(size) in (1, 2))):
    211         raise TypeError('Got inappropriate size arg: {}'.format(size))
TypeError: img should be PIL Image. Got <class 'dict'>

Not sure how to fix it. Any suggestions welcome.

This error message is thrown from transforms, and it is fairly clear, no? You are passing a Python dict to the transform instead of the PIL image it expects. Try something like:

train_transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((205, 205)),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomCrop((200, 200)),
    transforms.ToTensor()])
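For example, the pipeline above runs fine when it is given the image itself (a quick sketch; the dummy array below stands in for whatever your loader reads):

import numpy as np
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((205, 205)),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomCrop((200, 200)),
    transforms.ToTensor()])

image = np.zeros((300, 300, 3), dtype=np.uint8)  # dummy H x W x C uint8 image

out = train_transform(image)  # ToPILImage accepts an ndarray or tensor
print(out.shape)              # torch.Size([3, 200, 200])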
I tried to make it a PIL image using transforms, but it still doesn't work. I posted the log:


---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-...> in <module>
      6 ])
      7 LSP_dataset = LSPLoader(train_transform)
----> 8 plotSamples(LSP_dataset)
      9 plt.show()

<ipython-input-...> in plotSamples(dataset)
      3     for i in range(25):
      4         ax = axes[i // 5, i % 5]
----> 5         sample = dataset[i]
      6         image = sample['image']
      7         joints = sample['joints']

<ipython-input-...> in __getitem__(self, idx)
     68         sample = {'image': image, 'joints': joints}
     69         if self.transform:
---> 70             sample = self.transform(sample)
     71             img = sample['image']
     72             label = sample['landmarks']

~\anaconda3\lib\site-packages\torchvision\transforms\transforms.py in __call__(self, img)
     58     def __call__(self, img):
     59         for t in self.transforms:
---> 60             img = t(img)
     61         return img
     62

~\anaconda3\lib\site-packages\torchvision\transforms\transforms.py in __call__(self, pic)
    177
    178         """
--> 179         return F.to_pil_image(pic, self.mode)
    180
    181     def __repr__(self):

~\anaconda3\lib\site-packages\torchvision\transforms\functional.py in to_pil_image(pic, mode)
    217     """
    218     if not(isinstance(pic, torch.Tensor) or isinstance(pic, np.ndarray)):
--> 219         raise TypeError('pic should be Tensor or ndarray. Got {}.'.format(type(pic)))
    220
    221     elif isinstance(pic, torch.Tensor):

TypeError: pic should be Tensor or ndarray. Got <class 'dict'>.

You are passing a dictionary to self.transform, which is what the error points out.
Try self.transform(image) instead.
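Concretely, a minimal sketch of the difference (the dict mirrors the one built in your __getitem__):

import numpy as np
from torchvision import transforms

transform = transforms.Compose([transforms.ToPILImage(), transforms.ToTensor()])

sample = {'image': np.zeros((64, 64, 3), dtype=np.uint8), 'joints': np.zeros((2, 14))}

# transform(sample)               # TypeError: pic should be Tensor or ndarray. Got <class 'dict'>.
out = transform(sample['image'])  # works: only the image goes through the transform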

OK, the error now changed to:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-...> in <module>
      6 ])
      7 LSP_dataset = LSPLoader(train_transform)
----> 8 plotSamples(LSP_dataset)
      9 plt.show()

<ipython-input-...> in plotSamples(dataset)
      3     for i in range(25):
      4         ax = axes[i // 5, i % 5]
----> 5         sample = dataset[i]
      6         image = sample['image']
      7         joints = sample['joints']

<ipython-input-...> in __getitem__(self, idx)
     69         if self.transform:
     70             sample = self.transform(image)
---> 71             img = sample['image']
     72             label = sample['landmarks']
     73

IndexError: too many indices for tensor of dimension 3

You can print the type() of the variables you have to see which one holds the input PIL image, and then pass that to the transform. Also, reading up a bit more on Python data types might help here.
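For example (a sketch mirroring the snippet in the traceback): after ToTensor() the composed transform returns a plain tensor, so indexing it with a string key fails:

import numpy as np
from torchvision import transforms

transform = transforms.Compose([transforms.ToPILImage(), transforms.ToTensor()])
image = np.zeros((200, 200, 3), dtype=np.uint8)  # stand-in for the loaded image

sample = transform(image)
print(type(sample))  # <class 'torch.Tensor'>
print(sample.shape)  # torch.Size([3, 200, 200])

# sample['image'] now raises the IndexError above: the result is a
# 3-D tensor, not a dict, so there is no 'image' key to look up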

I understand what you are saying about self.transform(image). I have a dict defined in my dataset here:

def __getitem__(self, idx):
    image_name = os.path.join(self.__image_path, self.__image_names[idx])
    image = io.imread(image_name)
    joints = self.__mat_data[:-1, :, idx]
    sample = {'image': image, 'joints': joints}
    if self.transform:
        sample = self.transform(sample)
        img = sample['image']
        label = sample['landmarks']

    return sample

So, when applying:

train_transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((205, 205)),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomCrop((200, 200)),
    transforms.ToTensor()])

is there an option to apply the transform to the image inside this pipeline itself?

You can apply self.transform(img), given that img contains the image.
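For your dataset, the change would look roughly like this (a sketch: the constructor arguments and attribute names are assumptions based on your snippets, and note the dict has no 'landmarks' key, only 'image' and 'joints'):

import os
from skimage import io
from torch.utils.data import Dataset

class LSPLoader(Dataset):
    # the constructor is a guess at how the real class stores its state
    def __init__(self, image_path, image_names, mat_data, transform=None):
        self.__image_path = image_path
        self.__image_names = image_names
        self.__mat_data = mat_data
        self.transform = transform

    def __len__(self):
        return len(self.__image_names)

    def __getitem__(self, idx):
        image_name = os.path.join(self.__image_path, self.__image_names[idx])
        image = io.imread(image_name)
        joints = self.__mat_data[:-1, :, idx]
        if self.transform:
            # transform only the image; the joints stay as they are
            image = self.transform(image)
        # rebuild the sample dict after the transform
        return {'image': image, 'joints': joints}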

So for a random vertical flip transform on img, do we write something like self.transform(img, randomverticalflip)? Or should we make a function for this?

You can refer to the torchvision.transforms — Torchvision 0.11.0 documentation page for the available transforms.
Vertical flip: torchvision.transforms.RandomVerticalFlip
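For example, the flip is configured once when the transform is built and then applied by calling the transform with the image alone; there is no second argument (a minimal sketch):

import numpy as np
from PIL import Image
from torchvision import transforms

img = Image.fromarray(np.zeros((100, 100, 3), dtype=np.uint8))

flip = transforms.RandomVerticalFlip(p=0.5)  # probability is set up front

out = flip(img)  # applied with a single argument; flips with probability p
# inside a Compose it works the same way: self.transform(img) runs every
# configured step, including the flip, in order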