Transforms Error: TypeError: pic should be Tensor or ndarray. Got <class 'torch.Tensor'>

I’m trying to pre-process an array of images, frames_tensor, of shape (8393, 3, 224, 224) and type torch.Tensor using this transformation sequence:

class Frames_Dataset(Dataset):

    def __init__(self, frames):
        self.preprocess = transforms.Compose([
            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ])
        self.frames = self.preprocess(frames)

    def __getitem__(self, index):
        return self.frames[index]

    def __len__(self):
        return len(self.frames)

frames_dataset = Frames_Dataset(frames_tensor)

I’m getting this error:

Traceback (most recent call last):
  File "Features/", line 33, in <module>
    extract_features(file, subdir, features_name, fps, args.overwrite_features)
  File "/media/wilsonchan/Giratina/Projects/Badi Highlights/v2 AceOne/SoccerNetv2-DevKit/Features/", line 153, in extract_features
    frames_dataset = Frames_Dataset(frames_tensor)
  File "/media/wilsonchan/Giratina/Projects/Badi Highlights/v2 AceOne/SoccerNetv2-DevKit/Features/", line 143, in __init__
    self.frames = self.preprocess(frames)
  File "/home/wilsonchan/anaconda3/envs/AceOne-Features/lib/python3.7/site-packages/torchvision/transforms/", line 49, in __call__
    img = t(img)
  File "/home/wilsonchan/anaconda3/envs/AceOne-Features/lib/python3.7/site-packages/torchvision/transforms/", line 110, in __call__
    return F.to_pil_image(pic, self.mode)
  File "/home/wilsonchan/anaconda3/envs/AceOne-Features/lib/python3.7/site-packages/torchvision/transforms/", line 103, in to_pil_image
    raise TypeError('pic should be Tensor or ndarray. Got {}.'.format(type(pic)))
TypeError: pic should be Tensor or ndarray. Got <class 'torch.Tensor'>.

My input is a torch.Tensor, so why is F.to_pil_image telling me it doesn’t recognize the type?

Note: I’ve seen this post, but I’d like to keep my images as 3-channel instead of 1-channel.

If anyone has any idea what the issue is, I’d greatly appreciate the help.

You can keep your 3 channels; the problem is dimensionality, not channel count. The transforms in your Compose operate on a single image, so each input must be 3-dimensional, [channels, H, W]. Your frames_tensor is a 4-D batch of shape (8393, 3, 224, 224), which fails torchvision's internal "is this a 3-D image tensor" check and produces the misleading "Got <class 'torch.Tensor'>" message. Apply the transform one frame at a time (e.g. inside __getitem__) instead of to the whole batch. If you are using grayscale images, reshape each frame to [1, H, W].