Problem with PyTorch Transforms

I have defined the following transform.

transform_list_classifier = [
    transforms.ToPILImage(),
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]

transform_classifier = transforms.Compose(transform_list_classifier)

I have a variable called 'fake_img', a CUDA tensor, which I have converted into a CPU tensor as follows:

fake_img = fake_b.data.cpu()

After the conversion, the type of the tensor is as follows:
[torch.FloatTensor of size 5x3x256x256]

When I apply the transform with fake_img = transform_classifier(fake_img), I get the following error:

File "build/bdist.linux-x86_64/egg/torchvision/transforms.py", line 560, in __call__
File "build/bdist.linux-x86_64/egg/torchvision/transforms.py", line 610, in __call__
File "build/bdist.linux-x86_64/egg/torchvision/transforms.py", line 96, in to_pil_image
TypeError: pic should be Tensor or ndarray. Got <class 'torch.FloatTensor'>.

Any help would be appreciated.

You could try transforming just one image from your fake_img. The image needs to be 3-D, C x H x W; yours still has the batch dimension, which is probably what is causing the error. Try a single image from your 4-D tensor and see if that works.
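For example, a minimal sketch (assuming fake_img is the 5x3x256x256 CPU tensor from above):

single_img = fake_img[0]                       # one 3 x 256 x 256 image sliced from the batch
single_img = transform_classifier(single_img)  # ToPILImage now receives a 3-D tensor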

Yes, I had done that. But why is it this way? Most programs work with a batch, and if you have to apply the transform to each image separately it takes a lot of time, because you have to store the intermediate results in another tensor.
This is a very inefficient way of performing a transform. Is there a way of doing this differently?
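For reference, the per-image workaround I mean looks roughly like this (just a sketch):

import torch
# Transform each image in the batch individually, then stack the results back into a 4-D tensor.
transformed = torch.stack([transform_classifier(fake_img[i]) for i in range(fake_img.size(0))])  # 5 x 3 x 64 x 64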
@richard Do you have any suggestions?

I believe the most common approach is to define a Dataset and pass it the transforms you want to perform. That way, every sample you take from the dataset is transformed, and you can then draw batches of arbitrary size without having to store the transformed items.
Have you tried that?
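Roughly like this, a minimal sketch (the class name and the idea of wrapping your generated images in a Dataset are just illustrative):

from torch.utils.data import Dataset, DataLoader

class FakeImageDataset(Dataset):
    def __init__(self, images, transform=None):
        self.images = images              # e.g. an N x 3 x 256 x 256 CPU tensor
        self.transform = transform

    def __len__(self):
        return self.images.size(0)

    def __getitem__(self, idx):
        img = self.images[idx]            # single 3-D image, C x H x W
        if self.transform is not None:
            img = self.transform(img)
        return img

dataset = FakeImageDataset(fake_img, transform=transform_classifier)
loader = DataLoader(dataset, batch_size=5)  # batches come out transformed, no intermediate storage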

I think you need to wrap this with transforms.Compose(<list of transforms>)

Oh yes. It might work this way.