Possible bug in Transforms

I have a network ‘net’ which is producing a cuda tensor.

b = net(a)

Now this b is an image which I need to resize to 224×224. To do this I am converting the cuda tensor into a numpy array as follows:

img=((b).data).cpu().numpy() # this runs fine.

Now I am trying to use the transform to resize and normalize the image ‘b’ as follows:

transform_list_classifier = [transforms.ToTensor(),
                             transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                             transforms.Resize((224, 224))]

transform_classifier = transforms.Compose(transform_list_classifier)

However, applying it is giving me an error:

input_classifier = transform_classifier(b)

The error is:

TypeError: pic should be PIL Image or ndarray. Got <type 'numpy.ndarray'>

I have tried to convert ‘b’ into a PIL image, but that leads to other errors.
Can’t we use a PyTorch transform on a <type 'numpy.ndarray'>?

Any help would be appreciated.

It helps to know which input types each transforms.xx accepts.

for example:

  1. ToTensor: expects H x W x C input (a 224x224x1 array works; a bare 224x224 array is not allowed)
    Converts a PIL Image or numpy.ndarray (H x W x C) in the range
    [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0]
  2. Resize: resizes the input PIL Image to the given size (you cannot Resize a tensor object, which is why it must come before ToTensor in the Compose list)