PyTorch PIL to Tensor and vice versa

Hi,
I am reading an image using PIL, transforming it into a tensor, and then converting it back into a PIL image. The result differs from the original image.

import glob
from PIL import Image
import matplotlib.pyplot as plt
from torchvision import transforms

images = glob.glob("/root/data/amz//train_small/*jpg")
for image in images:
    img = Image.open(image)
    trans = transforms.ToPILImage()
    trans1 = transforms.ToTensor()
    plt.imshow(trans(trans1(img)))



Can anyone answer this…?

http://pytorch.org/docs/master/torchvision/transforms.html#conversion-transforms

implies you’ll go from [0,255] to [0,1]…


I think it’s because your input is a PNG (4-channel RGBA) image. Converting it to RGB first:

plt.imshow(trans(trans1(img.convert("RGB"))))

will work. Note that `.convert("RGB")` has to be called on the PIL image, not on the tensor returned by `ToTensor`.

This should be fixed by https://github.com/pytorch/vision/pull/189

Thank you all,
I have a working POC here:
https://github.com/QuantScientist/Data-Science-PyCUDA-GPU/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/09%2BPyTorch%2BKaggle%2BImage%2BData-set%2Bloading%2Bwith%2BCNN.ipynb

I have the same question.

The link is dead.
I’d appreciate it if you could post your method.


Here is the link to the notebook above: https://bit.ly/2FdGYm0

This link is not working.

This example worked for me:

from torchvision import transforms

print("t is: ", t.size())  # t is assumed to be a (C, H, W) float tensor
im = transforms.ToPILImage()(t).convert("RGB")
display(im)  # IPython display, in a notebook
print(im)
print(im.size)



I converted it vice versa as follows:

pil_img = Image.open(img)
print(pil_img.size)

# pass the PIL image (not the file path) to ToTensor
pil_to_tensor = transforms.ToTensor()(pil_img).unsqueeze_(0)  # add batch dim
print(pil_to_tensor.shape)

tensor_to_pil = transforms.ToPILImage()(pil_to_tensor.squeeze_(0))  # drop batch dim
print(tensor_to_pil.size)


(1200, 1200)
torch.Size([1, 3, 1200, 1200])
(1200, 1200)

Thanks to ptrblck. :slight_smile:


Thank you so much! You’ve made it very clear