I first padded my list of PIL images; to do that I had to convert the list to NumPy arrays. Now, to apply torchvision transforms, I need to convert the images back to PIL. But my images look like this:
Is there anyone who could save me?
Your screenshots are a bit hard to read; it's better to post the code directly by wrapping it in three backticks ```
Did you call `view` in order to swap some axes of the image? It looks like you are interleaving the image data. Note that you should call e.g. `x.permute(2, 0, 1)` to change a `[H, W, C]` tensor to `[C, H, W]`.
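The difference can be seen on a tiny hypothetical tensor: `permute` swaps the axes, while `view` merely reinterprets the same memory order and scrambles the channels.

```python
import torch

# Hypothetical 2x2 RGB image in [H, W, C] layout
hwc = torch.arange(12).reshape(2, 2, 3)

# permute correctly reorders axes: [H, W, C] -> [C, H, W]
chw = hwc.permute(2, 0, 1)

# view keeps the flat memory order, so the channels get interleaved
wrong = hwc.view(3, 2, 2)

print(torch.equal(chw, wrong))  # False: the two results differ
```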
Sorry for the bad image.
```python
import glob
from os.path import join as pjoin

import numpy as np
import scipy.io as sio
from PIL import Image

data_dir = "C:/Users/Majid/Desktop/data/LSP"  # Loading dataset
joints_dir = pjoin(data_dir, "joints")
mat_file = sio.loadmat(joints_dir)  # Reading joints.mat
joints = mat_file['joints']
angle = []
image_list = []
for filename in glob.glob('C:/Users/Majid/Desktop/data/LSP/images/*.jpg'):  # Reading images from dataset
    image_list.append(Image.open(filename))
```
Then I did the padding like this:
```python
from torchvision import transforms

# x_max, y_max and the per-image shapes x_shape, y_shape
# are computed from the image sizes beforehand
i = 0
padded = []
shifted_x = []
shifted_y = []
for x in image_list:
    fram = np.zeros((y_max, x_max, 3), dtype='uint8')  # Creating a blank space with the maximum x and y dimensions
    # Place each image inside its blank space, shifted to the center
    fram[int((y_max - y_shape[i]) / 2):y_shape[i] + int((y_max - y_shape[i]) / 2),
         int((x_max - x_shape[i]) / 2):x_shape[i] + int((x_max - x_shape[i]) / 2)] = x
    padded.append(fram)
    i = i + 1

padded = np.array(padded)
padded = padded / 255

for j in padded:
    image_new = transforms.functional.to_pil_image(j, mode="RGB")
```
Now when I plot it, it gives me the wrong picture. I also tried changing it to CHW, but it got worse.
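For reference, a self-contained sketch of the padding step with small hypothetical sizes (the real code computes `x_max`, `y_max` and the per-image shapes from the dataset):

```python
import numpy as np

# Two hypothetical small RGB images of different sizes
images = [np.ones((h, w, 3), dtype='uint8') for h, w in [(2, 3), (4, 2)]]
y_max = max(img.shape[0] for img in images)
x_max = max(img.shape[1] for img in images)

padded = []
for img in images:
    h, w = img.shape[:2]
    fram = np.zeros((y_max, x_max, 3), dtype='uint8')  # blank canvas
    # Offsets that center the image on the canvas
    y0 = (y_max - h) // 2
    x0 = (x_max - w) // 2
    fram[y0:y0 + h, x0:x0 + w] = img
    padded.append(fram)

padded = np.array(padded)
print(padded.shape)  # (2, 4, 3, 3): N x H_max x W_max x C
```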
Thanks for the code! I've debugged it, and it seems PIL is unable to properly create an image out of the NumPy array. `Image.fromarray(j, mode="RGB")` creates the same interleaved output.
You could fix it via:

```python
for j in padded:
    j = j * 255.
    j = j.astype(np.uint8)
    image_new = transforms.functional.to_pil_image(j, mode="RGB")
```
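The key step is the dtype conversion: a normalized float image in `[0, 1]` has to be rescaled to the 0-255 range and cast to `uint8` before it can be interpreted as RGB. A minimal sketch on a hypothetical single pixel (note that `astype(np.uint8)` truncates, it does not round):

```python
import numpy as np

j = np.array([[[1.0, 0.5, 0.0]]])  # hypothetical 1x1 RGB pixel in [0, 1]

j = j * 255.
j = j.astype(np.uint8)

print(j.dtype, j.ravel().tolist())  # uint8 [255, 127, 0]
```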
I'm not sure why the transformation fails for the float array, but maybe it's the expected behavior.
CC @fmassa who might know more about it.