# Transforming numpy padded images to PIL images

Hi everyone,

I first padded my PIL image list; to do that, I had to convert the PIL images into a numpy array. Now, to apply torchvision transforms to my images, I need to convert them back to PIL first. But my images look like this:

Is there anyone who could save me?

Your screenshots are a bit hard to read; it's better to post the code directly by wrapping it in three backticks ```.
Did you call `reshape` or `view` to swap some axes of the image? It looks like you are interleaving the image data.
Note that you should call e.g. `x.permute(2, 0, 1)` to change a `[H, W, C]` tensor to `[C, H, W]`.
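The difference can be sketched with the NumPy equivalent (`np.transpose` behaves like `tensor.permute`; a tiny hypothetical 2x2 image stands in for real data):

```python
import numpy as np

# A tiny "image": 2x2 pixels, 3 channels, laid out as (H, W, C).
img = np.arange(12).reshape(2, 2, 3)

# Correct: swap the axes to (C, H, W) while keeping pixels intact.
chw = np.transpose(img, (2, 0, 1))   # same idea as tensor.permute(2, 0, 1)
print(chw.shape)                     # (3, 2, 2)

# Wrong: reshape keeps the flat memory order, interleaving pixel values.
wrong = img.reshape(3, 2, 2)
print(np.array_equal(chw, wrong))    # False
```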

Sorry for bad image.

```
import glob
from os.path import join as pjoin

import numpy as np
from PIL import Image
from scipy.io import loadmat

data_dir = "C:/Users/Majid/Desktop/data/LSP"            # dataset root
joints_dir = pjoin(data_dir, "joints")
mat_file = loadmat(pjoin(joints_dir, "joints.mat"))     # joints annotations
joints = mat_file['joints']

angle = []
image_list = []
for filename in glob.glob('C:/Users/Majid/Desktop/data/LSP/images/*.jpg'):  # read images from the dataset
    im = Image.open(filename)
    image_list.append(np.asarray(im))

image_list = np.array(image_list)
```

Then I did the padding like this:

```
# x_shape/y_shape hold each image's width/height; x_max/y_max are the largest
y_shape = [img.shape[0] for img in image_list]
x_shape = [img.shape[1] for img in image_list]
y_max, x_max = max(y_shape), max(x_shape)

padded = []
shifted_x = []
shifted_y = []
for i, x in enumerate(image_list):
    # Create a blank canvas with the maximum x and y dimensions
    fram = np.zeros((y_max, x_max, 3), dtype='uint8')
    # Place each image inside the canvas, shifted to the center
    y_off = int((y_max - y_shape[i]) / 2)
    x_off = int((x_max - x_shape[i]) / 2)
    fram[y_off:y_off + y_shape[i], x_off:x_off + x_shape[i]] = x
    padded.append(fram)
    shifted_x.append(x_off)
    shifted_y.append(y_off)
```
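The centering logic above can be checked on a tiny hypothetical example (a 2x2 image padded into a 4x4 canvas):

```python
import numpy as np

img = np.ones((2, 2, 3), dtype=np.uint8)   # tiny 2x2 "image"
y_max, x_max = 4, 4                        # target canvas size
y_off = (y_max - img.shape[0]) // 2        # vertical shift = 1
x_off = (x_max - img.shape[1]) // 2        # horizontal shift = 1

fram = np.zeros((y_max, x_max, 3), dtype=np.uint8)
fram[y_off:y_off + 2, x_off:x_off + 2] = img

print(fram[:, :, 0])
# [[0 0 0 0]
#  [0 1 1 0]
#  [0 1 1 0]
#  [0 0 0 0]]
```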
```
from torchvision import transforms

images_data = []
for j in padded:
    image_new = transforms.functional.to_pil_image(j, mode="RGB")
    images_data.append(image_new)
```

Now when I plot it, I get the wrong picture. I also tried changing it to CHW, but it got worse.

Thanks for the code!
I’ve debugged it and it seems `PIL` is unable to properly create an image out of the numpy array.
In fact, `Image.fromarray(j, mode="RGB")` creates the same interleaved output.

You could fix it via:

```
images_data = []
for j in padded:
    j = j * 255.                # scale float values in [0, 1] up to [0, 255]
    j = j.astype(np.uint8)      # cast to uint8 before creating the PIL image
    image_new = transforms.functional.to_pil_image(j, mode="RGB")
    images_data.append(image_new)
```
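A minimal NumPy-only sketch (using hypothetical random data, not the LSP images) of why the scaling step matters: casting a float image in `[0, 1]` straight to `uint8` truncates every pixel to 0.

```python
import numpy as np

# Hypothetical float64 "image" with values in [0, 1).
img = np.random.default_rng(0).random((4, 4, 3))

# Naive cast truncates all values to 0 -> a black image.
naive = img.astype(np.uint8)
print(naive.max())                  # 0

# Scaling to [0, 255] first preserves the pixel values.
fixed = (img * 255.).astype(np.uint8)
print(fixed.max() > 0)              # True
```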

I’m not sure why the transformation fails, but maybe it’s expected for `float64` inputs.
CC @fmassa who might know more about it.