Wrong arrangement of elements after reshape

Hi, I have a torch tensor that represents an RGB image (Channels x Height x Width). After I change its channel order with a permute to get a BGR image, I'm not able to visualize it: when I convert the tensor to a NumPy array and perform a reshape to change the dimension order to (H x W x C), the values from the channels get mixed together.

For example, if the first channel is full of 1s, the second channel full of 2s and the third channel full of 3s, after the NumPy reshape every channel contains a mix of 1s, 2s and 3s.

Here is the code:


import numpy as np
import matplotlib.pyplot as plt

x = ...  # torch tensor, RGB image of shape (c, h, w)
permute = [2, 1, 0]
x = x[permute, :, :]
y = x.numpy()
c, h, w = y.shape
y = np.reshape(y, (h, w, c))
plt.imshow(y)  # displays a corrupted image
plt.show()

np.transpose should do the trick:

import torch

x = torch.cat((torch.ones(1, 24, 24),
               torch.ones(1, 24, 24)*2,
               torch.ones(1, 24, 24)*3), 0)

permute = [2, 1, 0]
x = x[permute, :, :]
y = x.numpy()
c, h, w = y.shape
y = y.transpose(1, 2, 0)
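To see why this matters, here is a toy sketch (not from the thread): reshape only reinterprets the flat (c, h, w) buffer without moving any data, while transpose actually permutes the axes, so only transpose keeps each channel intact:

```python
import numpy as np

# Toy (c, h, w) array: channel 0 is all 1s, channel 1 all 2s, channel 2 all 3s
y = np.stack([np.full((4, 4), v, dtype=np.float32) for v in (1, 2, 3)])

# reshape reinterprets the flat buffer, so channel values end up mixed
bad = y.reshape(4, 4, 3)
print(np.unique(bad[..., 0]))   # [1. 2. 3.] -- all three values leak into one channel

# transpose actually moves the axes, keeping each channel intact
good = y.transpose(1, 2, 0)
print(np.unique(good[..., 0]))  # [1.]
```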

It works with a “toy” tensor like that, but on the real image it still doesn’t work and I don’t know why. Thank you anyway.

That’s strange. If you like, you can upload the image somewhere and I’ll have a look.


Thank you very much for your help. You can find the image at this link; it’s a float torch tensor, already in BGR, with values between 0 and 255.
https://drive.google.com/open?id=14hgJVOzG8Nklra9nFYhJJwoZqMYhDlbT
Thank you again.

It seems you’ve reshaped/transposed the saved image somehow, so that the pixel positions no longer match.
I could partially restore it with:

x = torch.load('patchclean.pth')
permute = [2, 1, 0]
x = x[permute, :, :]
y = x.numpy()
a = y.flatten()
a = a.reshape(224, 224, 3)
plt.imshow(a)

You can see some people, but the colors are still inverted.
Could you explain how you loaded and saved the image?


Before storing the image I just did a permute like you did, to switch the color channels from RGB to BGR, and then I saved it with torch.save().

It seems like performing the permute to get the image in BGR causes problems with the subsequent NumPy reshape.

As you showed me, converting the image back to RGB with another permute solves the problem, but I can’t understand why I can’t visualize the image in BGR.
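One likely reason: matplotlib’s imshow always interprets the last axis of an (h, w, 3) array as R, G, B — it has no BGR mode — so a BGR array displays with red and blue swapped rather than with its true colors. A minimal sketch (the array here is made up for illustration) of flipping the channel axis back just for display:

```python
import numpy as np

# A made-up 2x2 BGR float image: pure blue, i.e. channel 0 (B) is 1.0
bgr = np.zeros((2, 2, 3), dtype=np.float32)
bgr[..., 0] = 1.0

rgb = bgr[..., ::-1]  # reverse the channel axis: BGR -> RGB
print(rgb[0, 0])      # [0. 0. 1.] -- blue now sits in the last slot, as imshow expects
# plt.imshow(rgb) would now show the correct colors
```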

Could you upload the original image and your processing code (RGB -> BGR)?


Here is the original image (x); it’s a float tensor in RGB with values between 0 and 1, read straight from the dataset.
This is how I perform the permute:

permute = [2, 1, 0]
x = x[permute, :, :] * 255.
torch.save(x, os.path.join(opt.outf, 'patchclean.pth'))
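Putting the thread together, here is a minimal end-to-end sketch under the assumptions stated above (float RGB in [0, 1], shape (3, 224, 224); the save/load goes through an in-memory buffer instead of patchclean.pth just to keep the example self-contained). Note that plt.imshow clips float input outside [0, 1], so the * 255. scaling also has to be undone before display:

```python
import io
import torch

# Stand-in for the dataset image: float RGB in [0, 1], shape (c, h, w)
x = torch.rand(3, 224, 224)

# RGB -> BGR and scale to [0, 255], as in the snippet above
bgr = x[[2, 1, 0], :, :] * 255.

# Round-trip through torch.save / torch.load (in memory instead of a .pth file)
buf = io.BytesIO()
torch.save(bgr, buf)
buf.seek(0)
loaded = torch.load(buf)

# To display: transpose (not reshape) to (h, w, c), flip the channels back to
# RGB, and rescale to [0, 1] so plt.imshow does not clip the float values
y = loaded.numpy().transpose(1, 2, 0)[:, :, ::-1] / 255.
print(y.shape)  # (224, 224, 3)
```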