How to solve this error - Invalid dimensions for image data

```python
import numpy as np
import matplotlib.pyplot as plt

def img_convert(tensor):
    image = tensor.clone().detach().numpy()
    image = image.transpose(1, 2, 0)
    image = image * np.array(0.5,) + np.array(0.5,)
    image = image.clip(0, 1)
    return image

dataiter = iter(training_loader)
images, labels = next(dataiter)
fig = plt.figure(figsize=(25, 4))

for idx in np.arange(20):
    ax = fig.add_subplot(2, 10, idx+1, xticks=[], yticks=[])
    plt.imshow(img_convert(images[idx]))
```

Can someone help me with this error? I can't figure out how to resolve it. First I had a grayscale problem, and after fixing that there is a new issue:

Invalid dimensions for image data

Could you print the shape of images[idx] and the returned numpy array from img_convert?

PS: You can add code snippets using three backticks ``` :wink:


On printing the shapes, I find they are reversed: images[idx] has shape torch.Size([1, 28, 28]), and the array returned from img_convert has shape (28, 28, 1).

However, if I add a transpose call such as images[idx].transpose(1, 2, 0), it fails because torch.Tensor.transpose takes exactly two positional arguments. Which two should I give?
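For context, torch.Tensor.transpose swaps exactly two dimensions, whereas NumPy's transpose (and torch's .permute) accepts the full axis order; a minimal sketch in NumPy:

```python
import numpy as np

chw = np.zeros((1, 28, 28))    # channel-first layout, as PyTorch stores images
hwc = chw.transpose(1, 2, 0)   # NumPy transpose accepts the full axis order
print(hwc.shape)               # (28, 28, 1)

# For a torch.Tensor, the equivalent call is images[idx].permute(1, 2, 0);
# torch.Tensor.transpose(dim0, dim1) only swaps two dimensions.
```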

If you are dealing with grayscale images, you should remove the channel dimension for matplotlib:

```python
plt.imshow(np.random.randn(24, 24, 3))  # works
plt.imshow(np.random.randn(24, 24))     # works
plt.imshow(np.random.randn(24, 24, 1))  # fails
```
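One way to drop that size-1 channel axis is np.squeeze; a quick sketch:

```python
import numpy as np

img = np.random.rand(24, 24, 1)   # grayscale image with a trailing channel axis
img2d = np.squeeze(img, axis=2)   # remove the size-1 axis -> shape (24, 24)
print(img2d.shape)                # (24, 24)
```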

So what do you suggest I do in my above-mentioned code? Should I remove the channel dimension in the img_convert method or in the for loop below it?

I would add something like:

```python
if image.shape[2] == 1:
    image = image[:, :, 0]
return image
```

into your img_convert method.
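Putting the pieces together, the patched img_convert might look like this (a sketch, reusing the normalization constants from the original post):

```python
import numpy as np

def img_convert(tensor):
    # Detach from the autograd graph and convert to NumPy: shape (C, H, W)
    image = tensor.clone().detach().numpy()
    # Reorder to matplotlib's (H, W, C) layout
    image = image.transpose(1, 2, 0)
    # Undo Normalize(mean=0.5, std=0.5) and clamp to the displayable range
    image = image * 0.5 + 0.5
    image = image.clip(0, 1)
    # Grayscale: drop the size-1 channel so plt.imshow accepts the array
    if image.shape[2] == 1:
        image = image[:, :, 0]
    return image
```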


Thanks, it worked!!!