Convert a grayscale numpy image of shape (H x W x C) into a torch image of shape (C x H x W)

Hey, I have a grayscale numpy ndarray of shape (224, 224), which I assume is in (H x W x C) format. However, I need to convert it into (C x H x W) format. When I printed the shape, it only showed H x W. Where is the channel, or is it not shown because the image is grayscale?

:smiley:
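For reference, this is roughly what I'm doing (a sketch assuming the image is loaded with PIL; the filename is just a placeholder):

import numpy as np
from PIL import Image

img = Image.open("example.png").convert("L")  # hypothetical grayscale image file
numpy_array = np.asarray(img)
print(numpy_array.shape)  # (224, 224) -- a PIL grayscale ("L" mode) image has no channel axis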

Use

import torch

numpy_array = ...  # your numpy array image with shape (224, 224)
tensor = torch.from_numpy(numpy_array)  # torch.Size([224, 224])
tensor = tensor.unsqueeze(dim=0)  # torch.Size([1, 224, 224])
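If your array did have an explicit channel axis in (H x W x C) order, a permute after from_numpy would give the (C x H x W) layout; a minimal sketch with a hypothetical 3-channel array:

import numpy as np
import torch

hwc_array = np.zeros((224, 224, 3), dtype=np.float32)  # hypothetical (H x W x C) image
tensor = torch.from_numpy(hwc_array).permute(2, 0, 1)  # torch.Size([3, 224, 224])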

Thank you so much @simaiden! It worked.

You can also use transforms.ToTensor() (docs); it converts a PIL Image or numpy image in the range [0, 255] to a tensor in the range [0, 1].

from torchvision.transforms import ToTensor

numpy_array = ...  # your numpy array image with shape (224, 224)
tensor = ToTensor()(numpy_array)  # torch.Size([1, 224, 224])


Hey, is transforms.ToTensor() equivalent to numpy_array / 255? Does it normalize the values to the range [0, 1]?
P.S. Thanks for the docs link :heart:


According to the docs:

Converts a PIL Image or numpy.ndarray (H x W x C) in the range [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0] if the PIL Image belongs to one of the modes (L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK, 1) or if the numpy.ndarray has dtype = np.uint8

You can see the difference by running:

import numpy as np
from torchvision.transforms import ToTensor

numpy_array = np.ones((224, 224))  # float64 array
numpy_array_uint8 = np.ones((224, 224, 1), dtype=np.uint8)
tensor = ToTensor()(numpy_array)  # values stay 1.0 (no rescaling for float input)
tensor_uint8 = ToTensor()(numpy_array_uint8)  # values become 1/255 ≈ 0.0039
print(tensor)
print(tensor_uint8)

So it is only equivalent if the numpy array has dtype = np.uint8.
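If your array is floating point, ToTensor() won't rescale it, so you have to divide yourself; a minimal sketch, assuming a hypothetical float image with values in [0, 255]:

import numpy as np
import torch

float_array = np.random.rand(224, 224).astype(np.float32) * 255  # hypothetical float image in [0, 255]
tensor = torch.from_numpy(float_array).unsqueeze(0) / 255.0  # manual scaling to [0, 1]
print(tensor.min().item(), tensor.max().item())  # both within [0, 1]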
