Unable to convert greyscale image to tensors properly

import os
from PIL import Image

img = Image.open(os.path.join(params['train_data_dir'], "train_04000.png"))
print(img.size)

The above code reads an image of size (64, 64). When I convert the image to a tensor using torchvision.transforms, the size becomes 4 x 64 x 64. I can't figure out why.

from torchvision import transforms

trans = transforms.Compose([transforms.ToTensor()])
torch_img = trans(img)
print(torch_img.size())
#torch.Size([4, 64, 64])

Solved it by converting the image to RGB first and then converting it to a tensor.

A PNG may have an alpha (transparency) channel, which usually comes last, so you could also just keep the first three channels of the tensor with img[:3].

Best regards

Thomas