import os
from PIL import Image

img = Image.open(os.path.join(params['train_data_dir'], "train_04000.png"))
print(img.size)  # (64, 64)
The above code reads an image of size (64, 64). When I convert the image to a tensor using torchvision.transforms, the size becomes 4 x 64 x 64. I am unable to figure out why.
from torchvision import transforms

trans = transforms.Compose([transforms.ToTensor()])
torch_img = trans(img)
print(torch_img.size())
# torch.Size([4, 64, 64])