I am implementing an image captioning model in PyTorch using the MSCOCO dataset. While training the model I found that a few images (237 out of 123,287) are grayscale (channel = 1), so I am getting an error. The transform I am using is given below.
data_transform = transforms.Compose([transforms.Resize((224, 224)),
                                     transforms.ToTensor()])
dset=COCODataset(filepath,label,filename,annotation_sen,annotation_tokens,anno_tokens_vector,data_transform)
train_loader = DataLoader(dset, batch_size=1, num_workers=0)
Is there any way in torchvision.transforms to convert those grayscale images into 3-channel images?