Datasets and Dataloaders from single channel images stored in numpy arrays

I am trying to train a CNN on single-channel images stored in numpy arrays. I am using Grayscale(num_output_channels=3) to convert them to 3 channels so I can use transfer learning, but I keep getting the error shown below. Any help would be greatly appreciated.

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
import torchvision.transforms as tt

# Load training images and labels
image_train = np.load('data/Hela_Hoechst_GM130/x_train.npy') #(1848, 135, 135, 3)
label_train = np.load('data/Hela_Hoechst_GM130/y_train.npy') #(1848,)

# Load test images and labels
image_valid = np.load('data/Hela_Hoechst_GM130/X_test.npy') #(113, 135, 135, 3)
label_valid = np.load('data/Hela_Hoechst_GM130/Y_test.npy') #(113,)

# Extract the DNA channel (index 0); resulting shape (N, 135, 135)
train_ds_DNA = image_train[:, :, :, 0]
valid_ds_DNA = image_valid[:, :, :, 0]

train_tfms = tt.Compose([tt.Grayscale(num_output_channels=3)])
valid_tfms = tt.Compose([tt.Grayscale(num_output_channels=3)])

class MyDataset(Dataset):
    def __init__(self, data, target, transform):
        self.data = torch.from_numpy(data).float()
        self.target = torch.from_numpy(target).long()
        self.transform = transform
        
    def __getitem__(self, index):
        x = self.data[index]   # shape (135, 135): no channel dimension
        y = self.target[index]
        
        if self.transform:
            x = self.transform(x)
        
        return x, y
    
    def __len__(self):
        return len(self.data)


train_ds = MyDataset(train_ds_DNA, label_train, train_tfms)
train_dl = DataLoader(
    train_ds,
    batch_size=128,
    shuffle=False,
    num_workers=2,
    pin_memory=torch.cuda.is_available())

for i, (images, labels) in enumerate(train_dl):
    print(type(images))

I am getting this error:

TypeError: Input image tensor should have at least 3 dimensions, but found 2

Hi, after converting from numpy to a tensor, make sure the tensor has an explicit channel dimension (channels first), since this transform requires one when it operates on tensors. You can add it with x = x.unsqueeze(0), which inserts the channel dimension in the first position.
From the docs:

The image can be a PIL Image or a Tensor, in which case it is expected to have […, 3, H, W] shape, where … means an arbitrary number of leading dimensions
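
For example, a minimal sketch of the change inside __getitem__ (your class unchanged apart from the added unsqueeze):

    def __getitem__(self, index):
        x = self.data[index]   # shape (135, 135)
        y = self.target[index]
        x = x.unsqueeze(0)     # add channel dimension -> (1, 135, 135)

        if self.transform:
            x = self.transform(x)

        return x, y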

Thanks for your response. The shape of train_ds_DNA is (1848, 135, 135). After adding x = x.unsqueeze(0) after x = self.data[index] in the code above, each item becomes (1, 135, 135), so the dataset is effectively (1848, 1, 135, 135). The reason for using tt.Grayscale was to get the shape to (1848, 3, 135, 135) with the same data in all three channels, but the transform seems to require a three-channel input image. Is that not what it is supposed to do? The error now is:

Input image tensor should 3 channels, but found 1

Thanks again for your time.
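
For reference: when Grayscale runs on a tensor it calls rgb_to_grayscale, which in the torchvision version producing this error expects exactly three input channels, so it converts RGB to grayscale rather than expanding one channel to three. A minimal sketch of a workaround, assuming the (1, H, W) tensors produced by the unsqueeze(0) above, is to replicate the channel yourself:

import torch
import torchvision.transforms as tt

# Replicate the single channel instead of using Grayscale:
# repeat(3, 1, 1) turns (1, H, W) into (3, H, W) with identical channels.
train_tfms = tt.Compose([tt.Lambda(lambda x: x.repeat(3, 1, 1))])

x = torch.rand(1, 135, 135)   # one image after unsqueeze(0)
print(train_tfms(x).shape)    # torch.Size([3, 135, 135])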