Creating Dataset From Grayscale Image

Hello everyone. I’m trying to create a custom dataset from a grayscale image (see the code below), but when I call the DataLoader, it returns a 3D tensor of shape Batch x Rows x Cols rather than Batch x Channels x Rows x Cols. Can anyone help me?

    def __getitem__(self,idx): 
        img = skimage.io.imread(self.filenames[idx])
        return img

and my transformation is

    trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])

Try to transform the numpy array to a tensor and unsqueeze dim0:

    img = skimage.io.imread(self.filenames[idx])  # H x W numpy array
    x = torch.from_numpy(img).unsqueeze(0)        # 1 x H x W tensor
    return x
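
For context, here is a minimal sketch of how that fix could sit inside the custom Dataset (the class name is an assumption; self.filenames is taken from the snippet in the question):

    import skimage.io
    import torch
    from torch.utils.data import Dataset

    class GrayscaleDataset(Dataset):
        # Sketch only: the class name is hypothetical.
        def __init__(self, filenames):
            self.filenames = filenames

        def __len__(self):
            return len(self.filenames)

        def __getitem__(self, idx):
            img = skimage.io.imread(self.filenames[idx])  # H x W numpy array
            x = torch.from_numpy(img).unsqueeze(0)        # 1 x H x W tensor
            return x

A DataLoader over this dataset should then yield batches of shape Batch x 1 x Rows x Cols. The Normalize transform from the first post can still be applied to the tensor afterwards (after converting it to float).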

Hi!
I usually add transforms.Grayscale(num_output_channels=1) to the Compose list for such purposes.

Thanks for your reply. However, I’ve tried this and it doesn’t seem to work. How do you use it?

Something like this:

    result = transforms.Compose([
        transforms.ToPILImage(),
        transforms.Grayscale(num_output_channels=1),
        transforms.ToTensor(),
        transforms.Normalize([134.96], [2.077])
    ])(img)

img is a three-channel image, by the way.
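
As a rough sketch of how this could be wired into the dataset (the class and attribute names are assumptions; the normalization values are the ones from the snippet above):

    import skimage.io
    from torch.utils.data import Dataset
    from torchvision import transforms

    trans = transforms.Compose([
        transforms.ToPILImage(),
        transforms.Grayscale(num_output_channels=1),  # 3 channels -> 1 channel
        transforms.ToTensor(),                        # 1 x H x W float tensor in [0, 1]
        transforms.Normalize([134.96], [2.077])       # values taken from the post above
    ])

    class ImageDataset(Dataset):
        # Sketch only: the class name is hypothetical.
        def __init__(self, filenames, transform):
            self.filenames = filenames
            self.transform = transform

        def __len__(self):
            return len(self.filenames)

        def __getitem__(self, idx):
            img = skimage.io.imread(self.filenames[idx])  # H x W x 3 numpy array
            return self.transform(img)                    # 1 x H x W tensor

A DataLoader built on this dataset would then return batches of shape Batch x 1 x H x W.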
