Dataloader for semantic segmentation

Hi Everyone,

I am very new to PyTorch and deep learning in general. I have two folders, one with images and another with the pixel-label maps of the corresponding images. I would like to know how to use the DataLoader to make a train_loader and a validation_loader if the only thing I know is the path to these folders. I am trying to do something similar to
but instead of the CSV file in the tutorial I have a PNG pixel-label map for every image. Any help is appreciated.

I am assuming there is a folder that contains the sub-folders image and mask, and that the filenames in both are the same. This is a template; you can modify it and use it. All the best.

import glob
import os

import cv2
import torch
import torch.utils.data as data

class DataLoaderSegmentation(data.Dataset):
    def __init__(self, folder_path):
        super(DataLoaderSegmentation, self).__init__()
        self.img_files = glob.glob(os.path.join(folder_path, 'image', '*.png'))
        self.mask_files = []
        for img_path in self.img_files:
            # the mask has the same filename as the image, in the mask sub-folder
            self.mask_files.append(os.path.join(folder_path, 'mask', os.path.basename(img_path)))

    def __getitem__(self, index):
        img_path = self.img_files[index]
        mask_path = self.mask_files[index]
        data = cv2.imread(img_path)                          # read image (OpenCV; PIL works too)
        label = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)  # read label map
        return torch.from_numpy(data).float(), torch.from_numpy(label).float()

    def __len__(self):
        return len(self.img_files)
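Since the original question asked for a train_loader and a validation_loader, here is a minimal sketch of that step, assuming a Dataset like the one above (a dummy in-memory dataset stands in for it so the snippet runs without any files on disk):

```python
import torch
from torch.utils.data import DataLoader, Dataset, random_split

# Dummy stand-in for DataLoaderSegmentation so the sketch is self-contained.
class DummySegDataset(Dataset):
    def __init__(self, n):
        self.n = n
    def __getitem__(self, index):
        return torch.zeros(3, 8, 8), torch.zeros(8, 8)   # fake image, fake mask
    def __len__(self):
        return self.n

dataset = DummySegDataset(10)
n_val = int(0.2 * len(dataset))                          # hold out 20% for validation
train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])
train_loader = DataLoader(train_set, batch_size=4, shuffle=True)
val_loader = DataLoader(val_set, batch_size=4, shuffle=False)
```

With the real dataset you would just replace DummySegDataset with DataLoaderSegmentation(folder_path); the split and the two loaders stay the same.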

@Balamurali_M Thank you very much, I will try this

This helped me a lot, I have a question regarding one of the lines in your code. Why do this

for img_path in self.img_files:

and not use the glob.glob method for the mask files as well? Wouldn't the label files be in the same order as the img_files if they have the same names?

I was assuming two separate folders with the same filenames. You can adapt it accordingly.
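To illustrate the ordering point with hypothetical paths: globbing the two folders independently only pairs files correctly if both listings happen to sort identically, whereas deriving each mask path from its image path matches by filename regardless of the order glob returns.

```python
import os

# Hypothetical image list; glob does not guarantee any particular order.
img_files = ['data/image/b.png', 'data/image/a.png']

# The template's approach: build each mask path from its image path,
# so the i-th mask always belongs to the i-th image.
mask_files = [os.path.join('data', 'mask', os.path.basename(p)) for p in img_files]
```

A second glob over the mask folder would also work, but only after sorting both lists and only if neither folder contains extra files.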

I applied a similar approach, but I am getting the error TypeError: expected np.ndarray (got Tensor). What am I doing wrong?

You need to be more specific about your error; paste the line of code the error points to. All I can tell from the message is that some part of your code expects a NumPy array but is being given a torch Tensor. Without more context it is very hard to say what you are doing wrong; it would be like my code giving the error (expected character got integer).
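For what it's worth, one common way to get exactly that message is calling torch.from_numpy on something that is already a Tensor (for example, reading the data with a loader that already returns Tensors and then converting again). A minimal reproduction:

```python
import numpy as np
import torch

arr = np.zeros((2, 2), dtype=np.float32)
t = torch.from_numpy(arr)      # fine: the input is a NumPy array

# Passing the resulting Tensor back in raises the TypeError from the post.
try:
    torch.from_numpy(t)
    raised = False
except TypeError:
    raised = True
```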


I resolved it, though I am now encountering another error. Did you apply the same transformations to both the images and the masks? I have the following transformations:

data_transforms = transforms.Compose([transforms.RandomCrop((512, 512)),
                                      transforms.Normalize(mean=train_mean, std=train_std)])

Here, rotation and flips should be applied to both the images and the masks, while the remaining transforms apply only to the images. I followed this post, but I can't use lambda transforms this way, so I am looking for a workaround.

I have not used transforms for my images, as I have a very large dataset. But if you are using transforms, the same transforms have to be applied to the image and the mask. Since you are using random rotations and flips, make sure each image-mask pair goes through the same random transforms.

i.e., if ImageA is randomly rotated by 10 degrees, make sure MaskA is also rotated by 10 degrees.
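One way to guarantee that is to draw each random decision once and then apply it to both tensors. A sketch using plain torch ops (paired_transform is a hypothetical helper name; torchvision's functional API offers rotate/crop equivalents of the same pattern):

```python
import random
import torch

def paired_transform(image, mask):
    # Sample each random decision once, then apply it to both tensors,
    # so the image and its mask stay aligned.
    if random.random() < 0.5:
        image = torch.flip(image, dims=[-1])   # horizontal flip
        mask = torch.flip(mask, dims=[-1])
    if random.random() < 0.5:
        image = torch.flip(image, dims=[-2])   # vertical flip
        mask = torch.flip(mask, dims=[-2])
    return image, mask
```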

Did you figure out a way to augment both the image and mask together?

Yes, thanks.
I did it by customizing the data loader class, something like this:

import torchvision.transforms.functional as TF

class customdataloader():
    def transform(self, image, mask):
        image = TF.rotate(image, 90)   # apply the same rotation to the image...
        mask = TF.rotate(mask, 90)     # ...and to its mask
        return image, mask

Perfect. I did something similar recently.
