Get more data using data loader

Dear all,

I have a dataset of about 60 images (size 500*500) with mask labels of the same size, and I want to randomly crop 40*40 patches from each image so that I can get more data.

But data.Dataset confuses me: since I load one image and crop it once in the transform, one epoch only yields 60 patches. I want to get more patches per epoch while still using a data loader.

The code is as follows:

import glob
import os

from PIL import Image
from torch.utils import data
from torchvision import transforms as T
from torchvision.transforms import functional as F

PATCH_SIZE = 40

class Data(data.Dataset):

    def __init__(self, imgs_root, gt_root, mode="train"):
        imgs = sorted(glob.glob(os.path.join(imgs_root, '*.png')))
        gts = [os.path.join(gt_root, os.path.basename(img)) for img in imgs]
        self.mode = mode

        if self.mode == "train":
            self.imgs = imgs[:int(0.7 * len(imgs))]
            self.gts = gts[:int(0.7 * len(imgs))]
        elif self.mode == "val":
            self.imgs = imgs[int(0.7 * len(imgs)):]
            self.gts = gts[int(0.7 * len(imgs)):]
        elif self.mode == "test":
            self.imgs = imgs
            self.gts = gts
        else:
            raise ValueError("mode must be 'train', 'val', or 'test'.")

    def transform(self, image, mask):
        grayscale = T.Grayscale()
        image = grayscale(image)  # (584, 565)
        mask = grayscale(mask)

        pad = T.Pad(padding=PATCH_SIZE // 2)
        image = pad(image)  # (605, 624)
        mask = pad(mask)

        i, j, h, w = T.RandomCrop.get_params(image, output_size=(PATCH_SIZE, PATCH_SIZE))
        image = F.crop(image, i, j, h, w)  # (40, 40)
        mask = F.crop(mask, i, j, h, w)

        totensor = T.ToTensor()
        image = totensor(image)  # torch.Size([1, 40, 40])
        mask = totensor(mask)

        return image, mask

    def __getitem__(self, index):
        # get paths
        img_path = self.imgs[index]
        label_path = self.gts[index]
        # load data (renamed from `data` to avoid shadowing the imported module)
        image = Image.open(img_path)
        label = Image.open(label_path)
        # transforms
        image, label = self.transform(image, label)

        return image, label

    def __len__(self):
        return len(self.imgs)

Can anyone help me?

Regards,
Pt

Firstly, why not just run more epochs? Since you crop randomly, you'll get different crops over the course of a few epochs, which would make more sense in my opinion.
But if you still want to pursue it, you can use this small hack:

    def __getitem__(self, index):
        index = index % len(self.imgs)
        """
        Rest same as before.
        Now you'll pass through each sample 20 times: for the jth image,
        all indices j + i * len(self.imgs) (0 <= i < 20) map back to that image.
        Make sure you follow a proper validation protocol and evaluate each image only once.
        It's also probably better to keep shuffle on while training.
        """
    def __len__(self):
        return 20 * len(self.imgs)
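As a sanity check, the trick can be exercised with a toy dataset (the names `RepeatedDataset` and `REPEATS` here are illustrative, not from the post): with `__len__` scaled by 20 and the index wrapped by `%`, a shuffled `DataLoader` visits every base sample exactly 20 times per epoch:

```python
from torch.utils.data import DataLoader, Dataset

REPEATS = 20  # illustrative expansion factor, matching the hack above

class RepeatedDataset(Dataset):
    """Toy stand-in for the image dataset: each sample is just its base index."""
    def __init__(self, n_samples):
        self.n_samples = n_samples

    def __getitem__(self, index):
        # map the expanded index back to a real sample, as in the hack above
        return index % self.n_samples

    def __len__(self):
        return REPEATS * self.n_samples

dataset = RepeatedDataset(60)
loader = DataLoader(dataset, batch_size=16, shuffle=True)
print(len(dataset))  # 1200: 60 base samples, each seen 20 times per epoch
```

With your real dataset, each of those 20 visits runs the random crop again, so one epoch gives 20 different patches per image.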

Thanks very much! Your idea really helped me a lot. :grinning: