DataLoader: same shuffle order with multiple datasets

I have four image datasets: full image, face image, face-mask image, and landmarks image.

I am developing a VAE, and my goal is to encode the full image and reconstruct each of the face, face-mask, and landmarks images.

But when I load the data with a custom Dataset and DataLoader, each dataset is shuffled independently, so the images no longer correspond.

Is there any way to get the same shuffled order for multiple datasets?

import glob
import os

from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms


class ImageDataset(Dataset):
    def __init__(self, paths, is_aug=False):
        super(ImageDataset, self).__init__()

        # Length
        self.length = len(paths)
        # Image path
        self.paths = paths
        # Augmentation (ImgAugTransform is a user-defined imgaug wrapper that
        # returns a numpy array, hence the conversion back to a PIL image)
        self.is_aug = is_aug
        self.transform = transforms.Compose([
            transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1),
            ImgAugTransform(),
            lambda x: Image.fromarray(x),
        ])
        # Preprocess
        self.output = transforms.Compose([
            transforms.ToTensor(),
        ])

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        # Image
        img = Image.open(self.paths[idx])
        # Augment
        if self.is_aug:
            img = self.transform(img)
        # Preprocess
        img = self.output(img)

        return img


def get_celeba_loaders(batch_train, batch_test, path, total_size):
    test_num = 128
    images = glob.glob(os.path.join(".", "ImageFolder", path, "*.jpg"))
    print(len(images))
    datasets = {
        "train": ImageDataset(images[test_num:total_size], True),
        "test": ImageDataset(images[:test_num], False)
    }
    dataloaders = {
        "train": DataLoader(datasets["train"], batch_size=batch_train, shuffle=True),
        "test": DataLoader(datasets["test"], batch_size=batch_test, shuffle=False)
    }

    return dataloaders


d1 = ud.get_celeba_loaders(args.batch_train, args.batch_test, 'Original', 100000)
d2 = ud.get_celeba_loaders(args.batch_train, args.batch_test, 'face_part', 100000)

for i, x in enumerate(zip(d1['train'], d2['train'])):
    origin = x[0].to(device)
    rec_l = x[1].to(device)
    imsave(origin, rec_l, os.path.join('.', f"epoch", f"lmtrain.png"), 8, 8)
    break

This is my code for the datasets; the images don't correspond anymore after shuffling.

How can I get the same order when shuffling?

If you are trying to sample data from multiple datasets, I would recommend wrapping all these unshuffled datasets in a custom Dataset and shuffling this “wrapper” dataset:

from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, datasetA, datasetB):
        self.datasetA = datasetA
        self.datasetB = datasetB
        
    def __getitem__(self, index):
        xA = self.datasetA[index]
        xB = self.datasetB[index]
        return xA, xB
    
    def __len__(self):
        return len(self.datasetA)
    
datasetA = ...
datasetB = ...
dataset = MyDataset(datasetA, datasetB)
loader = DataLoader(dataset, batch_size=10, shuffle=True)

This would make sure to shuffle the indices for MyDataset, which would apply the same index to each internal dataset.
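Applied to the original question, this could look roughly as follows. It is only a sketch: it reuses the ImageDataset and MyDataset classes from above, assumes the file names in the Original and face_part folders correspond one-to-one (so both path lists are sorted to align by index), and the batch size is arbitrary:

import glob
import os

from torch.utils.data import DataLoader

# Sort both path lists so index i refers to the same sample in each folder
original_paths = sorted(glob.glob(os.path.join(".", "ImageFolder", "Original", "*.jpg")))
face_paths = sorted(glob.glob(os.path.join(".", "ImageFolder", "face_part", "*.jpg")))

datasetA = ImageDataset(original_paths, is_aug=True)
datasetB = ImageDataset(face_paths, is_aug=False)

# Shuffling happens only in the wrapper, so both modalities see the same indices
dataset = MyDataset(datasetA, datasetB)
loader = DataLoader(dataset, batch_size=64, shuffle=True)

for origin, rec_l in loader:
    # origin and rec_l stay paired after shuffling
    ...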

Quick question: what if my dataset has NLP data for which I can't create a dataset that returns two items, since both items are dicts that the downstream collator complains about (the collator expects a dict, not a tuple of dicts)? What should I do?

For a code example, refer to tkn_fin and tkn_fin2 in this notebook:

I don’t think this is true as this code snippet works for me:

import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self):
        self.dataA = torch.randn(20, 1)
        self.dataB = torch.randn(20, 1)
        
    def __getitem__(self, index):
        xA = self.dataA[index]
        xB = self.dataB[index]
        return {"dataA": xA}, {"dataB": xB}
    
    def __len__(self):
        return len(self.dataA)


dataset = MyDataset()
loader = DataLoader(dataset, batch_size=10, shuffle=True)

for dataA, dataB in loader:
    print(dataA)
    print(dataB)
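That said, if the downstream collator strictly requires a single dict per sample rather than a tuple of dicts, one possible workaround is to merge the two dicts before the default batching. This is a minimal sketch, assuming the keys don't collide and a PyTorch version that exposes torch.utils.data.default_collate (older releases have it in torch.utils.data.dataloader):

import torch
from torch.utils.data import Dataset, DataLoader, default_collate


class MyDataset(Dataset):
    def __init__(self):
        self.dataA = torch.randn(20, 1)
        self.dataB = torch.randn(20, 1)

    def __getitem__(self, index):
        # Same tuple-of-dicts output as in the snippet above
        return {"dataA": self.dataA[index]}, {"dataB": self.dataB[index]}

    def __len__(self):
        return len(self.dataA)


def merge_collate(batch):
    # batch is a list of (dictA, dictB) tuples; merge each pair into one flat
    # dict (keys must not collide), then let the default collation batch them
    merged = [{**a, **b} for a, b in batch]
    return default_collate(merged)


loader = DataLoader(MyDataset(), batch_size=10, shuffle=True, collate_fn=merge_collate)

for batch in loader:
    # batch is now a single dict containing both datasets' fields
    print(batch["dataA"].shape, batch["dataB"].shape)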