How do you train with multiple dataloaders using multiprocessing?

Hello.

I’m trying to train using multiprocessing.

I have a question.

I want to create 3 dataloaders with different input sizes and batch sizes, and train with them using multiprocessing.

For example, create 3 dataloaders with input sizes of 512x512, 256x256, and 128x128 and batch sizes of 2, 4, and 8, then train a single model on all of them using multiprocessing.

Here is a simplified dataloader example:

```python
dataloader_list = [[], []]  # [train_loaders, val_loaders]
multi_scales = [0.5, 1.0, 1.5]  # scale value
multi_batch = [8, 4, 2]         # batch size

dataset_cls = getattr(ds, args.dataset.replace("CULane", "VOCAug") + 'DataSet')

for i in range(len(multi_scales)):
    train_dataset = dataset_cls(
        data_list=args.train_list,
        transform=torchvision.transforms.Compose([
            tf.ResizeScale(size=(multi_scales[i], multi_scales[i]),
                           interpolation=(cv2.INTER_LINEAR, cv2.INTER_NEAREST)),
            tf.GroupRandomCropRatio(),
            tf.GroupRandomRotation(),
            tf.GroupNormalize(),
        ]))
    train_loader = torch.utils.data.DataLoader(
        train_dataset, batch_size=multi_batch[i], shuffle=True,
        num_workers=args.workers, pin_memory=False, drop_last=True)

    val_dataset = dataset_cls(
        data_list=args.val_list,
        transform=torchvision.transforms.Compose([
            tf.GroupRandomScale(size=(multi_scales[i], multi_scales[i]),
                                interpolation=(cv2.INTER_LINEAR, cv2.INTER_NEAREST)),
            tf.GroupRandomCropRatio(),
            tf.GroupNormalize(),
        ]))
    val_loader = torch.utils.data.DataLoader(
        val_dataset, batch_size=multi_batch[i], shuffle=False,
        num_workers=args.workers, pin_memory=False)

    dataloader_list[0].append(train_loader)
    dataloader_list[1].append(val_loader)
```

The code above is only an example, not my exact code.

In a way, this can be seen as a form of multi-scale training.
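To make the idea concrete, here is a minimal runnable sketch with dummy data (random tensors stand in for the real VOCAug/CULane datasets, and 256 is an assumed base resolution) that builds the three loaders and interleaves their batches within one epoch, in a single process:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy datasets standing in for the real datasets; each scale
# gets its own input size and batch size.
multi_scales = [0.5, 1.0, 1.5]
multi_batch = [8, 4, 2]

loaders = []
for scale, bs in zip(multi_scales, multi_batch):
    size = int(256 * scale)  # assumed base resolution of 256
    images = torch.randn(16, 3, size, size)
    labels = torch.randint(0, 2, (16,))
    loaders.append(DataLoader(TensorDataset(images, labels),
                              batch_size=bs, shuffle=True, drop_last=True))

# Interleave batches from all scales within one epoch;
# zip stops when the shortest loader is exhausted.
for batches in zip(*loaders):
    for images, labels in batches:
        pass  # forward/backward on this scale's batch
```

This single-process version already gives multi-scale behavior; multiprocessing would only be needed to run the three loops in parallel.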

Can multiple dataloaders be shared across processes?
If not, how can I train with the multiple dataloaders I created using multiprocessing?

Thank you in advance.