Continuing the discussion from Memory error when trying to train with two different dataloaders in parallel:
Hello, I am trying to do the same thing. I am not getting any memory error, but training is very slow: around 2 hours per epoch. The larger dataset has 25,000 images and the smaller one has 5,000 images. Both datasets have images of shape 512×512×3, and I am using a batch size of 32. I am training on 3 GTX 1080 Ti GPUs on a machine with 512 GB of RAM. Is there an efficient way to iterate through the two dataloaders together?
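One common pattern for iterating two loaders of unequal length is to `zip` the larger loader with `itertools.cycle` of the smaller one, so each epoch makes one full pass over the large dataset while the small one repeats. The sketch below uses plain lists as hypothetical stand-ins for the two `DataLoader`s (the names and batch counts are illustrative, not from the post); with real PyTorch loaders you would also want `num_workers > 0` and `pin_memory=True`, since loading with the default `num_workers=0` is a frequent cause of epochs this slow:

```python
from itertools import cycle

# Hypothetical stand-ins for the two DataLoaders.
# In practice: ~25000/32 batches for the large set, ~5000/32 for the small.
large_loader = [f"large_batch_{i}" for i in range(8)]
small_loader = [f"small_batch_{i}" for i in range(2)]

# Pair every batch of the large loader with a (recycled) batch of the
# small one; the epoch length is set by the large loader.
pairs = list(zip(large_loader, cycle(small_loader)))

print(len(pairs))   # one training step per large-loader batch -> 8
print(pairs[2])     # ('large_batch_2', 'small_batch_0')
```

Note that `cycle` keeps the order of the small loader fixed across repeats; if reshuffling matters, re-creating the small loader's iterator when it is exhausted (e.g. catching `StopIteration` from `next()`) is an alternative.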