Custom Augmentation for Custom Dataset

Requirement: Take MNIST data, apply some distortion, invert that distortion, feed the result to a cDCGAN, and generate samples.
However, the part where I create a custom dataset after applying the inversion is faulty: the current code throws the error shown below when I attempt to train the model.
Please advise.

Full code can be found here: https://colab.research.google.com/drive/1zNVxBtnLsmTu6sugQ-D_FDabOIXB-NDC
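For context, the part of my custom dataset involved in the error looks roughly like this (simplified; the class and attribute names other than transform4 are placeholders, the exact code is in the linked notebook):

```python
from torch.utils.data import Dataset

class DistortedMNIST(Dataset):  # placeholder name
    def __init__(self, images, masks, labels, transform4=None):
        self.images = images          # distorted MNIST images
        self.masks = masks            # per-image maps used to invert the distortion
        self.labels = labels
        self.transform4 = transform4  # a transforms.Compose pipeline

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        img, mask = self.images[index], self.masks[index]
        # perform augmentation
        if self.transform4:
            img = self.transform4(img, mask)  # calling the transform with both inputs
        return img, self.labels[index]
```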

training start!

TypeError                                 Traceback (most recent call last)
<ipython-input-...> in <module>()
     20     y_fake_ = torch.zeros(batch_size)
     21     y_real_, y_fake_ = Variable(y_real_.cuda()), Variable(y_fake_.cuda())
---> 22     for x_, y_ in train_loader:
     23         # train discriminator D
     24         D.zero_grad()

/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self)
    613         if self.num_workers == 0:  # same-process loading
    614             indices = next(self.sample_iter)  # may raise StopIteration
--> 615             batch = self.collate_fn([self.dataset[i] for i in indices])
    616             if self.pin_memory:
    617                 batch = pin_memory_batch(batch)

/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in <listcomp>(.0)
    613         if self.num_workers == 0:  # same-process loading
    614             indices = next(self.sample_iter)  # may raise StopIteration
--> 615             batch = self.collate_fn([self.dataset[i] for i in indices])
    616             if self.pin_memory:
    617                 batch = pin_memory_batch(batch)

/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataset.py in __getitem__(self, idx)
     79         else:
     80             sample_idx = idx - self.cumulative_sizes[dataset_idx - 1]
---> 81         return self.datasets[dataset_idx][sample_idx]
     82
     83     @property

<ipython-input-...> in __getitem__(self, index)
     15         # perform augmentation
     16         if self.transform4:
---> 17             img = self.transform4(img, mask)  # actually calling the function with necessary attributes
     18
     19

TypeError: __call__() takes 2 positional arguments but 3 were given

transforms.Compose currently only handles transformations that take a single argument, as shown in these lines of code.
You would have to apply your custom transformation separately, or wrap the arguments in a tuple and unpack it inside your transformation.
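For example (a minimal sketch; InvertDistortion and the tuple convention are illustrative, not your actual code), the two-input step can be wrapped so that it accepts a single (img, mask) tuple and unpacks it internally:

```python
from torchvision import transforms

class InvertDistortion:
    """Illustrative transform that needs both the image and a mask."""
    def __call__(self, sample):
        img, mask = sample      # unpack the (img, mask) tuple inside the transform
        # ... undo the distortion on img using mask ...
        return img              # later transforms then only see the image

transform4 = transforms.Compose([
    InvertDistortion(),         # first step receives the tuple
    transforms.ToTensor(),      # standard single-argument transforms follow
])

# in the dataset's __getitem__, pass one tuple instead of two arguments:
# img = self.transform4((img, mask))
```

Alternatively, call the two-argument transform directly in __getitem__ and keep Compose only for the single-argument steps.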

Thanks for your response. I’ve modified the code slightly so that it works, and the problem I now face is that the data loader reading from these datasets is far too slow. I’ve elaborated a little more here: GAN Training takes too long!

Please advise.

Regards