How can I apply the same transformation to both images in the CycleGAN implementation of Zhu et al.?
I want the random crops to cover the same patches in both images, instead of each image getting an independent random crop, while still using an aligned dataset.
This is the snippet from `unaligned_dataset.py` where I need to change the `A = self.transform(A_img)` / `B = self.transform(B_img)` part, but nothing has worked so far:
```python
import random

from PIL import Image

# Method of UnalignedDataset in unaligned_dataset.py
def __getitem__(self, index):
    A_path = self.A_paths[index % self.A_size]
    if self.opt.serial_batches:
        index_B = index % self.B_size
    else:
        index_B = random.randint(0, self.B_size - 1)
    B_path = self.B_paths[index_B]
    A_img = Image.open(A_path).convert('RGB')
    B_img = Image.open(B_path).convert('RGB')
    # Each call draws its own random parameters,
    # so A and B end up with different crops/flips:
    A = self.transform(A_img)
    B = self.transform(B_img)
```
I guess I have to change how A and B are computed so that they undergo the same transformation. Is that right?
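To make the question concrete, here is a minimal sketch of what I have in mind: instead of calling `self.transform` twice, draw the random crop and flip parameters once and apply them to both images via torchvision's functional API. It assumes the usual CycleGAN pipeline of resize, random crop, random horizontal flip, and normalize; the `paired_transform` helper and the `load_size`/`crop_size` parameters are my own names, not from the repo.

```python
import random

import torchvision.transforms as transforms
import torchvision.transforms.functional as TF

def paired_transform(A_img, B_img, load_size=286, crop_size=256):
    """Apply identical resize/crop/flip/normalize to two PIL images.

    load_size and crop_size stand in for the corresponding options
    (e.g. self.opt.load_size / self.opt.crop_size).
    """
    # Deterministic resize: identical for both images by construction.
    A_img = TF.resize(A_img, [load_size, load_size])
    B_img = TF.resize(B_img, [load_size, load_size])

    # Draw the crop parameters once, then reuse them for both images.
    i, j, h, w = transforms.RandomCrop.get_params(
        A_img, output_size=(crop_size, crop_size))
    A_img = TF.crop(A_img, i, j, h, w)
    B_img = TF.crop(B_img, i, j, h, w)

    # A single coin flip decides whether both images are mirrored.
    if random.random() < 0.5:
        A_img = TF.hflip(A_img)
        B_img = TF.hflip(B_img)

    # To tensor and normalize to [-1, 1], as the repo's transform does.
    A = TF.normalize(TF.to_tensor(A_img), (0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    B = TF.normalize(TF.to_tensor(B_img), (0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    return A, B
```

With something like this, the two `self.transform` calls in `__getitem__` would become `A, B = paired_transform(A_img, B_img)`. Is this the intended way to do it, or is there a cleaner hook in the repo's transform setup?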