I use this to apply transformations to input images through the __getitem__ method. Each sample returned by __getitem__ contains three images.
The first is the target image, which I generate dynamically from the input; the input itself is the ground truth. The third image is generated by applying some modifications to the target image.
Now the question is: I do not want to apply RandomNoise and Normalize to the second and third images, but will the same random transforms (Crop, Flip, etc.) still be applied to the ground truth?
I have read some issues on GitHub, and it seems the random seed gets reset somehow.
Here is what I have done:
The problem was solved by feeding the same seed value before applying each Compose of transforms.
import random

import numpy as np
import torch
from PIL import Image

img = Image.open(self.data[index]).convert('RGB')
target = Image.open(self.data_labels[index])

seed = np.random.randint(2147483647)  # make a seed with numpy generator
random.seed(seed)  # apply this seed to img transforms
if self.transform is not None:
    img = self.transform(img)

random.seed(seed)  # apply the same seed to target transforms
if self.target_transform is not None:
    target = self.target_transform(target)

target = torch.ByteTensor(np.array(target))
return img, target
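Re-seeding works because the random transforms draw their parameters from Python's global random module (this holds for older torchvision versions; newer releases draw from torch's generator, where torch.manual_seed(seed) would be needed instead). A minimal stdlib-only sketch of the idea, where the hypothetical random_crop_params helper stands in for a transform's internal parameter sampling:

```python
import random

def random_crop_params(img_size, crop_size):
    # Stand-in for a random transform's internal sampling: picks a
    # top-left corner uniformly, similar to what RandomCrop does.
    w, h = img_size
    cw, ch = crop_size
    left = random.randint(0, w - cw)
    top = random.randint(0, h - ch)
    return left, top

seed = random.randint(0, 2147483647)  # one seed per sample

random.seed(seed)                     # seed before the input pipeline
img_params = random_crop_params((256, 256), (224, 224))

random.seed(seed)                     # re-seed before the target pipeline
target_params = random_crop_params((256, 256), (224, 224))

# Both pipelines drew identical crop coordinates.
print(img_params == target_params)    # True
```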
By the way, it works completely fine on a subset of transforms.
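As for keeping RandomNoise and Normalize off the other images: one pattern (a sketch under the same global-seed assumption; the transform callables here are hypothetical stand-ins, not torchvision ops) is to put the shared random transforms in both Compose pipelines, seed before each, and append the image-only transforms to the input pipeline alone:

```python
import random

def random_flip(rec):
    # Shared random transform (like RandomHorizontalFlip): consumes one
    # draw from the global random module, so seeding controls it.
    return dict(rec, flipped=random.random() < 0.5)

def normalize(rec):
    # Image-only transform (like Normalize): deterministic, and it must
    # NOT appear in the target pipeline.
    return dict(rec, normalized=True)

def compose(*fns):
    # Minimal stand-in for transforms.Compose.
    def run(rec):
        for fn in fns:
            rec = fn(rec)
        return rec
    return run

transform = compose(random_flip, normalize)  # input: shared + image-only
target_transform = compose(random_flip)      # target: shared only

seed = random.randint(0, 2147483647)

random.seed(seed)
img = transform({"name": "img"})
random.seed(seed)
target = target_transform({"name": "target"})

# Same flip decision, but only the input was normalized.
print(img["flipped"] == target["flipped"])   # True
print("normalized" in target)                # False
```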