I use this dataset class to apply transformations to input images through the __getitem__ method. Each sample returned by __getitem__ contains three images.
The first is the target image, which I generate dynamically from the input, so the input itself is the ground truth. The third image is also generated by applying some modifications to the target image.

Now the question is: I do not want to apply RandomNoise and Normalize to the second and third images, but will the same random transforms (Crop, Flip, etc.) still be applied to the ground truth?

I have read some issues on GitHub, and it seems the random seed somehow resets.
Here is what I have done:

Torchvision applies transformations frame-wise. Thus, different random transformations will be applied to the ground truth and the images. It's not a matter of the seed but rather of the way torchvision is designed. https://github.com/JuanFMontesinos/flerken/blob/master/flerken/dataloaders/transforms/transforms.py
This file reimplements the transforms so that you can apply the same transformation to a list of frames.
Regarding subsets: try to put the common part in one transform and the specific parts in separate transforms.
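The split between a common part (applied to every frame with one random draw) and a specific part (applied to the input only) can be sketched with plain Python. The nested-list "images", the random crop, and all function names below are illustrative stand-ins, not torchvision API:

```python
import random

def random_crop_params(h, w, out_h, out_w):
    # draw the crop offsets ONCE, so every frame gets the identical crop
    top = random.randint(0, h - out_h)
    left = random.randint(0, w - out_w)
    return top, left

def crop(frame, top, left, out_h, out_w):
    return [row[left:left + out_w] for row in frame[top:top + out_h]]

def shared_transform(frames, out_h=2, out_w=2):
    # "common part": one parameter draw, reused for input, target, third image
    h, w = len(frames[0]), len(frames[0][0])
    top, left = random_crop_params(h, w, out_h, out_w)
    return [crop(f, top, left, out_h, out_w) for f in frames]

def input_only_transform(frame, noise=0.1):
    # "specific part": e.g. RandomNoise/Normalize, applied to the input only
    return [[v + random.uniform(-noise, noise) for v in row] for row in frame]

# toy 4x4 "images"; gt is a scaled copy so alignment is easy to check
img = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
gt = [[v * 10.0 for v in row] for row in img]

img_crop, gt_crop = shared_transform([img, gt])
img_noisy = input_only_transform(img_crop)  # gt_crop stays untouched
```

Because the crop offsets are sampled once, `img_crop` and `gt_crop` stay pixel-aligned, while the noise touches only the input. The same structure is what the linked transforms.py file implements for real PIL/tensor frames.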

The problem was solved by feeding the same seed value before applying each Compose of transforms:

import random
import numpy as np
import torch
from PIL import Image

def __getitem__(self, index):
    img = Image.open(self.data[index]).convert('RGB')
    target = Image.open(self.data_labels[index])
    seed = np.random.randint(2147483647)  # make a seed with numpy generator
    random.seed(seed)  # apply this seed to the img transforms
    if self.transform is not None:
        img = self.transform(img)
    random.seed(seed)  # apply the same seed to the target transforms
    if self.target_transform is not None:
        target = self.target_transform(target)
    target = torch.ByteTensor(np.array(target))
    return img, target

By the way, it works completely fine on a subset of transforms.
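To see why re-seeding makes the two pipelines agree, here is a minimal stdlib sketch; `random_params` is a hypothetical stand-in for a random transform drawing its parameters:

```python
import random

def random_params():
    # stand-in for a transform sampling its parameters
    # (e.g. crop position and flip decision)
    top = random.randint(0, 100)
    left = random.randint(0, 100)
    flip = random.random() < 0.5
    return top, left, flip

seed = random.randint(0, 2147483647)

random.seed(seed)                 # seed before the img transforms
img_params = random_params()

random.seed(seed)                 # re-seed before the target transforms
target_params = random_params()   # identical draws -> aligned transforms
```

Note that this trick only helps when the transforms draw their randomness from Python's random module; transforms that use torch's global RNG instead would need torch.manual_seed(seed) re-set in the same way.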