Use a subset of Composed Transforms with the same random seed

Hi,
I have a custom_transforms variable defined as follows:

from torchvision.transforms import (Compose, RandomResizedCrop, RandomRotation,
                                    RandomHorizontalFlip, ToTensor, Normalize)

custom_transforms = Compose([
    RandomResizedCrop(size=224, scale=(0.8, 1.2)),
    RandomRotation(degrees=(-30, 30)),
    RandomHorizontalFlip(p=0.5),
    ToTensor(),
    Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    RandomNoise(p=0.5, mean=0, std=0.1)])  # RandomNoise is not a torchvision transform (custom)

I use this to apply transformations to the input images in the __getitem__ method. Each sample returned by __getitem__ contains 3 images.
The first image is the target image, which I generate dynamically from the input; my input is the ground truth. Finally, the third image is also generated by applying some modifications to the target image.

Now the question is: I do not want to apply RandomNoise and Normalize to the second and third images, but will the same random transforms (Crop, Flip, etc.) still be applied to the ground truth?

I have read some issues on GitHub and it seems the random seed resets somehow.
Here is what I have done:

transform_gt = Compose(custom_transforms.transforms[:-1])
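(Note that this slice only drops RandomNoise, the last entry, so Normalize would still run on the ground truth. A sketch of filtering by type instead, assuming RandomNoise is my own custom class:)

transform_gt = Compose([
    t for t in custom_transforms.transforms
    if not isinstance(t, (Normalize, RandomNoise))
])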

Thanks for any advice.

Torchvision applies transformations frame-wise, so different random transformations will be applied to the ground truth and the images. It's not a matter of the seed but rather of the way torchvision is designed. https://github.com/JuanFMontesinos/flerken/blob/master/flerken/dataloaders/transforms/transforms.py
This file reimplements the transforms so that you can apply the same transformation to a list of frames.
Regarding subsets, try to put the common part in one transform and the specific parts in other transforms, as in the sketch below.
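A rough sketch of that split, with hypothetical names: the shared geometric transforms go in one Compose, and the image-only steps in a second Compose that is applied only to the network input.

shared_transforms = Compose([
    RandomResizedCrop(size=224, scale=(0.8, 1.2)),
    RandomRotation(degrees=(-30, 30)),
    RandomHorizontalFlip(p=0.5),
    ToTensor(),
])

input_only_transforms = Compose([
    Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    RandomNoise(p=0.5, mean=0, std=0.1),  # your custom transform
])

Keep in mind that with stock torchvision, calling shared_transforms separately on each image still draws new random parameters every time, which is exactly why a frame-wise reimplementation (or a shared seed) is needed for the common part.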


I tried this code using your transforms.py:

from PIL import Image
# Compose, RandomRotation, RandomHorizontalFlip are imported from your transforms.py

custom_transforms = Compose([
    RandomRotation(degrees=(-30, 30)),
    RandomHorizontalFlip(p=0.5),
])

i1 = Image.open('dataset/sub_test/data/Places365_val_00000002.jpg')
i2 = Image.open('dataset/sub_test/data/Places365_val_00000003.jpg')

it12 = custom_transforms([i1, i2])

But I got these errors:

 File "C:\Users\NIkan\Desktop\Deep Halftoning\Github\Deep-Halftoning\lib\transforms.py", line 69, in __call__
    return self.apply_sequence(inpt)
  File "C:\Users\NIkan\Desktop\Deep Halftoning\Github\Deep-Halftoning\lib\transforms.py", line 79, in apply_sequence
    output = list(map(self.apply_img, seq))
  File "C:\Users\NIkan\Desktop\Deep Halftoning\Github\Deep-Halftoning\lib\transforms.py", line 75, in apply_img
    img = t(img)
  File "C:\Users\NIkan\Desktop\Deep Halftoning\Github\Deep-Halftoning\lib\transforms.py", line 990, in __call__
    self.get_params()
TypeError: get_params() missing 1 required positional argument: 'degrees'

It seems it calls self.get_params() without passing the provided degrees.
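For reference, in torchvision itself RandomRotation.get_params is a static method that takes the degrees range as an argument, so presumably the __call__ in that file just needs to pass it through, e.g.:

from torchvision.transforms import RandomRotation

# torchvision's signature: get_params(degrees) -> random angle within that range
angle = RandomRotation.get_params([-30, 30])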

The problem was solved by feeding the same seed value before applying each Compose of transforms.

import random

import numpy as np
import torch
from PIL import Image

def __getitem__(self, index):
    img = Image.open(self.data[index]).convert('RGB')
    target = Image.open(self.data_labels[index])

    seed = np.random.randint(2147483647)  # make a seed with numpy generator
    random.seed(seed)  # apply this seed to img transforms
    if self.transform is not None:
        img = self.transform(img)

    random.seed(seed)  # apply this seed to target transforms
    if self.target_transform is not None:
        target = self.target_transform(target)

    target = torch.ByteTensor(np.array(target))

    return img, target

By the way, it works completely fine on a subset of the transforms.
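For completeness, here is a sketch combining the two ideas: a shared Compose for the geometric transforms, re-seeded before each call, plus an input-only Compose for Normalize/RandomNoise. The attribute names self.shared_transform and self.input_only_transform are hypothetical, and the torch.manual_seed call is there in case your torchvision version draws its random parameters from torch's RNG rather than Python's random module.

import random

import numpy as np
import torch
from PIL import Image

def __getitem__(self, index):
    img = Image.open(self.data[index]).convert('RGB')
    target = Image.open(self.data_labels[index])

    seed = np.random.randint(2147483647)

    # Re-seed before each call so img and target get the same crop/rotation/flip.
    random.seed(seed)
    torch.manual_seed(seed)
    img = self.shared_transform(img)       # hypothetical: geometric transforms + ToTensor

    random.seed(seed)
    torch.manual_seed(seed)
    target = self.shared_transform(target)

    # Normalize / RandomNoise only touch the network input.
    img = self.input_only_transform(img)   # hypothetical: Normalize + RandomNoise

    target = torch.ByteTensor(np.array(target))
    return img, target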


I just rewrote everything, so there may be minor issues.

Oh, sorry.
Thanks for your help.