Same Random Transform across input/target

Is there a clear resource somewhere on how to apply the same random transforms to both the input and target for semantic segmentation problems? I’ve seen that it’s been discussed by some of the devs but I can’t find documentation on what my code should look like to do it.

In my case, I need to apply some transforms to only one of the two (e.g. grayscale), but also the same RandomCrop/RandomHorizontalFlip etc. to both the input and target.

I’ve done this before. It’s not as clean as the torchvision transforms API, but you can write your own dataset class and manually apply the random transforms in __getitem__, sampling the random parameters once and applying them to both the input and the target.


This might be the simplest way. I will have a look, Simon, thanks.