I think I have a simple solution:
If the images are concatenated, the transformations are applied to all of them identically:
import torch
import torchvision.transforms as T
# Create two fake images (identical for test purposes):
image = torch.randn((3, 128, 128))
target = image.clone()
# This is the trick (concatenate the images):
both_images = torch.cat((image.unsqueeze(0), target.unsqueeze(0)), 0)
# Apply the transformations to both images simultaneously:
transformed_images = T.RandomRotation(180)(both_images)
# Get the transformed images:
image_trans = transformed_images[0]
target_trans = transformed_images[1]
# Compare the transformed images:
torch.all(image_trans == target_trans).item()
>> True
Could you help me apply image normalization in the same way as your example?
It looks like transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) cannot be applied this way, since it takes the mean and std at construction rather than the image itself. I mean, we cannot call it as
image = transforms.Normalize(image, mean, std)?
(I need something like: norm_image = Normalize(image, mean, std).)
transforms.Normalize(mean, std) creates a transformation object which you can then call directly on a tensor (similar to any module you create, e.g. nn.Linear):
If you don’t want to create an object first and apply it later, but instead want to apply the transformation directly, you can use the functional API (similar to the nn.functional API, e.g. via F.linear):