How can I perform an identical transform on both image and target?
For example, in semantic segmentation and edge detection, where the input image and the ground-truth target are both 2D images, one must apply the same transform to both the input and the target.
PyTorch has an excellent tutorial on data loading. In that tutorial, the author shows how to apply transforms to both the data and the target. You can try to mimic that approach.
The question hasn’t been answered. The provided references don’t show the recommended way to apply identical transforms to both the input image and the segmentation label…
As an alternative to the functions from the tutorial, you could use torchvision’s functional API.
Here is a small example for an image and the corresponding mask image:
import random

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms
import torchvision.transforms.functional as TF


class MyDataset(Dataset):
    def __init__(self, image_paths, target_paths, train=True):
        self.image_paths = image_paths
        self.target_paths = target_paths

    def transform(self, image, mask):
        # Resize
        resize = transforms.Resize(size=(520, 520))
        image = resize(image)
        mask = resize(mask)

        # Random crop: sample the crop parameters once, apply them to both
        i, j, h, w = transforms.RandomCrop.get_params(
            image, output_size=(512, 512))
        image = TF.crop(image, i, j, h, w)
        mask = TF.crop(mask, i, j, h, w)

        # Random horizontal flipping
        if random.random() > 0.5:
            image = TF.hflip(image)
            mask = TF.hflip(mask)

        # Random vertical flipping
        if random.random() > 0.5:
            image = TF.vflip(image)
            mask = TF.vflip(mask)

        # Transform to tensor
        image = TF.to_tensor(image)
        mask = TF.to_tensor(mask)
        return image, mask

    def __getitem__(self, index):
        image = Image.open(self.image_paths[index])
        mask = Image.open(self.target_paths[index])
        x, y = self.transform(image, mask)
        return x, y

    def __len__(self):
        return len(self.image_paths)
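For completeness, a hypothetical usage sketch (the file names below are placeholders; substitute your own path lists):

from torch.utils.data import DataLoader

# Placeholder paths; replace with your actual file lists
dataset = MyDataset(image_paths=['img_0.png'], target_paths=['mask_0.png'])
loader = DataLoader(dataset, batch_size=4, shuffle=True)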
Assuming both the input and the ground truth are images: if we concatenate the input and the GT along the channel axis and then pass the concatenated image through, say, torchvision.transforms.RandomHorizontalFlip(), that would ensure the GT is flipped whenever the corresponding input is flipped. I am not sure whether it will work in practice, since I have not tried it, but theoretically it makes sense to me.
The current transformations work on PIL.Images, so your concatenated image might not be recognized as a valid image. Besides that, it seems to be a good idea.
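If you work with tensors instead of PIL.Images, the concatenation idea can work, assuming a torchvision version (0.8+) whose transforms accept tensor inputs; a minimal sketch:

import torch
from torchvision import transforms

# Toy data: a 3-channel image and a 1-channel mask of the same spatial size
image = torch.rand(3, 512, 512)
mask = torch.randint(0, 2, (1, 512, 512)).float()

flip = transforms.RandomHorizontalFlip(p=0.5)
stacked = torch.cat([image, mask], dim=0)  # 4xHxW
stacked = flip(stacked)                    # one flip decision for all channels
image, mask = stacked[:3], stacked[3:]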
Thanks for the clarification. I implemented the random horizontal flip by generating a random number and, if it is greater than a threshold, applying a normal horizontal flip to both the input and the GT.
Just to add to this thread: the linked PyTorch tutorial on data loading is kind of confusing. The author uses both from skimage import io, transform and from torchvision import transforms, utils.
For transform, the author uses the resize() function and wraps it in a custom Rescale class. For transforms, the author uses transforms.Compose to organize two transformations. But they come from two different modules!
To add to the confusion, torchvision.transforms also has its own Resize() transform, which has essentially the same name as resize() in the skimage module.
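Aliasing the imports is one way to keep the two unrelated APIs apart; a small illustrative sketch:

import numpy as np
from skimage import transform as sk_tf        # functions operating on NumPy arrays
from torchvision import transforms as tv_tf   # classes returning callables for PIL Images

arr = np.zeros((100, 100))
arr_resized = sk_tf.resize(arr, (520, 520))   # skimage: a plain function call
tv_resize = tv_tf.Resize((520, 520))          # torchvision: build the transform, then call it on an image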
torchvision.transforms often relies on PIL as the underlying library, so you would need to transform each slice separately. (At least I’m not aware of PIL methods that work on volumetric data.)
That being said, it might be faster to write the transformations, e.g. random cropping, manually and apply them directly to the tensor data.
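For instance, a manual random crop applied directly to tensors might look like this (a sketch only; random_crop_pair is a made-up helper and assumes the spatial dims are the last two):

import torch

def random_crop_pair(image, mask, size=(512, 512)):
    # Sample one crop window and slice both tensors with it;
    # assumes the last two dims are spatial and at least `size` large
    th, tw = size
    h, w = image.shape[-2:]
    i = torch.randint(0, h - th + 1, (1,)).item()
    j = torch.randint(0, w - tw + 1, (1,)).item()
    return image[..., i:i + th, j:j + tw], mask[..., i:i + th, j:j + tw]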
# Random resized crop applied identically to a list of images
import torchvision.transforms.functional as tvF
from torchvision import transforms


class RRC(transforms.RandomResizedCrop):
    def __call__(self, imgs):
        """
        Args:
            imgs (list of PIL Image): Images to be cropped and resized
                with the same parameters.
        Returns:
            list of PIL Image: Randomly cropped and resized images.
        """
        for im in range(1, len(imgs)):
            assert imgs[im].size == imgs[0].size
        # Sample the crop parameters once from the first image ...
        i, j, h, w = self.get_params(imgs[0], self.scale, self.ratio)
        # ... and apply them to every image in the list
        for imgCount in range(len(imgs)):
            imgs[imgCount] = tvF.resized_crop(imgs[imgCount], i, j, h, w,
                                              self.size, self.interpolation)
        return imgs
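A hypothetical usage of the class above, assuming two PIL Images of the same size (the file names are placeholders):

from PIL import Image

rrc = RRC(size=224)  # inherits RandomResizedCrop's constructor
img, target = Image.open('img.png'), Image.open('mask.png')
img, target = rrc([img, target])  # the same crop window is applied to both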