How to use Generic transforms for the following preprocessing step

I want to apply the following transformation to the image dataset.

  1. N(w, h) = I(w, h) − G(w, h)   (1)

     where N is the normalized image, I is the original image, and G is the Gaussian-blurred image, computed with a 65×65 kernel with zero mean and standard deviation 10.

The code for the Gaussian blur is:

def gaussian_blur(img):
    image = cv2.GaussianBlur(img, (65, 65), 10)
    new_image = img - image
    return new_image

I am really not sure how to convert this into a lambda function so I can use it in a generic transform. Any other advice on how to apply the above preprocessing step is also welcome.

This code should work:

import cv2
import numpy as np
import torch
from torchvision import transforms
import torchvision.transforms.functional as TF

def gaussian_blur(img):
    # convert the PIL image to a numpy array for OpenCV
    image = np.array(img)
    image_blur = cv2.GaussianBlur(image, (65, 65), 10)
    # subtract the blurred image from the original, as in Eq. (1)
    new_image = image - image_blur
    return new_image

x = torch.randn(3, 224, 224)
img = TF.to_pil_image(x)
transform = transforms.Lambda(gaussian_blur)
img = transform(img)

Thanks, it worked. I am now observing some peculiar behavior; I don’t know whether it is a gap in my knowledge or not.

data_transforms = transforms.Compose([transforms.RandomCrop(512, 512),
                                      transforms.ToTensor(),
                                      transforms.Lambda(gaussian_blur),
                                      transforms.Normalize(mean=train_mean, std=train_std),
                                      transforms.RandomRotation([+90, +180]),
                                      transforms.RandomRotation([+180, +270]),
                                      transforms.RandomHorizontalFlip(),
                                      ])

When I print the shape of the images, it does not come out as 512×512. Does RandomCrop not take the required size and then crop to it, or am I doing something wrong?

for images, labels in final_train_loader:
    print('Image batch dimensions:', images.shape)
    print('Image label dimensions:', labels.shape)

Image batch dimensions: torch.Size([3, 3, 584, 565])
Image label dimensions: torch.Size([3])

Make sure this code is really being called in your Dataset, because it should actually throw an error: image transformations like RandomRotation and RandomHorizontalFlip are only defined for PIL Images.

Thus you would have to use ToTensor() and Normalize() as the last transformations.
I’m not sure if your Lambda(gaussian_blur) transform works on tensors or PIL.Images.
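
For example, a reordered pipeline roughly like the following would keep the PIL-only transformations ahead of the tensor ones (just a sketch, assuming Lambda(gaussian_blur) takes a PIL image and returns something ToTensor() accepts, such as a numpy array or another PIL image):

data_transforms = transforms.Compose([transforms.RandomCrop((512, 512)),
                                      transforms.RandomRotation([90, 180]),
                                      transforms.RandomHorizontalFlip(),
                                      transforms.Lambda(gaussian_blur),
                                      # tensor transformations last
                                      transforms.ToTensor(),
                                      transforms.Normalize(mean=train_mean, std=train_std),
                                      ])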

Yes, you were right. I was calling a different dataset, and there was also a typo in my implementation.
Lambda(gaussian_blur) works on a PIL image, but we have to convert the numpy array back to a PIL image with im = Image.fromarray(new_image) inside the gaussian_blur function.
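
Based on that, the updated function might look roughly like this (a sketch, not the exact code used; the widening to int16 and the clipping back to [0, 255] are added assumptions to keep the result a valid 8-bit image):

from PIL import Image
import cv2
import numpy as np

def gaussian_blur(img):
    # PIL image -> numpy array, widened so the subtraction cannot wrap around
    image = np.array(img).astype(np.int16)
    # 65x65 Gaussian kernel with standard deviation 10
    image_blur = cv2.GaussianBlur(image, (65, 65), 10)
    # N = I - G from Eq. (1), clipped back to the 8-bit range (assumption)
    new_image = np.clip(image - image_blur, 0, 255).astype(np.uint8)
    # convert back to a PIL image so the transform can sit before other PIL-based transforms
    return Image.fromarray(new_image)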

Thank you for this code!