I want to apply the following transformation to the image dataset.
- N(w, h) = I(w, h) − G(w, h), (1) where N is the normalized image, I is the original image, and G is the Gaussian-blurred image, using a 65×65 kernel with zero mean and standard deviation 10.
The code for the Gaussian blur is:
image = cv2.GaussianBlur(img, (65, 65), 10)
new_image = img - image
I am really not sure how to convert this into a lambda function for use in a generic transform. Any other advice on how to apply the above preprocessing step is also welcome.
This code should work:
def gaussian_blur(img):
    image = np.array(img)
    image_blur = cv2.GaussianBlur(image, (65, 65), 10)
    new_image = image - image_blur
    return new_image

x = torch.randn(3, 224, 224)
img = TF.to_pil_image(x)
transform = transforms.Lambda(gaussian_blur)
img = transform(img)
Thanks, it worked. I am observing a peculiar behavior, don’t know whether a gap in my knowledge or not.
data_transforms = transforms.Compose([transforms.RandomCrop(512,512),
When I print the shape of the images, it is not coming out as 512×512. Does RandomCrop not crop to the required size, or am I doing something wrong?
for images, labels in final_train_loader:
print('Image batch dimensions:', images.shape)
Image batch dimensions: torch.Size([3, 3, 584, 565])
Image label dimensions: torch.Size()
Make sure this code is really called in your Dataset, as it should throw an error. Image transformations like RandomHorizontalFlip are only defined for PIL images. Thus you would have to use ToTensor() and Normalize() as the last transformations. I'm not sure if your Lambda(gaussian_blur) transform works on tensors or PIL images.
Yes, you were right. I was calling some other dataset, and there was also a typo in my implementation.
Lambda(gaussian_blur) works on PIL images, but we have to add a conversion from numpy array to PIL image using im = Image.fromarray(new_image) inside the gaussian_blur function.
Thank you for this code!