I want to apply the following transformation to my image dataset:

N(w, h) = I(w, h) − G(w, h),  (1)

where N is the normalized image, I is the original image, and G is the Gaussian-blurred image with a 65×65 kernel, zero mean, and standard deviation 10.
I am really not sure how to convert this into a lambda function to use in a generic transform. Any other advice on how to apply the above preprocessing step is also welcome.
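One way to sketch the operation itself, before worrying about the transform pipeline: the subtraction N = I − G can be written as a plain function over a NumPy array. This is a minimal sketch using a separable NumPy convolution with the stated 65×65 kernel and sigma 10 (OpenCV's `cv2.GaussianBlur` would compute the same blur); the function and parameter names are my own.

```python
import numpy as np

def gaussian_blur_normalize(image, ksize=65, sigma=10.0):
    """Compute N(w, h) = I(w, h) - G(w, h), where G is a Gaussian blur of I."""
    # Build a normalized 1-D Gaussian kernel; a 2-D Gaussian is separable,
    # so filtering rows then columns is equivalent to one 65x65 convolution.
    ax = np.arange(ksize) - ksize // 2
    kernel = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    kernel /= kernel.sum()

    img = np.asarray(image, dtype=np.float32)
    # Convolve along each axis with 'same' (zero) padding.
    blurred = np.apply_along_axis(
        lambda line: np.convolve(line, kernel, mode='same'), 0, img)
    blurred = np.apply_along_axis(
        lambda line: np.convolve(line, kernel, mode='same'), 1, blurred)
    return img - blurred
```

A function like this can then be wrapped with `transforms.Lambda(gaussian_blur_normalize)`; note that the result is a float array, so you would need to convert it back to a PIL image or tensor before the next transform in the pipeline.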
When I print the shape of the images, it is not coming out as 512×512. Doesn't RandomCrop take the required size and crop the image to it, or am I doing something wrong?
for images, labels in final_train_loader:
    print('Image batch dimensions:', images.shape)
    print('Image label dimensions:', labels.shape)
    break

Image batch dimensions: torch.Size([3, 3, 584, 565])
Image label dimensions: torch.Size([3])
Make sure this code is actually being called in your Dataset, as it should otherwise throw an error.
Image transformations like RandomRotation and RandomHorizontalFlip are only defined for PIL.Images.
Thus you would have to use ToTensor() and Normalize() as the last transformations.
I’m not sure if your Lambda(gaussian_blur) transform works on tensors or PIL.Images.
Yes, you were right. I was calling a different dataset, and there was also a typo in my implementation. Lambda(gaussian_blur) works on a PIL image, but we have to convert the NumPy array back to a PIL image using im = Image.fromarray(new_image) inside the gaussian_blur function.
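The fix described above might look roughly like this. This is a sketch, not the poster's actual code: it uses Pillow's `ImageFilter.GaussianBlur` for the blur, and clipping to [0, 255] before `Image.fromarray` is just one choice for mapping the (possibly negative) difference back to uint8.

```python
import numpy as np
from PIL import Image, ImageFilter

def gaussian_blur(img):
    """Lambda transform: subtract a Gaussian-blurred copy of the image and
    return a PIL image, so later PIL-based transforms keep working."""
    original = np.asarray(img, dtype=np.float32)
    blurred = np.asarray(
        img.filter(ImageFilter.GaussianBlur(radius=10)), dtype=np.float32)
    new_image = original - blurred
    # Image.fromarray expects uint8 for 'L'/'RGB' modes; clipping is one
    # simple way to handle the negative values in the difference image.
    new_image = np.clip(new_image, 0, 255).astype(np.uint8)
    return Image.fromarray(new_image)
```

Because the function returns a PIL image, `transforms.Lambda(gaussian_blur)` can sit anywhere before `ToTensor()` in the Compose pipeline.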