If you would like to add it randomly, you could specify a probability inside the transformation and pass this probability while instantiating it.
On the other hand, if you would like to apply it only for specific data indices, you might need to apply the noise inside the training loop and use the data index (i.e. additionally return the index from your Dataset).
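For the random case, a minimal sketch of such a transform might look like the following; the `p` argument is my own addition for illustration and not part of the transform discussed earlier:

```python
import torch

class AddGaussianNoise:
    def __init__(self, mean=0., std=1., p=0.5):
        self.mean = mean
        self.std = std
        self.p = p  # probability of applying the noise

    def __call__(self, tensor):
        # apply the noise only with probability p
        if torch.rand(1).item() < self.p:
            return tensor + torch.randn_like(tensor) * self.std + self.mean
        return tensor
```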
I have a question: I want to add noise to my original training dataset to get a more robust model. Is it better to add the noise after data normalization or before? My normalization is zero mean and unit variance.
After normalization, each value of the noise would have a different effect on the training, but before normalization the effect of the noise on the training would be the same, wouldn't it?
The effect would be the same, but I think it might be easier to define the noise relative to your samples if each data sample already has zero mean and unit variance.
Anyway, I don’t think it should make a difference if you define the noise using the mean of the unnormalized inputs and their stddev.
I want to create a 3-dimensional Gaussian with a defined size and standard deviation. My code in MATLAB is:
```
function G = Gaussian3D(size_array, sigma_array)
% Gaussian3D Creates a 3-D Gaussian kernel of specified
% width sigma_array=[sigma_x, sigma_y, sigma_z] with
% profile length of size_array=[size_x, size_y, size_z]
if all(sigma_array > 0) && length(size_array) == length(sigma_array)
    sigma_x = sigma_array(1);
    size_x  = size_array(1);
    sigma_y = sigma_array(2);
    size_y  = size_array(2);
    sigma_z = sigma_array(3);
    size_z  = size_array(3);
    % Make 1-D Gaussian kernels for each dimension
    Kx = fspecial('gaussian', [1 round(size_x)], sigma_x);
    Ky = fspecial('gaussian', [1 round(size_y)], sigma_y);
    Kz = fspecial('gaussian', [1 round(size_z)], sigma_z);
    % The Gaussian kernel is separable, so build the 3-D kernel
    % by convolving the 1-D kernels along the three dimensions
    G = convn(convn(Kx', Ky), reshape(Kz, 1, 1, []));
end
end
```
Would you please tell me what the equivalents of convn and fspecial are in PyTorch?
```
# x, y, z: coordinate grids of the kernel, e.g. created with torch.meshgrid
x = x - size_array[0] / 2 - 0.5
y = y - size_array[1] / 2 - 0.5
z = z - size_array[2] / 2 - 0.5
G = torch.exp(-(x**2 / (2 * sigma_array[0]**2)
                + y**2 / (2 * sigma_array[1]**2)
                + z**2 / (2 * sigma_array[2]**2)))
return G
```
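Regarding the convn part of the question: a rough PyTorch counterpart would be `torch.nn.functional.conv3d`, which could apply the kernel built above to a volume. This is just a sketch; `G`, `vol`, and `smooth_volume` are placeholder names of mine:

```python
import torch
import torch.nn.functional as F

def smooth_volume(vol, G):
    # vol: 3-D input volume [D, H, W]; G: 3-D Gaussian kernel [kx, ky, kz]
    # conv3d expects input as [N, C, D, H, W] and weight as [out_C, in_C, kD, kH, kW]
    x = vol[None, None]
    weight = G[None, None]
    # pad so the output keeps the input size (assumes odd kernel sizes);
    # conv3d is a cross-correlation, which is equivalent here since the Gaussian is symmetric
    padding = tuple(s // 2 for s in G.shape)
    return F.conv3d(x, weight, padding=padding)[0, 0]
```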
Many thanks for your reply. Sorry, I also need to find the local maxima in three dimensions. In MATLAB I use imregionalmax; my input is of size 12022080 and the output is a binary array of the same size as the input, which means wherever it is 1 there is a local maximum in the input.
What is the equivalent in PyTorch? I need the same kind of output: a 3-D binary mask that is 1 wherever there is a local maximum in the input.
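One common approximation is to compare each voxel against the maximum of its neighborhood using max pooling. A minimal sketch (the function name and the default 3×3×3 neighborhood are my own choices; note this differs from imregionalmax on plateaus, which MATLAB handles via connected regions):

```python
import torch
import torch.nn.functional as F

def regional_max_3d(volume, kernel_size=3):
    # volume: 3-D tensor [D, H, W]
    # add batch and channel dimensions expected by max_pool3d
    x = volume[None, None]
    # neighborhood maximum at every voxel
    pooled = F.max_pool3d(x, kernel_size, stride=1, padding=kernel_size // 2)
    # a voxel is a local maximum if it equals its neighborhood maximum
    return (x == pooled)[0, 0]
```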
Hi, I saw your solution and it helps a lot! Thank you so much! However, I am quite new to Python (starting from zero knowledge). Would you be able to explain what the function under __call__ does, and in general what the noise adjusts?
Oh, and also: will adjusting the mean and std affect the normalization of the image when we pass it into our DataLoader? Thank you!
AddGaussianNoise adds Gaussian noise using the specified mean and std to the input tensor during the preprocessing of the data. torch.randn creates a tensor filled with random numbers from the standard normal distribution (zero mean, unit variance), as described in the docs. In AddGaussianNoise.__call__, this noise tensor is multiplied by self.std and self.mean is added, to scale and shift the distribution.
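For reference, the transform being discussed could be implemented roughly like this (a sketch, not necessarily the exact code from the earlier post):

```python
import torch

class AddGaussianNoise(object):
    def __init__(self, mean=0., std=1.):
        self.mean = mean
        self.std = std

    def __call__(self, tensor):
        # scale the standard-normal noise by std, shift it by mean,
        # and add it to the input tensor
        return tensor + torch.randn(tensor.size()) * self.std + self.mean

    def __repr__(self):
        return self.__class__.__name__ + '(mean={0}, std={1})'.format(self.mean, self.std)
```

It can then be composed with the other preprocessing transforms, e.g. via `transforms.Compose`, after ToTensor (and Normalize, if used).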