How are you all dealing with the noise pushing values above 1 or below 0? Isn't this a problem?
I have this:
import torch
import torch.nn as nn

# dev is my global device, defined elsewhere in the script
dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class noiseLayer_normal(nn.Module):
    def __init__(self, noise_percentage):
        super(noiseLayer_normal, self).__init__()
        self.n_scale = noise_percentage

    def forward(self, x):
        if self.training:
            # additive Gaussian noise, std 0.2, scaled by noise_percentage
            noise_tensor = torch.normal(0, 0.2, size=x.size()).to(dev)
            x = x + noise_tensor * self.n_scale
            # clip anything the noise pushed outside [0, 1]
            mask_high = (x > 1.0)
            mask_neg = (x < 0.0)
            x[mask_high] = 1
            x[mask_neg] = 0
        return x
But I suspect all this masking is slowing down my training. Why don't you include this step?
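Would something like this be faster? A sketch of the same layer using `torch.clamp` instead of boolean masks (the class name and the `std=0.2` default are mine; I also swapped the global `dev` for `torch.randn_like`, which puts the noise on the same device and dtype as the input):

```python
import torch
import torch.nn as nn

class NoiseLayerNormal(nn.Module):
    """Adds scaled Gaussian noise during training, then clips to [0, 1]."""
    def __init__(self, noise_percentage, std=0.2):
        super().__init__()
        self.n_scale = noise_percentage
        self.std = std

    def forward(self, x):
        if self.training:
            # randn_like samples N(0, 1) on x's device/dtype; scale to std
            noise = torch.randn_like(x) * self.std
            # clamp replaces the two boolean masks in a single fused op
            x = torch.clamp(x + noise * self.n_scale, 0.0, 1.0)
        return x
```

In eval mode the layer is a no-op, same as before.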