I’m trying to run F.conv2d with non-negative inputs and weights, but I’m getting negative values in the output.
z1 = F.conv2d(input * input, sigma_square, None, self.stride, self.padding, self.dilation, self.groups)
The input pixels are x*x, and the weights are the variance, which is non-negative by definition (I also made sure it’s non-negative here).
My guess is that it’s a numerical error, because the values are very close to zero (10e-5), but I’m not sure.
Any idea how I can solve it? (I compute sqrt(z1) after the conv2d, so z1 can’t be allowed to be negative.)
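A minimal self-contained sketch of the computation (the shapes and values here are made up, not the real model’s), with a clamp before the sqrt as one possible guard against tiny negative values from floating-point accumulation:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical shapes; the real input and sigma_square come from the model.
x = torch.rand(1, 3, 8, 8)             # non-negative input
sigma_square = torch.rand(4, 3, 3, 3)  # non-negative weights (variances)

z1 = F.conv2d(x * x, sigma_square, None, 1, 1)  # stride=1, padding=1

# Guard against small negative values caused by floating-point
# accumulation error before taking the square root.
out = torch.sqrt(z1.clamp(min=0))
```

Clamping hides the symptom rather than fixing the underlying algorithm choice, but it guarantees sqrt never sees a negative input.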
Could you try to create random tensors, which would reproduce the issue, so that we could take a look at it, please?
I reproduced the case:
GitHub - guyber9/repo
You can find in the link 2 files:
my_tensors.pt - which includes ‘x’ and ‘w’ tensors for the convolution
main.py - reads the tensor file and performs the conv2d.
Just run main.py and you’ll see that the input is all positive but the conv2d result includes negative values.
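For reference, a sketch of what main.py presumably does (the torch.load call and tensor keys are assumptions based on the description above; stand-in random tensors are used here so the snippet runs on its own):

```python
import torch
import torch.nn.functional as F

# In the repo, presumably something like:
#   tensors = torch.load('my_tensors.pt')
#   x, w = tensors['x'], tensors['w']
# Stand-in non-negative tensors so this sketch is self-contained:
x = torch.rand(1, 3, 16, 16)
w = torch.rand(8, 3, 3, 3)

out = F.conv2d(x, w)
print('x is negative:     ', bool((x < 0).any()))
print('output is negative:', bool((out < 0).any()))
```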
Thanks for the code snippet and the inputs. Since I cannot reproduce the issue, we would need more information about your setup (PyTorch version, GPU, CUDA, cudnn etc.).
x is negative: tensor(False, device='cuda:0')
w is negative: tensor(False, device='cuda:0')
z (= Wx) is negative: tensor(False, device='cuda:0')
v is negative: tensor(False, device='cuda:0')
v isnan: tensor(False, device='cuda:0')
PyTorch version: 1.9.0+cu102
NVIDIA GeForce RTX 2080 Ti
CUDA Version 10.1.105
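A quick way to collect the setup details requested above from within Python:

```python
import torch

# Report the versions relevant to debugging a cuDNN algorithm issue.
print('PyTorch:', torch.__version__)
print('CUDA   :', torch.version.cuda)
print('cuDNN  :', torch.backends.cudnn.version())
if torch.cuda.is_available():
    print('GPU    :', torch.cuda.get_device_name(0))
```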
I’ve found something interesting. Adding:
torch.backends.cudnn.deterministic = True
solves the issue.
But when running with:
cudnn.benchmark = True
the problem comes back (even with torch.backends.cudnn.deterministic = True).
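The flag combination described above, as a runnable fragment. My understanding (an assumption, not confirmed in this thread) is that benchmark mode lets cuDNN autotune and pick faster algorithms such as Winograd or FFT convolutions, which can reintroduce the tiny negative outputs:

```python
import torch

# Force deterministic cuDNN convolution algorithms, which avoided the
# negative outputs in my runs.
torch.backends.cudnn.deterministic = True

# Keep benchmark mode off: setting it to True lets cuDNN re-select
# fast (possibly Winograd/FFT) algorithms and the problem returns.
torch.backends.cudnn.benchmark = False
```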
Hi, I’m encountering the exact same problem.
My environment is:
GeForce 2080 Ti
Your advice seems to work; the issue is possibly related to the FFT and Winograd convolution implementations in cuDNN.