Negative values with non-negative conv2d inputs

Hi,

I’m trying to run conv2d on non-negative inputs with non-negative weights, but I’m getting negative values in the output.

z1 = F.conv2d((input * input), sigma_square, None, self.stride, self.padding, self.dilation, self.groups)

The input pixels are x*x, and the weights are the variance, which is non-negative by definition (I also made sure it’s non-negative here).

My guess is that it’s related to numerical error, because the values are very close to zero (on the order of 10e-5), but I’m not sure.

Any idea how I can solve it? (I take sqrt(z1) after the conv2d, so z1 must not be negative.)
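
As a stopgap I can clamp tiny negatives before the sqrt, as in the sketch below, but I’d like to understand where they come from:

import torch

z1 = torch.tensor([1e-5, -1e-12, 2e-5])  # example conv output with tiny round-off
std = z1.clamp(min=0.0).sqrt()           # zero out tiny negatives before the sqrt
print(std)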

Could you create random tensors that reproduce the issue, so that we can take a look?
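
Something along these lines would be enough (the shapes here are just placeholders):

import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.rand(1, 3, 32, 32, device='cuda')  # non-negative input (requires a CUDA device)
w = torch.rand(8, 3, 3, 3, device='cuda')    # non-negative weights
z = F.conv2d(x, w)
print("negative outputs:", (z < 0).any())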

I’ve reproduced the issue:

GitHub - guyber9/repo

In the link you’ll find two files:

  1. my_tensors.pt - contains the ‘x’ and ‘w’ tensors for the convolution
  2. main.py - reads the tensor file and runs the conv2d.

Just run main.py and you’ll see that the input is all positive but the conv2d output contains negative values.
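
In short, main.py does roughly the following (a sketch; ‘v’ is assumed to be the sqrt taken after the conv, matching the printout below, and the exact conv arguments are in the repo):

import torch
import torch.nn.functional as F

tensors = torch.load('my_tensors.pt')           # dict with 'x' and 'w' assumed
x, w = tensors['x'].cuda(), tensors['w'].cuda()

z = F.conv2d(x, w)  # stride/padding left at the defaults in this sketch
v = z.sqrt()        # sqrt of a negative entry would show up as NaN

print("x is negative:", (x < 0).any())
print("w is negative:", (w < 0).any())
print("z (= Wx) is negative:", (z < 0).any())
print("v is negative:", (v < 0).any())
print("v isnan:", torch.isnan(v).any())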

Thanks for the code snippet and the inputs. Since I cannot reproduce the issue, we would need more information about your setup (PyTorch version, GPU, CUDA, cudnn, etc.).
Output:

x is negative: tensor(False, device='cuda:0')
w is negative: tensor(False, device='cuda:0')
z (= Wx) is negative: tensor(False, device='cuda:0')
v is negative: tensor(False, device='cuda:0')
v isnan: tensor(False, device='cuda:0')

PyTorch version: 1.9.0+cu102
NVIDIA GeForce RTX 2080 Ti
CUDA Version 10.1.105

I’ve found something interesting. If I add:

torch.backends.cudnn.deterministic = True

the issue goes away.

But when I also run with:

torch.backends.cudnn.benchmark = True

the problem comes back (even with torch.backends.cudnn.deterministic = True).
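
Presumably benchmark mode re-enables cudnn’s algorithm search, which is then free to pick the non-deterministic kernels again. A sketch of the settings that keep the deterministic path:

import torch

# Pin cudnn to deterministic algorithms and disable the autotuner,
# which could otherwise select a different (e.g. Winograd/FFT) kernel
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False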

Hi, I’m encountering the exact same problem.

My environment is:
PyTorch 1.9.1+cu102
GeForce 2080 Ti
CUDA 10.2
Win7

Your advice works for me as well. It is possibly related to the FFT and Winograd convolution algorithms in cudnn: they compute the convolution through transforms that involve subtractions, so a result that is exactly zero in real arithmetic can round to a small negative value.
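
A toy illustration of that kind of cancellation (a Winograd/Karatsuba-style identity for the cross term, not the actual cudnn code):

import torch

# Non-negative operands; the true cross term x0*w1 + x1*w0 is 2e-4 > 0
x0, x1 = torch.tensor(1e4), torch.tensor(1e-8)
w0, w1 = torch.tensor(1e4), torch.tensor(1e-8)

# Transform-style trick: recover the cross term via subtraction
cross = (x0 + x1) * (w0 + w1) - x0 * w0 - x1 * w1
print(cross)  # ~ -1e-16 in float32: a tiny negative instead of 2e-4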

Thank you!