CUDNN_STATUS_MAPPING_ERROR with torch.cuda.synchronize()

I was getting RuntimeError: CUDNN_STATUS_MAPPING_ERROR while trying to run
F.conv2d(x, conv_reduce_w, stride=1, padding=0)
This line runs successfully when I use the CPU only.
I then added torch.cuda.synchronize() before executing this line, and instead I get

torch.cuda.synchronize()
  File "/u/pahujava/anaconda3/envs/venv/lib/python3.6/site-packages/torch/cuda/", line 314, in synchronize
    return torch._C._cuda_synchronize()
RuntimeError: cuda runtime error (59) : device-side assert triggered at torch/csrc/cuda/Module.cpp:267

I am running my script with CUDA_LAUNCH_BLOCKING=1 and torch.backends.cudnn.benchmark = False, though the error is unchanged even with torch.backends.cudnn.benchmark = True.
I also tried making both input tensors contiguous, with no change. Any help is appreciated.
FYI, I have checked the tensor dimensions and they are compatible: x is torch.Size([10, 1024, 14, 14]) and conv_reduce_w is torch.Size([128, 1024, 1, 1]).
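(Not part of the original post, but for anyone checking: the shapes above really are compatible. A minimal pure-Python sketch of the output-shape arithmetic F.conv2d performs, with a hypothetical helper name:)

```python
def conv2d_out_shape(x_shape, w_shape, stride=1, padding=0):
    """Compute the output shape of a 2D convolution (no dilation).

    x_shape: (N, C_in, H, W), w_shape: (C_out, C_in, kH, kW).
    """
    n, c_in, h, w = x_shape
    c_out, c_in_w, kh, kw = w_shape
    # The weight's second dimension must match the input channel count.
    assert c_in == c_in_w, "channel mismatch between input and weight"
    h_out = (h + 2 * padding - kh) // stride + 1
    w_out = (w + 2 * padding - kw) // stride + 1
    return (n, c_out, h_out, w_out)

# The shapes from the post: a 1x1 conv reducing 1024 channels to 128.
print(conv2d_out_shape((10, 1024, 14, 14), (128, 1024, 1, 1)))  # (10, 128, 14, 14)
```

Since a 1x1 kernel with stride 1 and no padding preserves the spatial size, the shapes pass every check, which is consistent with the error coming from somewhere other than this conv.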

PyTorch version: 0.3.1.post2
CUDA version: 9.1

Is that the full stack trace?

That was the only relevant line. Actually, I managed to fix the bug: I was indexing out of bounds into a weight tensor, but surprisingly it wasn't complaining at the point of the bad access.
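(A side note, not from the thread: on the GPU an out-of-range index fires a device-side assert that only surfaces at a later synchronization point, such as the next cuDNN call, which is why the error appeared to come from the conv. One defensive option is to validate indices on the host before launching; a minimal sketch with a hypothetical helper:)

```python
def check_index_bounds(indices, dim_size):
    """Raise immediately for indices outside [-dim_size, dim_size).

    On CPU, PyTorch raises IndexError at the bad access itself; on GPU the
    kernel triggers a deferred device-side assert instead, so a host-side
    check like this pinpoints the offending index eagerly.
    """
    bad = [i for i in indices if not -dim_size <= i < dim_size]
    if bad:
        raise IndexError(
            "indices %r out of range for dimension of size %d" % (bad, dim_size)
        )

check_index_bounds([0, 5, 127], 128)   # fine
# check_index_bounds([0, 130], 128)    # would raise IndexError immediately
```

Running the script once with CUDA_LAUNCH_BLOCKING=1 (as the original poster did) serves the same purpose: it forces each kernel to synchronize, so the stack trace points at the launch that actually asserted.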

That is probably what the device-side assert was. Unfortunately, CUDA doesn't have a nice way to print error messages from device asserts.

I think the problem is that your GPU is too old to be supported by PyTorch. I got the same error before, and I noticed a UserWarning in the middle of my log. It was solved after I changed my GPU from a K420 to a GTX 1050 Ti.