Conv3D raises RuntimeError: CUDNN_STATUS_NOT_SUPPORTED

Simple code to reproduce the problem:

import torch
import torch.nn as nn

# Conv3d: 1 input channel, 1 output channel, kernel size 3, stride 2, padding 1
ec = nn.Conv3d(1, 1, 3, 2, 1).cuda()
# depth is 511 (odd); requires_grad=True so the backward pass goes through cudnn
input = torch.autograd.Variable(torch.ones(1, 1, 511, 512, 512), requires_grad=True).cuda()
ec(input).sum().backward()

it raises:

RuntimeError: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.

Even if I call .contiguous() on the input, it still fails.
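Roughly the variant I tried (same setup as the snippet above):

# Force the input to be contiguous before the conv.
# (torch.ones already returns a contiguous tensor, so this is effectively a no-op.)
ec(input.contiguous()).sum().backward()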

  • If I change the input to torch.ones(1, 1, 510, 512, 512), it works fine.

  • If I set requires_grad to False, it raises a different error:

RuntimeError: CUDNN_STATUS_EXECUTION_FAILED

I don’t know much about cudnn; does anyone have any advice?
torch: 0.1.12_1
cudnn 5.1, cuda 8
tested on a GTX 1080 and a Titan X

It’s worth opening an issue on https://github.com/pytorch/pytorch and I’ll take a look.
Thanks for the test case.
Also, see if updating cudnn to v6 helps (but I’ll test the snippet on v6 myself).
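To confirm which version you actually end up with, something like this should print the cuDNN version PyTorch sees:

import torch
# Prints the cuDNN version PyTorch was built against, as an integer
print(torch.backends.cudnn.version())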

Thanks for the quick reply. I’ll open an issue on GitHub and try cudnn v6.

Was this ever resolved?

Looks like cudnnGet* is returning the wrong algorithm. A workaround is to set torch.backends.cudnn.benchmark = True.
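Applied to the snippet above, the workaround would look roughly like this (a sketch using the same old Variable-style API as the original post):

import torch
import torch.nn as nn

# Enable the cudnn autotuner so it benchmarks the available algorithms
# and picks one that supports this input size, instead of trusting cudnnGet*.
torch.backends.cudnn.benchmark = True

ec = nn.Conv3d(1, 1, 3, 2, 1).cuda()
input = torch.autograd.Variable(torch.ones(1, 1, 511, 512, 512), requires_grad=True).cuda()
ec(input).sum().backward()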

I am trying PyTorch on Windows 10 now; if I set torch.backends.cudnn.benchmark = True, nothing works :slight_smile:


Sorry, but I don’t know much about how PyTorch works on Windows.

By the way, I’d suggest pasting a code snippet that reproduces the problem.

CUDNN_STATUS_NOT_SUPPORTED may result from various situations.

Here is the code:

Just set torch.backends.cudnn.benchmark to True and see the crash.