cuDNN execution fails when moving parameters to GPU

Hey guys,
I'm facing the following error when moving my model parameters and state to the GPU:

torch.backends.cudnn.CuDNNError: 8: b'CUDNN_STATUS_EXECUTION_FAILED'

(I can post the full stack trace if required.) This happens with the following configuration:

cuda90                    1.0                  h4c72538_0    peterjc123
cudatoolkit               9.0                           1
pytorch                   0.3.1           py36_cuda90_cudnn7he774522_2  [cuda90]  peterjc123
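For reference, a minimal sketch of the kind of call that triggers it — the model below is a hypothetical stand-in, not my actual network:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; in my setup the error is raised
# when the parameters are transferred to the GPU.
model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 1))

if torch.cuda.is_available():
    model = model.cuda()  # CuDNNError: CUDNN_STATUS_EXECUTION_FAILED raised here
```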

However, with:

cudatoolkit               8.0                           4
pytorch                   0.3.1           py36_cuda80_cudnn6he774522_2    peterjc123

the execution runs with just a minor bump in the road (warnings about incompatible compilation).

My hardware specs are:

Card: RTX 2080S
CUDA: 10.1, cuDNN: 7.6.5

Any help would be appreciated. Thanks!

PS: This is my first post on the PyTorch forum, so minor formatting suggestions are welcome too.

PyTorch 0.3.1 is quite old by now, so could you update to the latest stable release (1.8.1) and rerun your script, please?
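To verify which versions your script is actually picking up at runtime, a quick check like this can help (these are standard `torch` version queries; `cudnn.version()` returns `None` on CPU-only builds):

```python
import torch

print(torch.__version__)               # installed PyTorch release
print(torch.version.cuda)              # CUDA version PyTorch was built against
print(torch.backends.cudnn.version())  # cuDNN version, e.g. 7605 for 7.6.5
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # detected GPU
```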

Thanks for your response, @ptrblck.
The reason I'm holding back on upgrading the PyTorch version is that the same piece of code works marvellously on my NVIDIA GT 730 (which I know is an old fellow). Could you help me understand what this error means, or where I should look?

Also, other developers who updated to a later version of PyTorch complained that the results went haywire with the same code. I wanted to get an initial version of the thing I'm trying to build working before resolving versioning issues.

Thanks again for your response!

Since you are using PyTorch, CUDA, and cudnn releases from early 2018, you might be running into issues that have already been fixed.
In any case, if you want to keep this old setup, you could disable cudnn via torch.backends.cudnn.enabled = False and see if your script works.
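A minimal sketch of that workaround — set the flag before any model or data is moved to the GPU:

```python
import torch

# Disable cuDNN globally; PyTorch falls back to its native CUDA kernels,
# which are typically slower but sidestep cuDNN-specific execution failures.
torch.backends.cudnn.enabled = False
```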