When I tried to train the model, I got a RuntimeError (RuntimeError: CuDNN error: CUDNN_STATUS_SUCCESS).
First of all, I could not understand at all what the above error means.
Furthermore, the same training code worked well on a GTX 1080 Ti (the rest of the environment was exactly the same as above).
No… all I know is that the issue occurs when I use an RNN-based model.
For example,
from torchvision import models
vgg = models.vgg19(pretrained = True)
vgg.cuda()
These lines are not a problem; they run without any issue.
However,
from torch import nn
gru = nn.GRU(3, 3, 2)
gru.cuda()
The code above triggers the issue, and the problem is in the …/torch/nn/modules/rnn.py file.
Maybe this issue is not caused by our environment and/or settings.
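For reference, here is a minimal sketch of the failing case, assuming a CUDA-enabled PyTorch build. Note that it is usually the forward pass, not just `.cuda()`, that actually invokes cuDNN's RNN kernels, so the repro includes one:

```python
import torch
from torch import nn

# input_size=3, hidden_size=3, num_layers=2 -- same as the snippet above
gru = nn.GRU(3, 3, 2)

if torch.cuda.is_available():
    gru = gru.cuda()
    # (seq_len, batch, input_size) -- nn.GRU's default layout
    x = torch.randn(5, 1, 3, device="cuda")
    out, h = gru(x)  # the cuDNN RNN call happens here
    print(out.shape)  # torch.Size([5, 1, 3])
```

On a working setup this prints the output shape; on an affected setup the cuDNN RuntimeError is raised at the forward call.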
I got the same issue, and my GPU is an RTX 2080.
I didn’t figure it out, but I found a way to run my code.
The workaround is running my code in Atom with the Hydrogen add-on. The error occurs the first time I run the code, but it disappears when I run it a second time.
I also found that if I run my code right after restarting my PC, there is no such error.
So I guess the cause of this error is some other program using the GPU when I try to run my code.
I had the same error message but realised my PyTorch install didn't have CUDA installed with it, so you can try the conda instructions from above (I believe for Win10 it's just conda install pytorch cuda92 -c pytorch).
I had installed PyTorch with CUDA 9.0 before. I just updated PyTorch with cuda92 (for Win10 it's just conda install pytorch cuda92 -c pytorch) and then the error disappeared.
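For anyone hitting this, a quick sanity check (standard PyTorch introspection calls, nothing version-specific) can confirm whether the installed build actually ships with CUDA support, which is what the two posts above are fixing:

```python
import torch

print(torch.__version__)          # version string of the installed build
print(torch.version.cuda)         # CUDA version the build was compiled with; None on a CPU-only build
print(torch.cuda.is_available())  # True only if a CUDA build *and* a usable GPU/driver are present
```

If `torch.version.cuda` prints `None`, reinstalling a CUDA-enabled build (as described above) is the fix.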