I recently changed my GPU to an RTX 2080 Ti and in the process also upgraded from CUDA 8 to CUDA 9 and the newest PyTorch version. Ever since, I get the following message when I use PyTorch (the program keeps running, though):
Thanks! You may be right. I have the exact same problem with the CUDA samples as described here, which indicates that there might be a fundamental problem with the RTX 2080 Ti and CUDA 9 (on Linux?).
I'd be interested to hear if anyone has experienced otherwise.
I experience the same problem; I'm using CUDA 9 though.
Anyway, it seems that the GPU is still being used, although I'm not sure whether it's used "fully".
By the way, for anyone following this topic, a solution to at least get things running is to set `torch.backends.cudnn.benchmark = False`. I found this solution in this thread: A error when using GPU
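A minimal sketch of the workaround (the `cudnn.enabled` line is an additional, more drastic option, not something from the linked thread):

```python
import torch

# Disable cuDNN's benchmark-mode autotuner; this must run before the
# first convolution executes, so put it at the very top of the script.
torch.backends.cudnn.benchmark = False

# If the error still appears, disabling cuDNN entirely (slower, since
# PyTorch falls back to its own kernels) can help isolate whether cuDNN
# is the culprit:
# torch.backends.cudnn.enabled = False

print(torch.backends.cudnn.benchmark)
```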
I put `torch.backends.cudnn.benchmark = False` at the beginning of my source code, but the error message still appears. Not sure why that happens.
Are you sure the attribute is not set back to true later in the code? Try printing `torch.backends.cudnn.benchmark` at various points of execution to make sure it is indeed False.
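One way to do that check systematically (the helper name is my own, not from the thread) is a small function you can drop in at each suspect point:

```python
import torch

def report_cudnn_benchmark(where: str) -> bool:
    """Print and return the current value of the cudnn.benchmark flag."""
    flag = torch.backends.cudnn.benchmark
    print(f"[{where}] torch.backends.cudnn.benchmark = {flag}")
    return flag

torch.backends.cudnn.benchmark = False
report_cudnn_benchmark("after startup")
# Call again after model construction, after importing third-party
# libraries, and just before training, e.g.:
# report_cudnn_benchmark("before training loop")
```

If one of the calls reports True, whatever ran between the previous call and that one flipped the flag.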
Same issue here: PyTorch 1.0.0, CUDA 10.0, RTX 2080, on Fedora 28.
Set " torch.backends.cudnn.benchmark=False" doesnāt work. It shows:
`cuDNN error: CUDNN_STATUS_EXECUTION_FAILED`