CUDA error: all CUDA-capable devices are busy or unavailable

PyTorch throws the "CUDA error: all CUDA-capable devices are busy or unavailable" error whenever I transfer any tensor/data to the CUDA device, even though torch.cuda.is_available() returns True. I have come across posts about the same error, but none of the suggested fixes worked for me.
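For reference, a minimal sketch of the kind of call that triggers it (the tensor shape and variable names are just placeholders):

```python
import torch

print(torch.cuda.is_available())  # returns True on my machine

x = torch.randn(3, 3)
# fails here with: CUDA error: all CUDA-capable devices are busy or unavailable
x = x.to("cuda")
```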

I installed PyTorch v1.1.0 with cudatoolkit=9.0 (9.0.176) via conda.

I have a GeForce GTX 1060 with Max-Q Design, driver version 460.32.03, and the output of nvcc --version is V9.1.85 (on Ubuntu 18.04). I ran torch.cuda.get_device_properties(device) and it returns:

_CudaDeviceProperties(name='GeForce GTX 1060 with Max-Q Design', major=6, minor=1, total_memory=6078MB, multi_processor_count=10)
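The snippet I used to dump that info looks roughly like this (assuming the GTX 1060 is device index 0):

```python
import torch

device = torch.device("cuda:0")  # assuming the single GPU is index 0
print(torch.cuda.device_count())
print(torch.cuda.get_device_properties(device))
```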

I compiled a CUDA hello-world program to check whether there was a problem with CUDA itself, and it compiled and ran successfully. I also checked whether my GPU was in exclusive-process mode; although it wasn't, I ran nvidia-smi -i 0 -c 0 to switch it to the default compute mode. Still the same error… Is there anything I missed?
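In case it helps, this is roughly how I check the compute mode from Python instead of reading the nvidia-smi output by hand (it just shells out to nvidia-smi, so the exact query field name is an assumption based on my local nvidia-smi version):

```python
import subprocess

# Query the compute mode of GPU 0: "Default" allows multiple processes to use the device,
# while "Exclusive_Process" / "Prohibited" could explain a "busy or unavailable" error.
out = subprocess.check_output(
    ["nvidia-smi", "-i", "0", "--query-gpu=name,compute_mode", "--format=csv,noheader"]
)
print(out.decode().strip())
```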

Could you update to the latest stable release or the nightly and rerun your script, please?

I updated to the latest stable version and it gives the same error. By the way, doesn't the CUDA toolkit version need to be specified to match when installing? (I used cudatoolkit=9.0 via the conda installer.) My local CUDA version is 9.1.85, while the latest release at pytorch.org is specified with cudatoolkit=11.1.
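For reference, this is the quick check I ran to see which CUDA runtime the installed binaries were built against (just a sanity check, nothing specific to my setup):

```python
import torch

print(torch.__version__)                # installed PyTorch version
print(torch.version.cuda)               # CUDA runtime the binaries were built with
print(torch.backends.cudnn.version())   # bundled cuDNN version
```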

I did exactly nothing, and today my CUDA and PyTorch work perfectly.