import torch
print(torch.cuda.is_available())
print(torch.version.cuda)
x = torch.tensor(1.0).cuda()
y = torch.tensor(2.0).cuda()
print(x+y)
I get the following error message:
True
10.1.243
Traceback (most recent call last):
File "example.py", line 8, in <module>
print(x+y)
RuntimeError: CUDA error: no kernel image is available for execution on the device
I am pretty sure that the GPU driver and CUDA toolkit are properly installed. Output of nvidia-smi:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.74       Driver Version: 418.74       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K40c          On   | 00000000:04:00.0 Off |                    0 |
| 23%   39C    P8    23W / 235W |      0MiB / 11441MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
I do not have any problems when I use PyTorch 1.1 with CUDA 9.0.176. My guess is that PyTorch no longer supports the K40c, as its CUDA compute capability (3.5) is too low. Or is there some other problem here? And is there a solution so that I can use PyTorch 1.3 with the K40c?
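As a quick diagnostic, you can query the card's compute capability from PyTorch itself and compare it against a minimum. This is only a sketch: `meets_minimum_capability` and the `(3, 7)` threshold are illustrative assumptions, not part of any official PyTorch API; `torch.cuda.get_device_capability` is the real query.

```python
def meets_minimum_capability(capability, minimum=(3, 7)):
    """Return True if a (major, minor) capability tuple is >= minimum.

    Tuples compare element-wise, so (3, 5) < (3, 7) < (7, 5).
    The (3, 7) default is an assumption about what the prebuilt
    binaries target, not a documented constant.
    """
    return tuple(capability) >= tuple(minimum)


if __name__ == "__main__":
    try:
        import torch
        if torch.cuda.is_available():
            cap = torch.cuda.get_device_capability(0)  # e.g. (3, 5) on a K40c
            print(cap, "ok" if meets_minimum_capability(cap) else "too old")
    except ImportError:
        pass  # torch not installed; the helper above still works standalone
```

On a K40c this would report `(3, 5)` as too old under the assumed threshold, matching the "no kernel image" error above.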
CUDA 10.1 should support GPUs with compute capability 3.0 to 7.5.
Are you using Windows? If so, the minimum driver version seems to be a bit higher than on Linux systems, i.e. 418.96.
@ptrblck Why is the stable 1.3 release also affected by this change? I observed exactly the issue @zhaopku mentioned on my K40c GPUs with PyTorch 1.3 installed via conda, with both the 418 and 440 drivers. Shouldn't this only affect PyTorch 1.4+?
So apparently the support was dropped in PyTorch 1.3.1. I believe merging such a change in a minor revision is not nice, or at least it should be clearly documented and announced somewhere.
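If the prebuilt binaries no longer ship sm_35 kernels, one possible workaround is to build PyTorch from source with the K40c's architecture included via the `TORCH_CUDA_ARCH_LIST` environment variable. This is a hedged sketch, not an official recipe: the version tag is illustrative, and a full source build also needs the usual toolchain prerequisites (CUDA toolkit, cuDNN, a C++ compiler).

```shell
# Build PyTorch from source so the binaries include sm_35 kernels for the K40c.
# TORCH_CUDA_ARCH_LIST controls which compute capabilities get compiled in.
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
git checkout v1.3.1                  # illustrative tag; pick the release you need
git submodule sync
git submodule update --init --recursive
export TORCH_CUDA_ARCH_LIST="3.5"    # target the K40c (compute capability 3.5)
python setup.py install
```

The trade-off is a long compile time, but it sidesteps the binary-distribution decision about which architectures to support.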