Minimum CUDA compute capability for PyTorch 1.3

I am using K40c GPUs, which have CUDA compute capability 3.5. I installed PyTorch via

conda install pytorch torchvision cudatoolkit=10.1 -c pytorch

However, when I run the following program:

import torch

print(torch.cuda.is_available())
print(torch.version.cuda)
x = torch.tensor(1.0).cuda()
y = torch.tensor(2.0).cuda()

print(x+y)

I get the following output and error message:

True
10.1.243
Traceback (most recent call last):
  File "example.py", line 8, in <module>
    print(x+y)
RuntimeError: CUDA error: no kernel image is available for execution on the device

I am pretty sure that the GPU driver and CUDA toolkit are properly installed. Output of nvidia-smi:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.74       Driver Version: 418.74       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K40c          On   | 00000000:04:00.0 Off |                    0 |
| 23%   39C    P8    23W / 235W |      0MiB / 11441MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

I do not have any problems when I use PyTorch 1.1 with CUDA 9.0.176. My guess is that PyTorch no longer supports the K40c because its CUDA compute capability is too low (3.5). Or is there some other problem? And is there a way to use PyTorch 1.3 with the K40c?

CUDA 10.1 should support GPUs with compute capability 3.0 to 7.5.
Are you using Windows? If so, the minimum driver version seems to be a bit higher than for Linux systems, i.e. 418.96.
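
For reference, here is a quick diagnostic (just a sketch; it assumes the K40c is visible as device 0) to print what PyTorch reports for the card:

import torch

# Print what PyTorch reports for the first visible GPU.
# Assumes the K40c is device 0; adjust the index if several GPUs are installed.
print(torch.version.cuda)                   # CUDA version the binaries were built with
print(torch.cuda.get_device_name(0))        # e.g. "Tesla K40c"
print(torch.cuda.get_device_capability(0))  # compute capability as a (major, minor) tuple, e.g. (3, 5)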

No, I am using Linux, and according to NVIDIA, the minimum driver for Linux is 418.39. Any ideas why I am having the above problem?

Please see my comments below. I don’t think my GPU driver version is too low.

Yes, that’s why I asked about Windows. :wink:
The driver should be new enough for Linux.

It seems the minimum compute capability for the prebuilt binaries is now 3.7, based on this commit, so you might need to build from source.
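
As a rough sanity check (a sketch that assumes 3.7 really is the minimum the binaries are built for), you can compare the device’s compute capability against that threshold before running any CUDA ops:

import torch

# Assumed minimum compute capability of the prebuilt binaries, per the commit
# mentioned above; treat the value as an assumption, not an official constant.
MIN_BINARY_CAPABILITY = (3, 7)

if torch.cuda.is_available():
    capability = torch.cuda.get_device_capability(0)
    if capability < MIN_BINARY_CAPABILITY:
        print(f"Compute capability {capability} is below {MIN_BINARY_CAPABILITY}; "
              "the prebuilt binaries likely lack kernels for this GPU, "
              "so building from source is needed.")

If you do build from source, setting the TORCH_CUDA_ARCH_LIST environment variable (e.g. to "3.5") before the build should ensure kernels are generated for the K40c’s architecture.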

@ptrblck Why is stable 1.3 also affected by this change? I observed exactly the same issue @zhaopku mentioned on my K40c GPUs with PyTorch 1.3 installed via conda, with both the 418 and 440 drivers. Shouldn’t this only affect PyTorch 1.4+?

So apparently the support was dropped in PyTorch 1.3.1. I believe merging such a change in a minor revision is not nice, or at least it should be clearly documented and announced somewhere.

It might have been updated in 1.3.1, and I agree with you that it should be mentioned in the release notes.

It would be great if the minimum CUDA compute capability were mentioned on the downloads page.

Can you link to the page, please?

