GPU compute capability support for each pytorch version

I’m looking for the minimal compute capability which each pytorch version supports.

This question arose when I raised an issue and was told my GPU was no longer supported. All I know so far is that my GPU has a compute capability of 3.5, and pytorch 1.3.1 does not support it (i.e. it does not include the relevant binaries in the install), but pytorch 1.2 does.

Any pointers to existing documentation are welcome. I have searched for “compute capability” to no avail.


Hi James!

Speaking from memory …

I went through a similar issue. Based on what I was told, my understanding is:

There is no table or clean record of which versions of pytorch support which compute capabilities. Even if a version of pytorch uses a “cuda version” that supports a certain compute capability, that pytorch build might not support that compute capability.

The installation packages (wheels, etc.) don’t have the supported compute capabilities encoded in their file names.

Pytorch has an explicit supported-compute-capability check in its code. (I’m not sure where.) If you can figure out which version of the source a given installation package was built from, you can check the code.

Short of that, I think you have to run pytorch and see whether it likes your gpu. (I’m not aware of a way to query pytorch for its supported compute capabilities without running it against the gpu of interest, but there might be one.)
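
For instance, a minimal sketch of the “just try it” approach: run a tiny CUDA op and see whether this particular build accepts the GPU.

```python
# Minimal sketch: run a small CUDA op and see whether this PyTorch build
# accepts the installed GPU. Builds that lack the GPU's compute capability
# typically raise a RuntimeError (or print a warning) here.
import torch

if not torch.cuda.is_available():
    print("No usable CUDA device for this build.")
else:
    try:
        x = torch.randn(4, device="cuda")
        print("GPU works with this build:", (x * 2).sum().item())
    except RuntimeError as err:
        print("GPU not usable with this build:", err)
```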

Best.

K. Frank


If you have the wheels, you can inspect libtorch.so (or the other .so files) in the torch/lib directory: cuobjdump build/lib.linux-x86_64-3.7/torch/lib/libtorch.so | grep arch | sort | uniq.
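
As a sketch of the same idea from Python (assuming cuobjdump from the CUDA toolkit is on PATH, and that the CUDA kernels live in libtorch_cuda.so on newer builds or libtorch.so on older ones):

```python
# Rough sketch: list the sm_XX architectures embedded in an installed
# PyTorch build by running cuobjdump over its CUDA library.
import os
import subprocess

import torch

lib_dir = os.path.join(os.path.dirname(torch.__file__), "lib")
for name in ("libtorch_cuda.so", "libtorch.so"):  # newer builds split out the CUDA parts
    path = os.path.join(lib_dir, name)
    if not os.path.exists(path):
        continue
    out = subprocess.run(["cuobjdump", path], capture_output=True, text=True).stdout
    # cuobjdump prints a line such as "arch = sm_70" for every embedded cubin.
    arches = sorted({line.split()[-1] for line in out.splitlines() if "arch = sm_" in line})
    print(name, arches)
    break
```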

Best regards

Thomas


This no longer works (or perhaps the object is stripped). @tom shared the latest approach on Slack, which is:

$ python -c "import torch; print(torch.cuda.get_arch_list())" 
['sm_37', 'sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80']

Thank you, @tom.
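
As a small follow-up sketch, one can compare the GPU in the machine against that list (a literal check only; it ignores that, e.g., an sm_80 binary also runs on an sm_86 device):

```python
# Sketch: compare the running GPU's compute capability against the arch
# list compiled into this PyTorch build (requires a build that provides
# torch.cuda.get_arch_list).
import torch

major, minor = torch.cuda.get_device_capability(0)
device_arch = f"sm_{major}{minor}"
print(device_arch, "in build:", device_arch in torch.cuda.get_arch_list())
```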

$ python -c "import torch; print(torch.cuda.get_arch_list())"

Returns this error:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
AttributeError: module 'torch.cuda' has no attribute 'get_arch_list'

Env:

Python 3.7.9 (default, Aug 31 2020, 12:42:55) 
[GCC 7.3.0] :: Anaconda, Inc. on linux

torch: 1.4.0     torchvision: 0.5.0

Cuda compilation tools, release 11.2, V11.2.142
Build cuda_11.2.r11.2/compiler.29558016_0

Your PyTorch version is too old, so you would need to update it or use @tom’s cuobjdump suggestion instead.

I extracted the compute capabilities for which each pytorch package on conda is compiled:

| package | architectures |
| --- | --- |
| pytorch-1.0.0-py3.7_cuda10.0.130_cudnn7.4.1_1 | sm_30, sm_35, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.0.0-py3.7_cuda8.0.61_cudnn7.1.2_1 | sm_20, sm_35, sm_37, sm_50, sm_52, sm_53, sm_60, sm_61 |
| pytorch-1.0.0-py3.7_cuda9.0.176_cudnn7.4.1_1 | sm_35, sm_37, sm_50, sm_52, sm_53, sm_60, sm_61, sm_70 |
| pytorch-1.0.1-py3.7_cuda10.0.130_cudnn7.4.2_0 | sm_35, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.0.1-py3.7_cuda10.0.130_cudnn7.4.2_2 | sm_35, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.0.1-py3.7_cuda8.0.61_cudnn7.1.2_0 | sm_35, sm_37, sm_50, sm_52, sm_53, sm_60, sm_61 |
| pytorch-1.0.1-py3.7_cuda8.0.61_cudnn7.1.2_2 | sm_35, sm_37, sm_50, sm_52, sm_53, sm_60, sm_61 |
| pytorch-1.0.1-py3.7_cuda9.0.176_cudnn7.4.2_0 | sm_35, sm_50, sm_60, sm_61, sm_70 |
| pytorch-1.0.1-py3.7_cuda9.0.176_cudnn7.4.2_2 | sm_35, sm_50, sm_60, sm_70 |
| pytorch-1.1.0-py3.7_cuda10.0.130_cudnn7.5.1_0 | sm_35, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.1.0-py3.7_cuda9.0.176_cudnn7.5.1_0 | sm_35, sm_50, sm_60, sm_61, sm_70 |
| pytorch-1.2.0+cu92-py3.7_cuda9.2.148_cudnn7.6.2_0 | sm_35, sm_50, sm_60, sm_61, sm_70 |
| pytorch-1.2.0-py3.7_cuda10.0.130_cudnn7.6.2_0 | sm_35, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.2.0-py3.7_cuda9.2.148_cudnn7.6.2_0 | sm_35, sm_50, sm_60, sm_61, sm_70 |
| pytorch-1.3.0-py3.7_cuda10.0.130_cudnn7.6.3_0 | sm_30, sm_35, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.3.0-py3.7_cuda10.1.243_cudnn7.6.3_0 | sm_30, sm_35, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.3.0-py3.7_cuda9.2.148_cudnn7.6.3_0 | sm_35, sm_50, sm_60, sm_61, sm_70 |
| pytorch-1.3.1-py3.7_cuda10.0.130_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.3.1-py3.7_cuda10.1.243_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.3.1-py3.7_cuda9.2.148_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70 |
| pytorch-1.4.0-py3.7_cuda10.0.130_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.4.0-py3.7_cuda10.1.243_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.4.0-py3.7_cuda9.2.148_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70 |
| pytorch-1.5.0-py3.7_cuda10.1.243_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.5.0-py3.7_cuda10.2.89_cudnn7.6.5_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.5.0-py3.7_cuda9.2.148_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70 |
| pytorch-1.5.1-py3.7_cuda10.1.243_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.5.1-py3.7_cuda10.2.89_cudnn7.6.5_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.5.1-py3.7_cuda9.2.148_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70 |
| pytorch-1.6.0-py3.7_cuda10.1.243_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.6.0-py3.7_cuda10.2.89_cudnn7.6.5_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.6.0-py3.7_cuda9.2.148_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70 |
| pytorch-1.7.0-py3.7_cuda10.1.243_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.7.0-py3.7_cuda10.2.89_cudnn7.6.5_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.7.0-py3.7_cuda11.0.221_cudnn8.0.3_0 | sm_37, sm_50, sm_60, sm_61, sm_70, sm_75, sm_80 |
| pytorch-1.7.0-py3.7_cuda9.2.148_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70 |
| pytorch-1.7.1-py3.7_cuda10.1.243_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.7.1-py3.7_cuda10.2.89_cudnn7.6.5_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.7.1-py3.7_cuda11.0.221_cudnn8.0.5_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75, sm_80 |
| pytorch-1.7.1-py3.7_cuda9.2.148_cudnn7.6.3_0 | sm_37, sm_50, sm_60, sm_61, sm_70 |
| pytorch-1.8.0-py3.7_cuda10.1_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.8.0-py3.7_cuda10.2_cudnn7.6.5_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.8.0-py3.7_cuda11.1_cudnn8.0.5_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75, sm_80, sm_86 |
| pytorch-1.8.1-py3.7_cuda10.1_cudnn7.6.3_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.8.1-py3.7_cuda10.2_cudnn7.6.5_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75 |
| pytorch-1.8.1-py3.7_cuda11.1_cudnn8.0.5_0 | sm_35, sm_37, sm_50, sm_60, sm_61, sm_70, sm_75, sm_80, sm_86 |

One question though: If my card (GeForce RTX 3090) has compute capability 8.6, does that mean that PyTorch 1.8 with CUDA 11 is the first version to support it?

No, all 1.7 releases built with CUDA 11 that include sm_80 will also support it, since sm_80 binaries run on sm_86 devices (minor compute capability revisions within the same major version are compatible).
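
A rough sketch of a supported-check that takes this into account (my own, under the assumption that a build arch with the same major version and a lower or equal minor version is usable):

```python
# Rough sketch: accept the GPU if the build contains an arch with the same
# major compute capability and a minor version <= the device's
# (e.g. an sm_80 binary on an sm_86 GPU).
# Note: the parsing assumes a single-digit major version, which holds for
# the architectures listed in this thread.
import torch

dev_major, dev_minor = torch.cuda.get_device_capability(0)
usable = any(
    int(arch[3]) == dev_major and int(arch[4:]) <= dev_minor
    for arch in torch.cuda.get_arch_list()
    if arch.startswith("sm_")
)
print("build usable on this GPU:", usable)
```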
