GPU compatibility: mobile RTX A2000?

I find it hard to understand which NVIDIA GPUs will work with which versions of PyTorch and under which OS.

For a project, somebody wants to purchase a laptop with a built-in RTX A2000, and I am wondering which PyTorch versions this card would work with. Would it work under Ubuntu 22.04? Under Windows?
If the spec shows it as using CUDA 8, will it still work with a current PyTorch release that is compiled with CUDA 11?

How can one find out, in general, whether card A will work with PyTorch versions X…Y under OS Z?

Or, looking at it from the other side: which laptop cards would work with current PyTorch releases?

All NVIDIA GPUs with compute capability >= 3.7 will work with the latest PyTorch release built with the CUDA 11.x runtime.
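
For anyone checking a specific machine, here's a minimal sketch (assuming device index 0 and the cc >= 3.7 minimum mentioned above; `MIN_CC` is just a name made up for this example):

```python
import torch

# Assumed minimum for current PyTorch builds, per the answer above.
MIN_CC = (3, 7)

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    name = torch.cuda.get_device_name(0)
    print(f"{name}: compute capability {major}.{minor}")
    print("supported" if (major, minor) >= MIN_CC else "below the minimum")
else:
    print("No usable CUDA device found (driver, CUDA build, or hardware issue)")
```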


Thanks!
Sadly, the compute capability is not something NVIDIA seems to like to include in its own spec pages.

Also this page, which has all kinds of info about many GPUs, does not list the “compute capability”: NVIDIA RTX A2000 Embedded Specs | TechPowerUp GPU Database

But the specs do say that the card is based on the Ampere architecture, which seems to have compute capability 8.x, so I assume I should be fine for some time.

Is there a table somewhere that maps PyTorch versions to the supported compute capabilities?
Is there just a minimal compute capability per PyTorch version, or is a specific range of versions supported?

Thanks again!

You might want to check CUDA - Wikipedia, as it lists the compute capabilities for (all) GPUs.

Yes, it’s an Ampere GPU with cc=8.6 and is supported.

No, I don’t think there is a mapping, and we keep the min. cc=3.7 to support old K80 cards, which are still used in Colab if I’m not mistaken.
Every newer architecture is still supported, and you can generally check the supported architectures via print(torch.cuda.get_arch_list()).

The min. cc has been 3.7 for (I believe) ~2 years now. It was bumped from 3.5 at some point, but I’m unsure exactly when it changed. Every newer architecture is supported.
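
Putting the two checks together, a small sketch (device index 0 assumed) that compares the build's compiled architecture list against the installed card:

```python
import torch

# Architectures this PyTorch build was compiled for, e.g. ['sm_37', ..., 'sm_86'].
arch_list = torch.cuda.get_arch_list()
print(arch_list)

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    sm = f"sm_{major}{minor}"
    # A missing entry doesn't always mean unsupported: newer GPUs can still
    # run older builds via PTX forward compatibility, but a match means the
    # build has native kernels for this card.
    print(f"{sm} natively compiled for:", sm in arch_list)
```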


Thank you, that is very helpful!