RTX 3070/3080 support

So I’ve got a machine with Ubuntu 20.04 and an RTX 3070. Is it possible to run PyTorch at this time with support for the new GPUs?
From my understanding, the RTX 3070 needs cudnn 8.0.5 and CUDA 11.1. Is there a way to get PyTorch to work with these versions?

  1. What are my current options to install PyTorch? (For example, should I install CUDA 11.1 and cudnn 8.0.5 and build from source, or can I install the binaries?)
  2. For the currently supported options, are there speed problems? If so, how much of a slowdown should I expect compared to an optimized setup?



  1. The current binaries (built with CUDA 11.0) will work with these cards.
  2. There are perf issues because the current CUDA libraries are not properly optimized for them. We will release 1.7.1 soon, updated to cudnn 8.0.5, to fix some of these. But it won’t fix everything I’m afraid, and we’ll have to wait for newer versions of the binaries.
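Since the binaries bundle their own CUDA and cudnn, you can confirm from Python exactly what your install ships with and whether the card is detected (the version numbers in the comments are just examples):

```python
import torch

# Versions bundled with this PyTorch binary
print(torch.__version__)               # e.g. 1.7.0
print(torch.version.cuda)              # e.g. 11.0
print(torch.backends.cudnn.version())  # e.g. 8004 for cudnn 8.0.4

# Compute capability of the detected card; (8, 6) == sm_86 for RTX 30xx
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    print(torch.cuda.get_device_capability(0))
```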

You can compile from source with 11.1 and cudnn 8.0.5, but I am not sure if it will fix all of the perf regressions. I don’t remember if another cudnn version with more fixes is coming soon or not? cc @ptrblck
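A source build along those lines might look like the sketch below; the paths and version choices are assumptions to adapt to your own system (sm_86 is the compute capability of the RTX 3070/3080):

```shell
# Sketch of building PyTorch against a local CUDA 11.1 + cudnn 8.0.5 install
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch

# Point the build at the CUDA 11.1 toolkit and restrict it to Ampere (sm_86)
export CUDA_HOME=/usr/local/cuda-11.1
export TORCH_CUDA_ARCH_LIST="8.6"

python setup.py install
```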


@albanD Thanks for answering!
So just to make sure:

  1. If I use the current binaries (1.7.0), I should install CUDA 11.0, and which cudnn version?
  2. Another option is to install CUDA 11.1 and cudnn 8.0.5, and then compile PyTorch from source.
  3. The last option is to wait for 1.7.1 and then install it with CUDA 11.1 and cudnn 8.0.5.

Is this right?

  1. Both CUDA and cudnn are shipped with the binary, so there is no need for you to worry about that. But the current binaries ship with cudnn 8.0.4 (the only thing that was available when they were built).
  2. Yes, that is the only option to get these two versions.
  3. The 1.7.1 binaries will ship with CUDA 11.0 (EDIT: see @ptrblck’s comment below, this might change) because conda still does not support 11.1 :confused: But they will have cudnn 8.0.5, which should fix most perf regressions!

Minor corrections:

  • 1.7.0 ships with CUDA 11.0 + cudnn 8.0.3. Unfortunately, we didn’t trigger a rebuild of the binaries after bumping cudnn to 8.0.4. :confused:
  • We are working on workarounds for the pruning issues with 11.1 and think we could ship 1.7.1 with CUDA 11.1 + cudnn 8.0.5.

cudnn 8.0.5 ships with updated heuristics for the 3090. The full sm86 heuristics should be added in 8.1.x.


Hey, so now that 1.7.1 is out, does it ship with CUDA 11.1? The stable version I currently see is 1.7.1 with cudnn 8.0.5 and CUDA 11.0.

No, the build team focused on the Python3.9 support and didn’t have enough resources to target CUDA11.1 as well for this release.

Ok, thanks.
Do you know if there is supposed to be any performance difference between CUDA 11.0 and 11.1 (both using cudnn 8.0.5) with an RTX 3070?
And also, should there be any performance difference between the PyTorch nightly and PyTorch 1.7.1 right now regarding the RTX 3070?

Not as of now: most (if not all) native PyTorch kernels are memory bandwidth bound and wouldn’t benefit from an increase in compute performance. Future cudnn and cublas versions should ship with sm86-specific instructions to increase performance on the 30xx series.
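To illustrate what "memory bandwidth bound" means, here is a rough roofline-style estimate for an elementwise add; the RTX 3070 spec numbers below are approximate values used only for illustration:

```python
# Approximate RTX 3070 peak numbers (illustrative, not measured):
peak_flops = 20.3e12     # ~20 TFLOPS of FP32 compute
peak_bandwidth = 448e9   # ~448 GB/s of memory bandwidth

# z = x + y on float32: 1 FLOP per element, 12 bytes of memory traffic
# (read x, read y, write z, 4 bytes each)
kernel_intensity = 1 / 12  # FLOP per byte

# Intensity needed to keep the compute units busy ("machine balance")
machine_balance = peak_flops / peak_bandwidth  # ~45 FLOP per byte

print(f"kernel intensity: {kernel_intensity:.3f} FLOP/byte")
print(f"machine balance:  {machine_balance:.1f} FLOP/byte")
# The kernel sits far below the balance point, so the memory system,
# not the arithmetic units, limits its speed; extra TFLOPS would not help.
```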

The nightly release gets the latest updates, such as the use of fastAtomicAdd for trilinear upsampling, and could thus improve performance.
However, while I haven’t had a bad experience with the stability of the nightly binaries, you might want or need to stick to the stable release.
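If you do want to try the nightly, the install command at the time of this thread looked roughly like the following (the cu110 index URL is the one current back then; check the official instructions for an up-to-date one):

```shell
pip install --pre torch torchvision \
    -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html
```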


Hi. I tested 3d convolution models with a 3070 using NVIDIA’s official containers, PyTorch 1.7.1, the 1.8 nightly release, and cudnn 8.0.4 and 8.0.5, and all combinations show the same speed.
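For anyone wanting to reproduce this kind of comparison, here is a minimal timing sketch; the layer sizes and iteration counts are arbitrary choices, and it falls back gracefully on CPU-only machines:

```python
import torch

def time_conv3d(dtype=torch.float16, iters=20):
    """Average forward time in ms for one 3D convolution on the current GPU."""
    if not torch.cuda.is_available():
        return None
    conv = torch.nn.Conv3d(16, 32, kernel_size=3, padding=1).cuda().to(dtype)
    x = torch.randn(1, 16, 32, 64, 64, device="cuda", dtype=dtype)
    for _ in range(5):          # warm-up so cudnn selects its algorithm first
        conv(x)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        conv(x)
    end.record()
    torch.cuda.synchronize()    # wait for the GPU before reading the timers
    return start.elapsed_time(end) / iters

ms = time_conv3d()
print(f"avg forward: {ms:.3f} ms" if ms is not None else "no CUDA device found")
```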

Also, I was expecting similar or better performance than a 2080 Ti in FP16 inference (I don’t know if that expectation is right, though!), but I’m instead observing about 75% of its perf right now.

I know this isn’t a benchmark post, but just to say: this software will only technically support the RTX 3000 series in the “short” term.