Is it possible to use an RTX 3080 GPU with PyTorch 1.6?

I have an RTX 3080 GPU on Windows 10. Is it possible to use this GPU with
PyTorch 1.6 and whichever version of CUDA is necessary?

The reason I am asking is that the command
conda install pytorch torchvision cudatoolkit=11 -c pytorch-nightly
(recommended in the GitHub issue https://github.com/pytorch/pytorch/issues/45028)
installed a 1.8 dev nightly build that raised errors I was not seeing before in the torchvision.models.detection module. The nightly adds a new file, anchor_utils.py: the authors seem to have refactored the anchor generator into its own file, whereas previously the AnchorGenerator class lived in rpn.py.
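
For reference, the relevant import in the two layouts is roughly the following (just a sketch; I am assuming the class keeps the name AnchorGenerator in both files):

# Version-agnostic import: the anchor generator moved from rpn.py to
# anchor_utils.py in the nightly torchvision builds.
try:
    from torchvision.models.detection.anchor_utils import AnchorGenerator
except ImportError:
    from torchvision.models.detection.rpn import AnchorGenerator

# Example usage: one sizes/aspect_ratios tuple per feature map
anchor_generator = AnchorGenerator(
    sizes=((32, 64, 128, 256, 512),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)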

Thanks

You could use the 1.6 binaries with e.g. CUDA 10.2, but they would then JIT-compile the CUDA kernels during the first CUDA operation, which adds a massive overhead to your runs. I would therefore recommend adapting the source code and using the nightly binaries with CUDA 11.
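
As a quick way to see why the JIT step kicks in: the RTX 3080 reports compute capability (8, 6), i.e. sm_86, and the CUDA 10.2 binaries predate Ampere, so their kernels have to be compiled from PTX on first use. A small sanity check (a sketch; assumes device index 0):

import time
import torch

print(torch.__version__)                    # installed PyTorch version
print(torch.version.cuda)                   # CUDA version the binaries were built with
print(torch.cuda.get_device_capability(0))  # RTX 3080 should report (8, 6), i.e. sm_86

# The first CUDA operation is where any JIT compilation happens,
# so timing it gives a rough idea of the overhead.
t0 = time.time()
x = torch.randn(8, 8, device="cuda")
torch.cuda.synchronize()
print(f"first CUDA op took {time.time() - t0:.1f}s")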

What is the latest stable build of PyTorch that supports the RTX 3080 / CUDA 11?

1.7.0 is the latest stable release, which ships with CUDA 11.0.
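
The matching install command should be along the lines of
conda install pytorch==1.7.0 torchvision cudatoolkit=11.0 -c pytorch
(check the selector on pytorch.org for the exact torchvision/torchaudio pins). A quick smoke test afterwards (again a sketch, assuming device index 0):

import torch

print(torch.__version__)              # expect 1.7.0
print(torch.version.cuda)             # expect 11.0
print(torch.cuda.is_available())      # expect True
print(torch.cuda.get_device_name(0))  # should list the RTX 3080

# run one kernel end-to-end to make sure the GPU actually works
x = torch.randn(1024, 1024, device="cuda")
print((x @ x).sum().item())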

Yeah. So this can be interpreted as: the RTX 30 series is still not well supported by relatively “old” PyTorch versions (< 1.7.0). Am I right?

Yes, and since the framework is not forward compatible, CUDA 11 support will not be backported to already released versions.
