CUDA freezes! Using an RTX 3080 to train a PyTorch model

I have an RTX 3080 and I want to use it to train a deep learning model. The PyTorch build I installed uses CUDA 10.2. However, when I train the model, it freezes as soon as it is loaded onto the GPU. Is there any solution?

If you’ve installed the CUDA 10.2 binaries, the first CUDA operation will call into the JIT compiler and compile the kernels for your compute architecture, which can take a very long time and look like a freeze.
You could use the nightly binaries instead, which are built with CUDA 11 and support sm_80, via:

conda install pytorch torchvision cudatoolkit=11.0 -c pytorch-nightly
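As a quick sanity check (a sketch, assuming a CUDA-enabled PyTorch install), you can compare your GPU's compute capability against the architectures your PyTorch binary was compiled for; if the GPU's architecture is missing from the list, the first CUDA operation will trigger JIT compilation:

```python
import torch

# Compare the GPU's compute capability against the architectures
# this PyTorch build ships precompiled kernels for. A mismatch
# means kernels are JIT-compiled on first use, which can look
# like a freeze.
if torch.cuda.is_available():
    print(torch.cuda.get_device_capability(0))  # RTX 3080 reports (8, 6), i.e. sm_86
    print(torch.cuda.get_arch_list())           # e.g. [..., 'sm_80', 'sm_86']
```

If `get_arch_list()` does not include an entry matching (or compatible with) your device's capability, installing a build with a newer CUDA toolkit, as above, avoids the JIT step.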

Thank you very much. I solved the problem this way.