GPU Compatibility Error: NVIDIA GeForce GT 710 with CUDA Capability 3.5 not supported by PyTorch

Hello everyone, I’m new here and new to programming. I downloaded a voice cloning program and was using it successfully on Colab. However, when I tried to run it on my PC, I encountered the following error:

runtime\python.exe extract_feature_print.py cuda:0 1 0 0 C:\Users\Rick\Desktop\RVC/logs/test11
['extract_feature_print.py', 'cuda:0', '1', '0', '0', 'C:\Users\Rick\Desktop\RVC/logs/test11']
C:\Users\Rick\Desktop\RVC/logs/test11
load model(s) from hubert_base.pt
C:\Users\Rick\Desktop\RVC\runtime\lib\site-packages\torch\cuda\__init__.py:122: UserWarning:
Found GPU0 NVIDIA GeForce GT 710 which is of cuda capability 3.5.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability supported by this library is 3.7.
As I mentioned earlier, I don't have much programming knowledge. I tried downgrading PyTorch; my current version is '1.11.0+cu113'. I also installed CUDA 11.4, the highest version supported by my GPU, the NVIDIA GeForce GT 710. I understand the error message is saying that my GPU is too old and doesn't meet the required compute capability of at least 3.7. I thought installing an older version of PyTorch would resolve the issue, but it didn't. The program runs through Gradio, and I'm using Python 3.9.

I would greatly appreciate any guidance or suggestions to overcome this issue. Thank you!


PS: I'm using a Windows 10 operating system.

The currently built and released PyTorch binaries support NVIDIA GPUs with a compute capability of 3.7 to 9.0 (if you are using the binaries with the CUDA 11.8 or 12.1 runtime). Your GPU is too old, but you might be able to build PyTorch from source for it.
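To make the capability check concrete, here is a minimal sketch of the comparison PyTorch's warning implies. The names `MIN_CAPABILITY` and `pick_device` are illustrative, not part of the PyTorch API; the `(major, minor)` tuple is what `torch.cuda.get_device_capability()` would report for a GPU.

```python
# Illustrative sketch of the compute-capability gate (names are hypothetical,
# not PyTorch API). Recent binaries require capability >= 3.7.
MIN_CAPABILITY = (3, 7)

def pick_device(capability):
    """Return 'cuda:0' if the GPU meets the minimum capability, else 'cpu'."""
    # Tuple comparison handles major/minor correctly: (3, 5) < (3, 7)
    return "cuda:0" if capability >= MIN_CAPABILITY else "cpu"

# A GeForce GT 710 reports capability (3, 5), so it falls back to CPU:
print(pick_device((3, 5)))  # cpu
print(pick_device((3, 7)))  # cuda:0
```

If the script accepts a device string as its first argument, passing `cpu` instead of `cuda:0` may let it run (slowly) on the CPU instead of failing on the unsupported GPU.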

The CUDA version I have installed is 11.4, and the PyTorch version is '1.11.0+cu113'. However, I'm still getting the same error. I'm very new to this and have limited computer knowledge. Am I understanding correctly that you're suggesting I compile from the source code? Is this process complicated? Is there any other way to resolve this, or is my GPU simply too outdated for this?

Your GPU is too old for any recent PyTorch binary installed via pip or conda, so you could try to build PyTorch from source explicitly for your GPU. How hard that is depends on your experience building open-source projects: while I might claim it's easy, I should also mention that I constantly build PyTorch from source for testing and development.
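For reference, a rough sketch of what a from-source build targeting compute capability 3.5 looks like. This assumes a working build environment (compiler, CUDA toolkit); the exact prerequisites depend on your platform, so treat this as an outline rather than a complete recipe. `TORCH_CUDA_ARCH_LIST` is the environment variable the PyTorch build reads to decide which GPU architectures to generate code for.

```shell
# Outline of a from-source PyTorch build for an old GPU (capability 3.5).
# Prerequisites (compiler, CUDA toolkit version) vary by platform.
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
pip install -r requirements.txt

# Generate CUDA kernels only for the GT 710's architecture.
# On Windows use:  set TORCH_CUDA_ARCH_LIST=3.5
export TORCH_CUDA_ARCH_LIST="3.5"

python setup.py install
```

Expect the build to take a long time on a modest machine, and note that even if it succeeds, a GT 710 is slow enough that CPU execution may not be much worse for this workload.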