CUDA/Torch Problem while using Oogabooga/PrivateGPT

I was trying to generate text using the above-mentioned tools, but I'm getting the following error:

"RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1."

I'm using an old NVIDIA Quadro K2200 on Windows 10. I have already downgraded the CUDA Toolkit to 12.1 (because it was the last version listed as compatible with Torch), and I am using the PyTorch 1.2.1 build for CUDA 11.8 (as I thought it might support the older GPU). Both of those steps were separate solutions I found while googling lots of different ways to fix this; nothing has helped so far.

It probably has something to do with my version of Torch and compatibility with my graphics card, but as you can tell I am a complete newbie at this and lost at sea.

Does anyone have a solution to this? If you need any more information, please let me know.

Thanks in advance!

Your Quadro K2200 should have a compute capability of 5.0 and would thus be supported in our binary builds. What output do you get when installing the latest PyTorch binary with CUDA 12.1 as well as 11.8 for:

python -c "import torch; print(torch.randn(1).cuda())"

I'm getting the output: "tensor([-0.6384], device='cuda:0')".

I haven't installed anything else; it's the same setup as before (CUDA 12.1, latest PyTorch binary for CUDA 11.8). But this is probably what you meant, right?

Thanks for confirming the CUDA usage in PyTorch itself. Since PyTorch is able to use your GPU properly, a third-party library you've installed might not have built its CUDA kernels for compute capability 5.0, and thus fails.
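To narrow that down, you can compare the GPU's compute capability against the architectures a given build actually compiled kernels for. A minimal sketch (the `arch_supported` helper below is my own illustration, not a PyTorch API; `torch.cuda.get_device_capability()` and `torch.cuda.get_arch_list()` are real PyTorch calls, and the example arch list is only illustrative):

```python
# My own helper (not part of PyTorch): check whether a build's compiled
# kernel list covers a given compute capability.
def arch_supported(capability, arch_list):
    """capability: (major, minor) tuple, e.g. (5, 0) for a Quadro K2200.
    arch_list: strings like 'sm_50', as returned by torch.cuda.get_arch_list()."""
    target = "sm_%d%d" % capability  # (5, 0) -> 'sm_50'
    return target in arch_list

# Illustrative arch list only -- your wheel may report something different:
print(arch_supported((5, 0), ["sm_50", "sm_60", "sm_70", "sm_80"]))  # True

# With a live install you would run instead:
# import torch
# print(arch_supported(torch.cuda.get_device_capability(0),
#                      torch.cuda.get_arch_list()))
```

If this prints True for PyTorch itself but the error persists inside Oobabooga/PrivateGPT, the missing sm_50 kernels are in one of their dependencies rather than in PyTorch.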