I have an old CUDA card, which is detected but unsupported. What can I do?

I have a GeForce GTX 670, which has CUDA compute capability 3.0. torch.cuda.is_available() returns True; however, the card is no longer supported by current PyTorch builds, and I get a runtime error when I use a model after moving it to CUDA: "RuntimeError: CUDA error: no kernel image is available for execution on the device".

I do not want to strip the CUDA support out of the project code just to run it on my machine, and I also need to use a modern version of PyTorch. Is there any way to work around this error without substantial code modifications?

Maybe something in the spirit of a use_cuda flag, defined somewhere in the project (e.g. in a gpu_utils.py file) and used to create tensors directly on the right device, could be useful:

import torch

use_cuda = torch.cuda.is_available()  # set to False to force CPU on an unsupported card

FloatTensor = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
LongTensor = torch.cuda.LongTensor if use_cuda else torch.LongTensor
ByteTensor = torch.cuda.ByteTensor if use_cuda else torch.ByteTensor

Or just wrap all ".cuda()" calls in a condition on use_cuda.
(You will also need to take extra care if you load models trained on a GPU onto a CPU-only machine…)
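A minimal sketch of this flag-plus-wrapper idea, assuming the project can route all its ".cuda()" calls through one helper (the names gpu_utils and maybe_cuda are illustrative, not an established API):

```python
import torch

# Hypothetical gpu_utils.py: one flag decides where tensors and modules live.
# Set use_cuda = False to force CPU even when an (unsupported) card is detected;
# on a supported setup you would use torch.cuda.is_available() instead.
use_cuda = False

def maybe_cuda(x):
    """Move a tensor or module to the GPU only when use_cuda is True."""
    return x.cuda() if use_cuda else x

# The rest of the project calls maybe_cuda(...) instead of .cuda() directly:
weights = maybe_cuda(torch.randn(3, 3))

# Loading a GPU-trained checkpoint on a CPU-only machine also needs
# map_location, e.g.:
# state_dict = torch.load("model.pt", map_location="cpu")
```

With this in place, switching between GPU and CPU runs is a one-line change to the flag rather than a search-and-replace through the codebase.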

Hope that helps!


Another approach would be to use the .to(device) method and define the device once at the beginning.
This avoids repeated if conditions.
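A minimal sketch of that approach, assuming a single device object chosen at startup (the force_cpu switch is an illustrative name for working around a detected-but-unsupported card):

```python
import torch

# Pick the device once; flip force_cpu to True to ignore an unsupported card
# without touching the rest of the code.
force_cpu = True
device = torch.device("cuda" if torch.cuda.is_available() and not force_cpu
                      else "cpu")

model = torch.nn.Linear(4, 2).to(device)   # moves all parameters to `device`
batch = torch.randn(8, 4, device=device)   # created directly on `device`
out = model(batch)

# Loading a GPU-trained checkpoint follows the same device choice:
# state_dict = torch.load("model.pt", map_location=device)
```

Because every tensor and module is created on (or moved to) `device`, the same code runs unchanged on CPU and GPU, with no per-call `if use_cuda` branches.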
