I have an issue using CUDA with PyTorch; I get the following error from both of these calls:
print(torch.randn(1).cuda()) / print(torch.rand(5, 3, device=torch.device('cuda')))
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
- installed packages:
python-cuda-12.1.0-2 (I wondered whether this might be the issue, but I downgraded CUDA to 12.1 as well)
(this build does work on another user's GPU)
['sm_52', 'sm_53', 'sm_60', 'sm_61', 'sm_62', 'sm_70', 'sm_72', 'sm_75', 'sm_80', 'sm_86', 'sm_89', 'sm_90', 'compute_90']
Driver Version: 545.29.02
CUDA Version: 12.3
Linux version: 6.6.1-arch1-1, not using conda
GPU: NVIDIA GTX 960M, Maxwell architecture (sm_50, sm_52, or sm_53; the desktop 900 series are sm_52 according to the page linked here, but I believe the 960M is a GM107 part, which would be sm_50)
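If the 960M really is sm_50, that would explain the error: the arch list above starts at sm_52, so the build ships no kernel image the GPU can run. On a working install, torch.cuda.get_device_capability(0) and torch.cuda.get_arch_list() report the two sides of this comparison. As a rough illustration (build_covers is a hypothetical helper, not a torch API, and it only sketches CUDA's compatibility rules: cubins run on the same major version at an equal or newer minor revision, PTX can JIT for anything newer):

```python
def build_covers(arch_list, cc):
    """Return True if any entry in arch_list can serve a GPU with
    compute capability cc = (major, minor)."""
    target = cc[0] * 10 + cc[1]
    for entry in arch_list:
        kind, num = entry.split("_")
        num = int(num)
        if kind == "sm":
            # cubins: binary-compatible within the same major version,
            # for GPUs at the same or a later minor revision
            if num // 10 == target // 10 and num <= target:
                return True
        elif kind == "compute":
            # PTX: can be JIT-compiled for any equal-or-newer architecture
            if num <= target:
                return True
    return False

arch_list = ["sm_52", "sm_60", "sm_86", "sm_90", "compute_90"]
print(build_covers(arch_list, (5, 0)))  # → False: no kernel image for sm_50
print(build_covers(arch_list, (5, 2)))  # → True: desktop GTX 960 is fine
```

That would also match the build working on another user's GPU: any card of sm_52 or newer is covered by the list, while an sm_50 card is not.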
I downgraded CUDA to 12.1, but as discussed here*1, CUDA 12.3 seems to be compatible, and matching the python-cuda version didn't change anything.
*1 https://discuss.pytorch.org/t/question-i-have-a-question-about-installing-pytorch/191829/6
I wanted to build from source, but it got confusing whether I should add build arguments and, if so, which ones.
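For what it's worth, my understanding is that in the standard PyTorch source build the target architectures are controlled by the TORCH_CUDA_ARCH_LIST environment variable rather than by setup.py arguments; a minimal sketch, assuming the GPU really is compute capability 5.0:

```shell
# Sketch only: limit the source build to the 960M's architecture.
# TORCH_CUDA_ARCH_LIST selects which SM targets get compiled in.
export TORCH_CUDA_ARCH_LIST="5.0"
python setup.py develop
```

Restricting the list to one architecture also keeps the build considerably shorter than compiling every SM target.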
I tried installing via pip, but I kept getting a cache error (or something similar); I may be able to download the wheel with wget and install it locally.
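If the cache is the problem, two things I might try (the cu121 index URL is PyTorch's official wheel index; whether those wheels include sm_50 is a separate question):

```shell
# Bypass pip's cache entirely:
pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cu121

# Or download the wheel first, then install from the local file:
pip download torch --index-url https://download.pytorch.org/whl/cu121 -d ./wheels
pip install ./wheels/torch-*.whl
```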
I have no issue when running on device=torch.device('cpu'), though.