It seems the error is thrown from the kernels compiled with nvcc in YOLO2 (line of code), not from PyTorch directly.
Could you try setting -arch to -arch=sm_61, as it seems your M1200 is built on the Pascal architecture (doc)?
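If you're unsure which value to use, one option is to ask PyTorch for the compute capability the card reports and derive the sm_XX flag from that, rather than hard-coding it (a minimal sketch; where exactly -arch gets passed depends on the YOLO2 build script):

import torch

# Print the -arch value nvcc should receive for the GPU PyTorch sees.
# A Pascal card with compute capability 6.1 would print "-arch=sm_61".
major, minor = torch.cuda.get_device_capability(0)
print(f"-arch=sm_{major}{minor}")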
I'm facing the same error here.
I'm using Titan Xp GPUs to run neural-motifs.
I have both cuda-10.0 and cuda-9.0 installed, but the driver is for cuda-10.0.
Any idea?
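In case it helps to compare setups, a small diagnostic sketch that shows which CUDA runtime the installed PyTorch binary was built against (the version nvidia-smi prints is the driver's, which can differ from the toolkits installed on disk):

import torch

print(torch.__version__)              # installed PyTorch version
print(torch.version.cuda)             # CUDA runtime this binary was compiled against
print(torch.cuda.is_available())      # whether the driver/runtime pair initializes
print(torch.cuda.get_device_name(0))  # e.g. "TITAN Xp"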
I have been facing this problem for two days since I am learning PyTorch with the official PyTorch tutorial. I am stuck at
y = torch.ones_like(x, device=device)
when Python 3.6 reported this CUDA error.
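For context, the surrounding tutorial code looks roughly like this (reproduced from memory, with x as the small CPU tensor created earlier in the tutorial):

import torch

x = torch.rand(5, 3)                        # small CPU tensor from earlier in the tutorial
if torch.cuda.is_available():
    device = torch.device("cuda")           # a CUDA device object
    y = torch.ones_like(x, device=device)   # this line triggers the CUDA error for me
    x = x.to(device)                        # move x onto the GPU as well
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))        # move the result back to the CPU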
I searched the internet and found no answer.
That’s disappointing …
I am on CentOS 7 with a GT650M GPU and CUDA 10.1.
nvidia-smi information:
Your GT650M has compute capability 3.0 based on this table, which isn't supported by the prebuilt binaries anymore. You could build from source following these instructions.
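If you want to double-check this on your machine before building, a quick sketch (assuming a PyTorch version recent enough to expose torch.cuda.get_arch_list(); older releases don't have it):

import torch

# Compare the GPU's compute capability with the architectures the installed
# binary was compiled for; if its sm_XX entry is missing, the prebuilt wheel
# has no kernels for this card and a source build is needed.
major, minor = torch.cuda.get_device_capability(0)   # a GT650M reports (3, 0)
needed = f"sm_{major}{minor}"
built_for = torch.cuda.get_arch_list()               # e.g. ['sm_37', 'sm_50', ...]
print(needed, "supported by this build:", needed in built_for)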