I’m building a network to run on CUDA, and I’ve always used an NVIDIA GTX 850M with 2 GB, which never gave me any problems. Since I need more memory, I’m trying out an NVIDIA GT 730 with 4 GB.
When I try to run my network I get this error:
RuntimeError: cuda runtime error (8) : invalid device function at /py/conda-bld/pytorch_1493680494901/work/torch/lib/THC/generic/THCTensorMathPairwise.cu:40
It happens on a line where I simply subtract a scalar from a tensor wrapped in a Variable.
I’m sure it’s not a problem with my code, since everything works perfectly on the 850M, so it’s probably a CUDA/PyTorch compatibility issue.
Any suggestions? The 730 should be supported, since its compute capability is 3.5 (while the 850M’s is 3.0).
If you swapped the card into the same system, you should reinstall PyTorch.
To reduce compilation time and binary size, the CUDA code is only compiled for the GPUs detected at build time, so if you change GPUs, you need to recompile.
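The check that fails can be sketched like this (the helper is illustrative, not PyTorch’s actual API): a binary only ships kernels for the architectures it was compiled for, so a GPU whose compute capability is outside that list hits "invalid device function". The arch list and capabilities below use the numbers from this thread, not values I verified.

```python
# Hypothetical helper mirroring what happens at kernel launch: a build
# contains kernels only for the architectures (sm_XX) it was compiled
# for, so any other GPU gets "invalid device function".
def kernels_available(device_capability, compiled_archs):
    major, minor = device_capability
    return f"sm_{major}{minor}" in compiled_archs

# A build made on a machine with only the GTX 850M (capability 3.0, per
# this thread) has no kernels for a Fermi-based GT 730 variant (2.1):
compiled = ["sm_30"]                         # assumed arch list of the old build
print(kernels_available((3, 0), compiled))   # True: the 850M still works
print(kernels_available((2, 1), compiled))   # False: the GT 730 fails
```

Recompiling on the machine with the new card puts its architecture back into that list.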
Thanks, compiling from source worked, and in fact it detected the 2.1 variant; I didn’t know that one existed.
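For reference, when building from source you can also list the target architectures explicitly via the `TORCH_CUDA_ARCH_LIST` environment variable instead of relying on auto-detection; the exact values here are just an example for this thread’s cards:

```shell
# Assumed build configuration: compile kernels for both the Fermi-based
# GT 730 variant (2.1) and the old card's architecture (3.0).
export TORCH_CUDA_ARCH_LIST="2.1;3.0"
# then, from the pytorch source tree:
# python setup.py install
```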
Now I get this error when using torch.nn.UpsamplingBilinear2d:
RuntimeError: cuda runtime error (7) : too many resources requested for launch at /home/user/pytorch/torch/lib/THCUNN/generic/SpatialUpSamplingBilinear.cu:63
I also get this error on a TX2:
RuntimeError: cuda runtime error (7) : too many resources requested for launch at /home/user/pytorch/torch/lib/THCUNN/generic/SpatialUpSamplingBilinear.cu:63
How did you solve this problem? Could you give me a hand? I’d be grateful.
I didn’t solve it. I just stopped using that GPU because it wasn’t supported. All the other (more recent) GPUs I’ve used haven’t given me any problems.