CUDA error when changing the machine

Hello! I have these lines of code (part of a larger code, but this is where I am getting the error):

from torch.distributions import MultivariateNormal
prior = MultivariateNormal(torch.zeros(data.size(1)).cuda(), torch.eye(data.size(1)).cuda())

and I am getting this error (for the second line):

File "/path/to/lib/python3.6/site-packages/torch/distributions/", line 149, in __init__
    self._unbroadcasted_scale_tril = torch.cholesky(covariance_matrix)
RuntimeError: CUDA error: no kernel image is available for execution on the device

I ran exactly the same code on a different machine, and it works fine there, so I am not sure what is wrong. Can someone help me? Thank you!


This happens because the PyTorch build you're running does not include compiled kernels for this GPU's architecture.
If you compiled from source, you want to recompile while this GPU is visible (or specify the correct architecture explicitly at build time).
If you use a binary, make sure that you have the right binary for your CUDA version.
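One way to check for the mismatch described above is to compare the GPU's compute capability with the architectures the installed PyTorch binary was compiled for. This is only a sketch: `torch.cuda.get_arch_list()` exists in newer PyTorch releases (roughly 1.7+), so it may not be available on the machine in question, and the `arch_supported` helper is my own name for the comparison.

```python
def arch_supported(capability, arch_list):
    """Return True if a (major, minor) compute capability appears in the
    list of architectures this PyTorch binary ships kernels for,
    e.g. (7, 5) checked against ['sm_37', 'sm_70', 'sm_75']."""
    return ("sm_%d%d" % capability) in arch_list

try:
    import torch  # optional: only needed for the live check below
except ImportError:
    torch = None

if torch is not None and torch.cuda.is_available():
    cap = torch.cuda.get_device_capability(0)   # e.g. (7, 5) on a Turing GPU
    archs = torch.cuda.get_arch_list()          # architectures baked into this binary
    print("GPU capability:", cap)
    print("Binary supports:", archs)
    print("Supported:", arch_supported(cap, archs))
```

If the capability tag is missing from the list, the binary simply has no kernels for that card, which produces exactly the "no kernel image is available" error.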

Thank you for your reply! I am not compiling anything. I am just using Python code from GitHub and running it as-is, except for adding .cuda() to some of the parameters. Here is the link: As I said, adding .cuda() works just fine on the other machine, but not on the one I am using right now. However, the machine I am using right now handles .cuda() for other scripts (I have run lots of code on the GPU on this machine), so I am not sure what is wrong with this particular one.

How did you install PyTorch on that particular machine?
Can you try, in the command line, to create a Tensor, send it to the GPU, and pass it to torch.cholesky? That would confirm whether this is the problematic function.
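The minimal check suggested above could look like the sketch below. The helper name `try_cholesky_on_gpu` is my own, and it uses the same pre-1.8 `torch.cholesky` spelling as the original snippet (newer releases use `torch.linalg.cholesky`); it returns a status string instead of raising so it can be pasted anywhere.

```python
import importlib.util

def try_cholesky_on_gpu(n=4):
    """Run the suggested minimal repro: an n x n identity matrix (which is
    symmetric positive definite), moved to the GPU, then factorized with
    torch.cholesky. Returns a short status string describing the outcome."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if not torch.cuda.is_available():
        return "cuda not available"
    try:
        torch.cholesky(torch.eye(n).cuda())  # the op from the traceback
        return "ok"
    except RuntimeError as exc:
        return "failed: %s" % exc

print(try_cholesky_on_gpu())
```

If this prints "ok", the factorization itself works and the problem lies elsewhere; if it reproduces "no kernel image is available", that pins the failure on this build/GPU combination rather than on the script from GitHub.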