Cannot initialize CUDA without ATen_cuda library

I tried to run my Python code, but I got this error:

/Users/Apple/anaconda3/bin/python /Users/Apple/Downloads/
Traceback (most recent call last):
  File "/Users/Apple/Downloads/", line 39, in <module>
  File "/Users/Apple/anaconda3/lib/python3.6/site-packages/torch/nn/modules/", line 258, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/Users/Apple/anaconda3/lib/python3.6/site-packages/torch/nn/modules/", line 185, in _apply
  File "/Users/Apple/anaconda3/lib/python3.6/site-packages/torch/nn/modules/", line 191, in _apply
     = fn(
  File "/Users/Apple/anaconda3/lib/python3.6/site-packages/torch/nn/modules/", line 258, in <lambda>
    return self._apply(lambda t: t.cuda(device))
RuntimeError: Cannot initialize CUDA without ATen_cuda library. PyTorch splits its backend into two shared libraries: a CPU library and a CUDA library; this error has occurred because you are trying to use some CUDA functionality, but the CUDA library has not been loaded by the dynamic linker for some reason.  The CUDA library MUST be loaded, EVEN IF you don't directly use any symbols from the CUDA library! One common culprit is a lack of -Wl,--no-as-needed in your link arguments; many dynamic linkers will delete dynamic library dependencies if you don't depend on any of their symbols.  You can check if this has occurred by using ldd on your binary to see if there is a dependency on * library.

Any idea?

How did you install PyTorch, from source or using conda?
What does torch.cuda.is_available() return?
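For reference, you can check the build and device status directly from Python (these are all standard torch calls):

```python
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # None for a CPU-only build
print(torch.cuda.is_available())  # False without a usable GPU/driver
print(torch.cuda.device_count())  # 0 on a machine without an NVIDIA GPU
```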

Thanks for your reply. torch.cuda.is_available() returns False.
I’ve installed PyTorch using conda install pytorch torchvision -c pytorch

But I assume you do have a GPU installed?
Could you check the NVIDIA kernel drivers?
It looks like your GPU isn’t properly installed.

I have an old iMac (2011) and I am not sure that I have a GPU on board… how can I check?
Also, in case I don’t have a GPU, is there a way to emulate one? (for development and learning purposes)

I’ve never used a Mac, but apparently you can check it under “About This Mac > More Info > Graphics/Displays”.
If you have a suitable GPU, note that you would have to build PyTorch from source to get GPU support on a Mac.
You can find the build instructions here.

Also, I’m not familiar with emulating a GPU.
You could have a look at Google Colab, which provides a free GPU for your notebook for a certain amount of time.

Mac binaries do not ship with CUDA support. Moreover, from this page, it seems your machine doesn’t have a CUDA-capable GPU anyway. However, your code calls model.cuda(), hence the error.
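One way to keep such code portable is to guard the CUDA call. A minimal sketch, where the nn.Linear is just a stand-in for the actual model:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for the real model

# Only move the model to the GPU when CUDA is actually usable,
# so CPU-only builds don't raise the error above.
if torch.cuda.is_available():
    model = model.cuda()
```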

The error message is misleading indeed. I have opened an issue at

Thanks for the info. Now my code looks like this for every CUDA call:

if torch.cuda.is_available():
    labels = Variable(torch.from_numpy(y_train).cuda())
else:
    labels = Variable(torch.from_numpy(y_train))

You should just define one device object and use that throughout your script, though… This is probably helpful
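A sketch of that pattern, assuming a NumPy array as in the snippet above (the data is a placeholder):

```python
import numpy as np
import torch

# Define the device once and reuse it everywhere; the same script
# then runs unchanged on CPU-only and CUDA machines.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

y_train = np.array([0, 1, 1, 0])               # placeholder data
labels = torch.from_numpy(y_train).to(device)  # replaces the if/else branches
```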

Hey @ptrblck, I am having the same exact error after building PyTorch from source. Do you have any idea why this might be happening?

How did you build PyTorch and which OS are you using?
Also, which PyTorch version are you building (master or an older tag)?

I built PyTorch from master on Ubuntu 20.04 using this gist I made:

Not sure if Ubuntu 20.04 is causing the issue, but the provided script works fine in a Docker container using Ubuntu 16.04.


Thanks for your message.
So I started the whole process over and was able to successfully build from source!!
I don’t know why it was not working before.
Maybe I missed an LD_LIBRARY_PATH setting:
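For reference, a typical setting looks like the line below; /usr/local/cuda is the usual CUDA install prefix, but it may differ on your system:

```shell
# Make the CUDA runtime libraries visible to the dynamic linker.
# Adjust /usr/local/cuda to your actual CUDA install location.
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
```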


Thanks!