Cublas runtime error : library not initialized at work/aten/src/THC/THCGeneral.cpp:250

Hello. I’m having some trouble with running pytorch with GPU.

The code is as below:
import torch
device = 'cuda'
a = torch.Tensor([[1, 2]]).to(device)
fc1 = torch.nn.Linear(2, 4).to(device)

It is a very simple code, however, it came up with this error message:
Traceback (most recent call last):
File "", line 6, in
File "lib/python2.7/site-packages/torch/nn/modules/", line 489, in __call__
result = self.forward(*input, **kwargs)
File "lib/python2.7/site-packages/torch/nn/modules/", line 67, in forward
return F.linear(input, self.weight, self.bias)
File "lib/python2.7/site-packages/torch/nn/", line 1352, in linear
ret = torch.addmm(torch.jit._unwrap_optional(bias), input, weight.t())
RuntimeError: cublas runtime error : library not initialized at …conda/conda-bld/pytorch_1549614443593/work/aten/src/THC/THCGeneral.cpp:250

The code runs fine with CPU.
Also, simple operations such as bias+bias work normally.
The error message seems strange, since the file …conda/conda-bld/pytorch_1549614443593/work/aten/src/THC/THCGeneral.cpp doesn't even exist on my machine.

I found suggested fixes such as removing ~/.nv or monitoring GPU usage with nvidia-smi, but they did not work for me.

Any help would be appreciated.
Thank you!


I would make sure that you only have one CUDA install on your system, and that your version of PyTorch is built for that same CUDA version.
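A quick way to sketch that check (the version strings below are placeholders for what `torch.version.cuda` and `nvcc --version` report on your machine; the helper name `major_minor` is just for illustration):

```python
# Compare the CUDA version PyTorch was built against with the system
# toolkit version. In practice:
#   pytorch_cuda = torch.version.cuda
#   system_cuda  = parsed from the output of `nvcc --version`
def major_minor(version):
    # Reduce a version string like "10.0.130" to its (major, minor) pair.
    major, minor = version.split(".")[:2]
    return (int(major), int(minor))

pytorch_cuda = "10.0"  # placeholder for torch.version.cuda
system_cuda = "10.0"   # placeholder for the nvcc toolkit version

# The two should agree at the major.minor level; a mismatch here is a
# common cause of cuBLAS initialization failures.
print(major_minor(pytorch_cuda) == major_minor(system_cuda))
```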

Thank you for your reply.
My home directory is on NFS, and an NFS file-locking issue prevented ~/.nv from being written.
I got a hint from the link:
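For anyone hitting the same NFS problem, one workaround is to move the CUDA compute cache off the NFS home directory using the documented CUDA_CACHE_PATH environment variable (the local path below is only an example):

```shell
# Point the CUDA JIT/compute cache at local disk instead of the
# NFS-mounted home directory (the default location is ~/.nv/ComputeCache).
export CUDA_CACHE_PATH=/tmp/nv-compute-cache
mkdir -p "$CUDA_CACHE_PATH"
```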
