"cublas runtime error : library not initialized" when running nn.Linear with Cuda

Hey all,
I keep getting this error when trying to use nn.Linear with cuda:
cublas runtime error : library not initialized at ../aten/src/THC/THCGeneral.cpp:216

On the CPU it runs fine though.
PyTorch is 1.2.0, Python 3.7, CUDA 10.1, cuDNN 7.6.1, and my GPU is a GTX 970. I’m running macOS and I installed PyTorch from source.

Anyone know why this might be happening or how to debug this further? I’m a bit new to these tools.


Could you try some of the suggestions from this issue?

Thank you for the reply! It appears that most of the info in that thread is about systems with two GPUs, which mine is not. The only potentially relevant suggestion I’m seeing is about removing the .nv cache directory, which I don’t seem to have on my system.

I’m wondering whether my issue might be something like an incomplete install or missing dependencies.
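If it helps with debugging, here’s a quick sanity check of the CUDA build itself, independent of nn.Linear (a rough sketch; the device index 0 assumes a single-GPU machine):

import torch

# Confirm the CUDA build sees the driver and the GPU at all.
print(torch.cuda.is_available())            # should be True
print(torch.cuda.get_device_name(0))        # should report the GTX 970
print(torch.cuda.get_device_capability(0))  # the GTX 970 should be (5, 2)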

If I use

import torch
import torch.nn as nn

linear = nn.Linear(2, 2)
x = torch.randn(2, 2)
print(linear(x))

everything works fine, but if instead I use

linear = nn.Linear(2, 2).cuda()
x = torch.randn(2, 2).cuda()
print(linear(x))

I get

RuntimeError                              Traceback (most recent call last)
<ipython-input-3-81612b3daf91> in <module>
      4 linear = nn.Linear(2, 2).cuda()
      5 x = torch.randn(2, 2).cuda()
----> 6 print(linear(x))

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    537             result = self._slow_forward(*input, **kwargs)
    538         else:
--> 539             result = self.forward(*input, **kwargs)
    540         for hook in self._forward_hooks.values():
    541             hook_result = hook(self, input, result)

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/linear.py in forward(self, input)
     85 
     86     def forward(self, input):
---> 87         return F.linear(input, self.weight, self.bias)
     88 
     89     def extra_repr(self):

~/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in linear(input, weight, bias)
   1363     if input.dim() == 2 and bias is not None:
   1364         # fused op is marginally faster
-> 1365         ret = torch.addmm(bias, input, weight.t())
   1366     else:
   1367         output = input.matmul(weight.t())

RuntimeError: cublas runtime error : library not initialized at ../aten/src/THC/THCGeneral.cpp:216
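Since the traceback points at torch.addmm, it might help to narrow things down with a bare matrix multiply, which as far as I understand goes through the same cuBLAS GEMM path (a minimal sketch, not specific to my model):

import torch

# A plain GEMM on the GPU; if this raises the same cublas error,
# the problem is cuBLAS initialization in general, not nn.Linear.
a = torch.randn(2, 2, device='cuda')
b = torch.randn(2, 2, device='cuda')
print(torch.mm(a, b))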

Hello, I hit this exact error trying to get pytorch/examples/mnist/main.py to work. The print(linear(x)) code above generates the same error. I’m on a 2014 MacBook Pro with a GTX 750M. I built PyTorch from GitHub source a couple of days ago and am rebuilding just now to see if it makes any difference. Would love to see this fixed! Thanks!

import sys
import torch
import torchvision

print(sys.version)
print(torch.__version__)
print(torchvision.__version__)
print(torch.cuda.is_available())
print(torch.backends.cudnn.enabled)
print(torch.version.cuda)
print(torch.backends.cudnn.version())

3.7.3 (default, Mar 30 2019, 03:44:34)
[Clang 9.1.0 (clang-902.0.39.2)]
1.2.0a0+5395db2
0.3.0a0+487c9bf
True
True
10.1
7601

Just rebuilt PyTorch from GitHub source. Same error.
1.2.0a0+30e03df
Thanks!

Well I’m glad I’m not alone at least. Sorry you’re suffering this too though.

John_G, are you still encountering this? It’s an ongoing problem for me. Perhaps I should open an issue on GitHub.
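If I do open an issue, I believe the report template asks for the output of the environment collection script that ships with PyTorch, so something like this might be worth including (a sketch; it just runs the bundled script):

# Prints OS, compiler, Python, CUDA/cuDNN versions, GPU models, etc.
from torch.utils.collect_env import main
main()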