Cublas Runtime Error on GPU

This is the environment I’m running in:

  1. CUDA 10.1
  2. Python 3.7
  3. Titan X
  4. PyTorch 1.5.1

The network used to train normally, but after I added a method to my model that computes graph normalizations, this error started to happen.

Running with CUDA_LAUNCH_BLOCKING=1 gives this traceback:

Traceback (most recent call last):
  File "", line 201, in <module>
    cross_validation_with_val_set(model, params)
  File "/afs/", line 110, in cross_validation_with_val_set
    epoch_index=epoch-1, params=params, writer=writer)
  File "/afs/", line 209, in train_test_eval
    train_acc, train_loss = train(model, loaders['train'], opt, params)
  File "/afs/", line 250, in train
    return run_windowed_model(model, loader, opt, params)
  File "/afs/", line 343, in run_windowed_model
    out = model(x_curr)
  File "/afs/", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/afs/", line 12, in forward
    x = self.MLP(x)
  File "/afs/", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/afs/", line 38, in forward
    x = F.dropout(F.relu(self.bn1(self.lin1(x))),p=self.dropout,
  File "/afs/", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/afs/", line 87, in forward
    return F.linear(input, self.weight, self.bias)
  File "/afs/", line 1610, in linear
    ret = torch.addmm(bias, input, weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`

What might be the possible causes? Thank you!

Is your model running fine on the CPU? Since CUDA errors are reported asynchronously, running on the CPU should also give you a better stack trace than the current one.
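A minimal way to check is to move the model and one batch to the CPU and run a single forward pass. A sketch, using a stand-in `nn.Linear` in place of your real model and random data in place of your real batch:

```python
import torch

# Hypothetical stand-ins: substitute your own model and a real batch.
model = torch.nn.Linear(8, 2)
x = torch.randn(4, 8)

# Move everything to the CPU and run one forward pass.
# A shape mismatch or indexing bug now raises a clear Python traceback
# instead of an asynchronous CUDA error.
model = model.cpu()
out = model(x.cpu())
```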

If it's running fine on the CPU, could you check whether you might be running out of memory, and reduce the batch size if possible? Sometimes library errors like this one can mask an actual OOM issue.
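One way to test the OOM hypothesis is to back off the batch size until a forward pass succeeds. A sketch of that idea (`largest_fitting_batch` and the `make_batch` callback are hypothetical names, not part of any library):

```python
import torch

def largest_fitting_batch(model, make_batch, sizes=(64, 32, 16, 8)):
    """Try progressively smaller batch sizes until a forward pass fits.

    `make_batch(bs)` is a user-supplied helper that builds a batch of size bs.
    """
    for bs in sizes:
        try:
            model(make_batch(bs))
            return bs
        except RuntimeError as e:
            # CUDA OOMs (and the cuBLAS errors that can mask them) surface
            # as RuntimeError; free cached blocks and retry with a smaller batch.
            if "out of memory" in str(e) or "CUBLAS" in str(e):
                if torch.cuda.is_available():
                    torch.cuda.empty_cache()
                continue
            raise
    raise RuntimeError("no tested batch size fits in memory")

# Example with a stand-in model on whatever device is available:
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(8, 2).to(device)
bs = largest_fitting_batch(model, lambda b: torch.randn(b, 8, device=device))
```

On the GPU, `torch.cuda.memory_allocated()` before and after the forward pass can also show how close you are to the card's limit.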


Thanks for the reply. It was indeed a memory issue.