Trying to run on CPU instead of CUDA

Hi guys,

I’ve been trying to get CUDA to work, but ended up realising that my GPU driver is too old and my graphics card has a compute capability of 3.0, which is no longer supported as per this discussion.

I’m new to PyTorch and I’m trying to replicate a tutorial. At the start of the notebook I tried:

torch.cuda.is_available = lambda: False
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

but I still get this error:

RuntimeError                              Traceback (most recent call last)
Cell In[153], line 1
----> 1 eval_review("I really don't like this thing at all.")

Cell In[151], line 11, in eval_review(rev)
      8 inputs = tokenizer(rev, return_tensors="pt")
     10 # move the input tensor to the same device as the model
---> 11 inputs = {k: v.to('cuda') for k, v in inputs.items()}
     13 # pass the input through the model to get the output logits
     14 with torch.no_grad():

Cell In[151], line 11, in <dictcomp>(.0)
      8 inputs = tokenizer(rev, return_tensors="pt")
     10 # move the input tensor to the same device as the model
---> 11 inputs = {k: v.to('cuda') for k, v in inputs.items()}
     13 # pass the input through the model to get the output logits
     14 with torch.no_grad():

File ~\anaconda3\lib\site-packages\torch\cuda\__init__.py:247, in _lazy_init()
    245 if 'CUDA_MODULE_LOADING' not in os.environ:
    246     os.environ['CUDA_MODULE_LOADING'] = 'LAZY'
--> 247 torch._C._cuda_init()
    248 # Some of the queued calls may reentrantly call _lazy_init();
    249 # we need to just return without initializing in that case.
    250 # However, we must not let any *other* threads in!
    251 _tls.is_initializing = True

RuntimeError: The NVIDIA driver on your system is too old (found version 10020). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.

Can I just replace all the calls to cuda with cpu, or do I have to install the CPU-only build of PyTorch?
e.g.

# set the model to evaluation mode
model.eval()

def eval_review(rev):
    # set the model to evaluation mode

    inputs = tokenizer(rev, return_tensors="pt")

    # move the input tensor to the same device as the model
    inputs = {k: v.to('cuda') for k, v in inputs.items()}

    # pass the input through the model to get the output logits
    with torch.no_grad():
        outputs = model(**inputs)

    # convert the logits to probabilities using a softmax function
    probs = torch.nn.functional.softmax(outputs.logits, dim=-1)

    # get the predicted label by selecting the index with the highest probability
    predicted_label = torch.argmax(probs, dim=-1)

    # return the predicted label
    return predicted_label.item()

You should be able to replace all to("cuda") calls with to("cpu"), or just remove them. As long as you are not trying to initialize a CUDA context by calling into any CUDA operation, your script should run fine.
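As a sketch of the device-agnostic pattern: define device once and use it everywhere instead of a hard-coded 'cuda' string. The toy Linear model below is a stand-in I've substituted for the tutorial's tokenizer/model (which aren't shown in full here), just to keep the example runnable:

```python
import torch

# Resolve the device once; this falls back to CPU when CUDA is unavailable
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Toy stand-in for the tutorial's classification model (an assumption, not the real model)
model = torch.nn.Linear(4, 2).to(device)
model.eval()

def eval_review(features):
    # move the input tensor to the same device as the model,
    # instead of hard-coding .to('cuda')
    inputs = features.to(device)

    with torch.no_grad():
        logits = model(inputs)

    # convert the logits to probabilities and pick the most likely class
    probs = torch.nn.functional.softmax(logits, dim=-1)
    return torch.argmax(probs, dim=-1).item()

label = eval_review(torch.randn(1, 4))
```

With the HuggingFace model from the tutorial, the same idea applies: `inputs = {k: v.to(device) for k, v in inputs.items()}` after also moving the model with `model.to(device)`.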

Legend ptrblck!

I know it’s a trivial fix for those who are well versed in PyTorch, but thank you!