This is probably a silly question but…
I am installing PyTorch on my laptop, which does not have a GPU. I've been using a pre-installed version of PyTorch on Kaggle and GCP.
I assume I should select ‘None’ for CUDA on the “getting started” page?
I still want access to any methods or libraries that deal with CUDA, like torch.cuda.is_available(), so that I can write my code locally and then run it on a GPU when I need it.
Hey, why don't you try this in Colab with hardware acceleration set to None?
That’s actually not what I’m asking. I want to know if I lose access to any torch packages that involve CUDA interaction if I select ‘None’ for CUDA during local installation.
Colab, if it is anything like GCP or Kaggle's kernels, already has PyTorch installed.
The CPU package will include these methods, so that you can write device-agnostic code.
If you try to execute a CUDA operation, an error will be raised. However, no errors should be raised for utility functions, e.g. checking if a GPU is available.
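Here's a minimal sketch of the device-agnostic pattern (the tensor is just a placeholder for your own data/model): on a CPU-only install, torch.cuda.is_available() simply returns False, so the same script runs unchanged on both machines.

```python
import torch

# Pick CUDA when available, otherwise fall back to CPU.
# On a CPU-only install this returns a CPU device without raising.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder tensor for illustration; .to(device) is a no-op on CPU
# and moves the data to the GPU when one is present.
x = torch.randn(8, 3).to(device)
print(device, x.device)
```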