torch.cuda.is_available() returns different values in different scripts

Dear all,

I have two projects in PyCharm, both using the same interpreter. I have checked and confirmed that the CUDA, CUDA driver, and PyTorch (GPU version) versions are compatible.

GPU: NVIDIA GeForce RTX 2080 Ti
CUDA driver version: 516.94
cudatoolkit: 10.2
Python: 3.8

The first project is able to use CUDA, and torch.cuda.is_available() returns True.

But in the second project it returns False.

Then I tried creating a new environment for the second project and reinstalling torch, CUDA, and Python,
but torch.cuda.is_available() still returns False.

The strange thing is that when I check torch.cuda.is_available() in the Python console, I get True, but in the script I get False.

Can anyone enlighten me on what the underlying issue is?

Your IDE might be using a different virtual environment, or it could be setting some environment variables (e.g. CUDA_VISIBLE_DEVICES to a wrong value), so check your PyCharm setup and which environments are actually used.
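
A quick way to verify this (a minimal sketch, assuming PyTorch is installed in both environments) is to run the same check from the Python console and from the failing script and compare the output:

```python
import os
import sys

import torch

# Compare this output between the Python console and the failing script.
# A different interpreter path or CUDA_VISIBLE_DEVICES value points to the culprit.
print("interpreter:          ", sys.executable)
print("CUDA_VISIBLE_DEVICES: ", os.environ.get("CUDA_VISIBLE_DEVICES", "<not set>"))
print("torch version:        ", torch.__version__)
print("torch CUDA version:   ", torch.version.cuda)
print("cuda available:       ", torch.cuda.is_available())
```

If the run configuration sets CUDA_VISIBLE_DEVICES to an empty string or an invalid device index, torch.cuda.is_available() will return False even though the driver and toolkit are fine.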

Hi, thanks so much! That was indeed the issue; problem solved.