torch.cuda.is_available() returns False when debugging but True when executed in the console

I’ve checked the driver, CUDA, and torch versions over and over again; however, the GPU doesn’t work when I try to run a program, and the result of this call differs between debug mode and the console.

console:

!python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.10.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home China
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.5 (tags/v3.9.5:0a7dcbd, May  3 2021, 17:27:52) [MSC v.1928 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22000-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 1650
Nvidia driver version: 462.75
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A

debug:

!python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.10.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home China
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.5 (tags/v3.9.5:0a7dcbd, May  3 2021, 17:27:52) [MSC v.1928 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22000-SP0
Is CUDA available: False
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 1650
Nvidia driver version: 462.75
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A

Could you describe what “run in debug” means?
Is this “debug” mode changing your PATH somehow?

I mean PyCharm’s debug mode.
The mode didn’t change the path of torch. Is there any other path that should be checked?

I’m not a Windows expert, so I don’t know exactly how Windows finds dependencies, but on Linux systems the dynamic linker is responsible for loading the needed dependencies from e.g. libtorch.so via direct linking, by setting the RUNPATH (or RPATH), or by calling into dlopen.
Based on your description it seems as if PyCharm manipulates your environment in debug mode in some way that prevents PyTorch from communicating with the GPU driver.
You could check if this behavior is also seen using other CUDA-enabled libraries (e.g. python-cuda) or if this is specific to your PyTorch installation.
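One quick way to compare the two environments is to dump any CUDA-related variables once from the console and once under the debugger, then diff the outputs. A small sketch (the name filter is just a heuristic, not an exhaustive list of relevant variables):

```python
import os


def cuda_related_env(environ=os.environ):
    """Return the environment variables whose names mention CUDA."""
    return {k: v for k, v in environ.items() if "CUDA" in k.upper()}


if __name__ == "__main__":
    # Run this once from the console and once under PyCharm's
    # debugger, then compare the two outputs line by line.
    for name, value in sorted(cuda_related_env().items()):
        print(f"{name}={value}")
```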

Oh, this problem has been solved.
The error was caused by this command: os.environ["CUDA_VISIBLE_DEVICES"] = '1'
The problem was solved by changing it to os.environ["CUDA_VISIBLE_DEVICES"] = '0'
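For context: CUDA_VISIBLE_DEVICES holds a comma-separated list of GPU indices, and an index that doesn't exist simply exposes no device, so '1' on a single-GPU machine (like the GTX 1650 setup above) hides the only GPU. A minimal, illustrative sketch of guarding against an out-of-range index (the helper name is my own, not part of any library):

```python
import os


def pick_visible_device(requested, gpu_count):
    """Clamp a requested GPU index to the devices that actually exist.

    Falls back to device 0 when the requested index is out of range.
    Illustrative helper only, not part of PyTorch or CUDA.
    """
    if 0 <= requested < gpu_count:
        return str(requested)
    return "0"


# The variable must be set before CUDA is initialized, i.e. before
# the first torch.cuda.* call in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = pick_visible_device(1, gpu_count=1)
```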

Thanks for the suggestion!

Masking the GPU explains why PyTorch couldn’t find it, but could you explain why and how PyCharm was setting this environment variable in debug mode without your knowledge?

I’m sorry, I can’t explain that. I don’t fully understand some of PyCharm’s features. I only know that it produces device-related variable errors at times, which may be related to VRAM and so on.

Do you know where os.environ["CUDA_VISIBLE_DEVICES"] = '1' was set? Was it somewhere in your script, or did PyCharm add this variable automatically in debug mode somehow?

I didn’t set it. The device was invisible before I added os.environ["CUDA_VISIBLE_DEVICES"] = '0' to my script.
os.environ["CUDA_VISIBLE_DEVICES"] = '1' may be a default value, because that is already its value when the os module is loaded.
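One way to check where the value is coming from: the os module itself does not set CUDA_VISIBLE_DEVICES, it only exposes whatever environment the parent process (e.g. PyCharm's run configuration or debugger) passed in. A sketch for inspecting that at startup:

```python
import os


def report_cuda_visible(environ=os.environ):
    """Describe the current state of CUDA_VISIBLE_DEVICES."""
    value = environ.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        return "unset (no process in the chain exported it)"
    return f"set to {value!r} (inherited from the parent process or run config)"


if __name__ == "__main__":
    # Put this at the very top of the script, before any assignment
    # to os.environ, and run it once in the console and once in
    # PyCharm's debugger to see which side injects the variable.
    print(report_cuda_visible())
```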