torch.cuda.is_available() returns True in the console but False in my program

I am a beginner in PyTorch and deep learning. When I try to run my program with PyTorch, it shows

torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available

Then I added the test code

print(t.cuda.is_available())

and it returns False.

But the weirdest thing is that when I typed the same commands line by line in the console:

python
>>> import torch
>>> torch.version.cuda
'11.5'
>>> torch.cuda.is_available()
True

I am also sure that the server has GPUs, so why can't the program use them?

Often these issues are caused by (accidentally) using different environments and binaries.
Check torch.__path__ in both setups, as they could be different.
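To compare the two setups, you can print the interpreter path and where torch would be imported from, both in the script and in the console. A minimal sketch (using importlib.util.find_spec so it works even in an environment where torch is not installed):

```python
import sys
import importlib.util

# Which Python binary is running this code? If the script and the console
# print different paths here, they are using different environments.
print("interpreter:", sys.executable)

# Where would `import torch` be loaded from? (None if torch is not installed
# in this environment at all.)
spec = importlib.util.find_spec("torch")
print("torch location:", spec.origin if spec else "not installed")
```

Run this once from the script and once in the interactive console; any difference in the two outputs points at the mismatched environment.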

Thanks very much, I have solved the problem. The key is that the program had made only GPU No. 2 visible, but the server only has No. 0. I deleted that line, wrote torch.cuda.set_device(0), and it worked.
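For anyone hitting the same symptom: the usual mechanism for restricting GPU visibility is the CUDA_VISIBLE_DEVICES environment variable (assuming that is what the offending line set; the original line is not shown). A sketch of the bug and the fix:

```python
import os

# Hypothetical reproduction of the bug: the script made only GPU index 2
# visible before CUDA was initialized, but this machine only has GPU 0,
# so torch.cuda.is_available() would return False inside the program.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

# The fix: delete that line entirely, or point it at a GPU that exists.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# With GPU 0 visible again, torch.cuda.set_device(0) selects the first
# (and here, only) visible device.
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Note that CUDA_VISIBLE_DEVICES must be set before CUDA is first initialized (i.e. before the first CUDA call in the process); changing it afterwards has no effect.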