What I’d like to do is:

import os, torch
print(torch.cuda.is_available())  # True
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
print(torch.cuda.is_available())  # False
os.environ["CUDA_VISIBLE_DEVICES"] = ""
print(torch.cuda.is_available())  # expected True, but stays False

But os.environ["CUDA_VISIBLE_DEVICES"] = "" does not make CUDA available again. How can I do that?
Setting these environment variables inside a script might be a bit dangerous, and I would also recommend setting them before importing anything CUDA-related (e.g. PyTorch).
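For example, a minimal sketch of setting the variable before the import (this only masks devices reliably if it runs before any CUDA-related import):

```python
import os

# Must be set before `import torch`, so the CUDA runtime
# never sees the physical GPUs in this process.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch
print(torch.cuda.is_available())  # False: no visible devices
```

Once the CUDA context has been initialized in a process, changing the variable again has no effect, which is why flipping it back to "" mid-script does not restore the GPUs.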
It looks as if you would like to mask the GPU dynamically inside your script.
If that’s the case, I would rather use the device variable and write device-agnostic code than rely on CUDA_VISIBLE_DEVICES.
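A minimal sketch of the device-agnostic pattern, assuming a caller-supplied `use_gpu` flag (the flag name is an illustration, not from the original post):

```python
import torch

def make_input(use_gpu: bool = True):
    # Pick the device once; every tensor and module follows it,
    # so no environment variable juggling is needed at runtime.
    device = torch.device("cuda" if use_gpu and torch.cuda.is_available() else "cpu")
    x = torch.randn(4, 3, device=device)
    return x, device
```

With `use_gpu=False` the same code runs entirely on the CPU, which gives you the dynamic masking effect without touching CUDA_VISIBLE_DEVICES.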
Thank you for the great response!
You are right, I hide devices, for example, when launching TensorBoard, since otherwise all GPUs are used (via export CUDA_VISIBLE_DEVICES="" in a console).
…I never thought about the consequences of dynamic masking in the code, because I didn’t have problems. Thanks!