There shouldn’t be a semicolon. With the semicolon, they become two separate shell commands: the variable is set in the shell but never exported, so the python process won’t see it.
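You can see the difference without any GPU at all by printing the variable from inside python (the `unset` at the top is just to make the demo deterministic):

```shell
# Start from a clean environment so the difference is visible:
unset CUDA_VISIBLE_DEVICES

# WRONG: the semicolon ends the assignment, so the variable is set in the
# shell but never exported to the python process:
CUDA_VISIBLE_DEVICES=3; python3 -c 'import os; print(os.environ.get("CUDA_VISIBLE_DEVICES"))'
# prints: None

# RIGHT: without the semicolon, the variable is placed in the environment
# of that one python invocation only:
CUDA_VISIBLE_DEVICES=3 python3 -c 'import os; print(os.environ.get("CUDA_VISIBLE_DEVICES"))'
# prints: 3
```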
Even with the correct command, CUDA_VISIBLE_DEVICES=3 python test.py, you won’t see torch.cuda.current_device() == 3, because the variable changes which devices PyTorch can see and renumbers them from zero. So in PyTorch land, device #0 is actually your system’s device #3. You can verify that with nvidia-smi.
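The renumbering is easy to sketch in pure Python: CUDA takes the comma-separated list in CUDA_VISIBLE_DEVICES and numbers those devices 0, 1, 2, … in order. The `physical_device` helper below is hypothetical (not part of torch), just to illustrate the mapping:

```python
import os

# Pretend the process was launched as: CUDA_VISIBLE_DEVICES=3 python test.py
os.environ["CUDA_VISIBLE_DEVICES"] = "3"

def physical_device(torch_index: int) -> int:
    """Map PyTorch's renumbered device index back to the system index
    that nvidia-smi reports. (Hypothetical helper, for illustration.)"""
    visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
    return int(visible[torch_index])

# PyTorch sees a single device, numbered 0; physically it is the system's GPU 3.
print(physical_device(0))  # -> 3
```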
According to the documentation for torch.cuda.set_device, it’s better to use the environment variable:
Sets the current device.
Usage of this function is discouraged in favor of device. In most cases it’s better to use CUDA_VISIBLE_DEVICES environmental variable.
Parameters: device (torch.device or int) – selected device. This function is a no-op if this argument is negative.
Just to add to this answer: ideally, this environment variable should be set at the top of the program. Changing CUDA_VISIBLE_DEVICES will not work if it is changed after setting torch.backends.cudnn.benchmark.
This might also be true for other torch/CUDA-related calls as well, so it’s better to set the environment variables at program start, or use export CUDA_VISIBLE_DEVICES="NUM" before starting the program.
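A minimal sketch of the in-script alternative, assuming nothing touches CUDA before the assignment (the torch import is commented out so the snippet runs even without a GPU):

```python
import os

# Set the variable first thing, before importing torch or calling anything
# that might initialize CUDA -- changes made later are silently ignored.
os.environ["CUDA_VISIBLE_DEVICES"] = "3"

# import torch  # import torch only AFTER the variable is set

print(os.environ["CUDA_VISIBLE_DEVICES"])  # -> 3
```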