Why does PyTorch use 2 GPUs when I never set this?

In my code, I pass the same gpu_id to every “var.to(gpu_id)” call.
But sometimes “nvidia-smi” shows that two GPUs are occupied.
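Roughly, my code follows this pattern (a minimal sketch; the model, input, and gpu_id value are just illustrative):

```python
import torch

gpu_id = "cuda:1"  # the single GPU I intend to use

model = torch.nn.Linear(10, 2).to(gpu_id)  # move the model to that GPU
x = torch.randn(4, 10).to(gpu_id)          # move the input to the same GPU

out = model(x)
print(out.device)  # cuda:1 -- yet nvidia-smi sometimes also shows memory on GPU 0
```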

Hi,

Which version are you using?
There was a bug where a small amount of memory was always allocated on GPU 0 whenever any GPU tensor was printed. This has been fixed in master.
To avoid this, you can set CUDA_VISIBLE_DEVICES=1 to hide the other GPUs and be sure you don’t use them.
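In case it helps, here is one way to apply that suggestion (the script name in the shell comment is just an example). Note that once only one GPU is visible, it is indexed as cuda:0 inside the process:

```python
import os

# Restrict this process to physical GPU 1 *before* CUDA is initialized.
# Setting it on the shell works the same way, e.g.:
#   CUDA_VISIBLE_DEVICES=1 python train.py
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch

# The only visible GPU is now indexed as cuda:0 inside the process.
device = torch.device("cuda:0")
x = torch.randn(4, 10).to(device)

print(torch.cuda.device_count())  # 1
print(x.device)                   # cuda:0 (physical GPU 1)
```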

I’m using torch-0.4.1. I think setting CUDA_VISIBLE_DEVICES=1 can fix this! Thank you!