I have 4 GPUs. If I do not set CUDA_VISIBLE_DEVICES and run the following in a Python shell:
```python
>>> import torch
>>> a = torch.tensor([1, 1], device="cuda:1")
>>> a.device
device(type='cuda', index=1)
```
then I check the GPU usage with nvidia-smi and get this result:
```
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     15254      C   python                                        519MiB |
|    1     15254      C   python                                        519MiB |
+-----------------------------------------------------------------------------+
```
It seems that my Python shell (PID 15254) is using 2 GPUs! I'm wondering why the first GPU (cuda:0) is also used, when I only placed the tensor on cuda:1?
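For reference, the only way I know to keep a process off the other GPUs is to set CUDA_VISIBLE_DEVICES before Python starts (the variable is documented by CUDA; the remapping of the visible GPU to cuda:0 is my understanding of how enumeration then works):

```shell
# Make only physical GPU 1 visible to CUDA programs launched from this shell.
# Inside such a process, that GPU is then enumerated as cuda:0.
export CUDA_VISIBLE_DEVICES=1
echo "$CUDA_VISIBLE_DEVICES"
```

But I would like to understand why, without this, memory is allocated on GPU 0 at all.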