Isolate processes and instances on separate GPUs, same machine - How to?

I am running two separate Python virtual environments in Linux, on a machine with three GPUs (GPU-0 drives the display). Before I launch anything, nvidia-smi shows a 4 MiB /usr/lib/xorg/Xorg process on each of the non-display GPUs.

When I launch a training run on GPU-1, nvidia-smi shows GPU-1 loaded with a 3680 MiB /bin/python3.11 process. So far, so good.

When, from the second virtual environment, I launch a second training program on GPU-2, nvidia-smi shows GPU-2 loaded with a 3644 MiB /bin/python3.11 process. HOWEVER, at the same time a new 306 MiB process, which nvidia-smi also reports as /bin/python3.11, appears on GPU-1.

Can someone explain why launching a second training program, from a second virtual environment, on the second GPU of the same machine causes a process to appear on the first GPU?

Is there something that can be done to completely isolate the processes?

| 1 N/A N/A 2154 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 62386 C …/bin/python3.11 3680MiB |
| 1 N/A N/A 62772 C …/bin/python3.11 306MiB |
| 2 N/A N/A 2154 G /usr/lib/xorg/Xorg 4MiB |
| 2 N/A N/A 62772 C …/bin/python3.11 3644MiB |

You are most likely allowing the process to see all devices. It might then select cuda:1 for the main work, but it is also calling into methods that use the "default" device (cuda:0 in this case), which creates a small CUDA context on that GPU.
As a quick workaround, launch the script with CUDA_VISIBLE_DEVICES=1 python script.py and use cuda:0 internally; the process then cannot see or touch any other GPU.
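The remapping behind this workaround can be sketched without touching a GPU at all: CUDA_VISIBLE_DEVICES filters and re-indexes the physical devices, so logical cuda:0 inside the process points at the first entry of the mask. A minimal pure-Python illustration of that mapping (the helper name visible_to_physical is hypothetical, not a CUDA or PyTorch API):

```python
import os

def visible_to_physical(logical_index: int) -> int:
    """Map a logical CUDA device index to a physical GPU index,
    mimicking how the CUDA runtime honors CUDA_VISIBLE_DEVICES."""
    mask = os.environ.get("CUDA_VISIBLE_DEVICES")
    if mask is None:
        return logical_index  # no masking: logical == physical
    visible = [int(x) for x in mask.split(",") if x.strip()]
    return visible[logical_index]

# With the workaround above, the process only sees physical GPU-1:
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
print(visible_to_physical(0))  # → 1: cuda:0 inside the process is GPU-1
```

Because the variable must be set before the CUDA context is created, it is safest to set it on the command line (as suggested) rather than from inside the script after importing torch.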

That works. Thank you again @ptrblck.