GPU tensor always uses GPU 0 memory?

My PyTorch version is 0.3.1.
When I run tensorA = tensorA.cuda(1), the code always uses about 400 MB of GPU 0's memory. How can I solve this? The rules of the server do not allow me to use GPU 0, so I have to fix it.


How do you check memory usage? If you use nvidia-smi, you should know that by default the device numbering reported by nvidia-smi is not the same as the one other programs (i.e. CUDA applications) use.
You can set export CUDA_DEVICE_ORDER=PCI_BUS_ID to make all applications use the same numbering as nvidia-smi.
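If you prefer to set this from inside Python rather than the shell, a minimal sketch (assuming it runs before any CUDA context is created, i.e. before the first `torch.cuda` call) is:

```python
import os

# Must be set before the first CUDA context is created; once a context
# exists, the device ordering is already fixed for this process.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
```

After this, device indices seen by your application match the ones nvidia-smi reports.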

Also, to avoid touching a GPU you are not allowed to use, you can set CUDA_VISIBLE_DEVICES=1 to prevent your application from seeing any GPU other than number 1 (in the application ordering, not nvidia-smi's).
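A sketch of applying this from Python (again assuming it runs before any CUDA context exists):

```python
import os

# Expose only physical GPU 1 to this process. The application then sees a
# single device and numbers it 0, so plain .cuda() or .cuda(0) lands on the
# intended card, and no context is ever created on GPU 0.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
```

Equivalently, you can set it on the command line when launching your script, e.g. `CUDA_VISIBLE_DEVICES=1 python your_script.py` (script name here is just a placeholder).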