I am running my code on a server that does not allow upgrading to PyTorch 0.4.1, so it is still on version 0.3.1. The server has multiple GPUs (4 GPUs):
| 0 GeForce GTX TIT... Off | 0000:05:00.0 Off | N/A |
| 72% 86C P2 209W / 250W | 12065MiB / 12204MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX TIT... Off | 0000:06:00.0 Off | N/A |
| 80% 87C P2 229W / 250W | 11714MiB / 12206MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce GTX TIT... Off | 0000:09:00.0 Off | N/A |
| 48% 82C P2 193W / 250W | 11714MiB / 12206MiB | 90% Default |
+-------------------------------+----------------------+----------------------+
| 3 GeForce GTX TIT... Off | 0000:0A:00.0 Off | N/A |
| 22% 26C P8 16W / 250W | 2MiB / 12206MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
I want to use only GPU 3 because it is currently free. I set it up in my Python code as:
os.environ['CUDA_VISIBLE_DEVICES']=3
torch.cuda.set_device(3)
model.cuda()
Printing the current GPU device appears to report success:

torch.cuda.device_count() = 4L
torch.cuda.current_device() = 3L
However, it still uses GPU 0 as the default. How should I correct the above code to use GPU #3?
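For reference, here is a minimal sketch of what I believe the corrected setup should look like, assuming two things: the environment variable value must be a string (assigning the int 3 to os.environ raises a TypeError), and it must be set before CUDA is initialized, i.e. before `import torch` runs:

```python
import os

# CUDA_VISIBLE_DEVICES must be a string, not an int, and must be
# set before torch initializes CUDA -- safest is before importing torch.
os.environ['CUDA_VISIBLE_DEVICES'] = '3'

# import torch  # import AFTER setting the variable
#
# With only physical GPU 3 visible, CUDA renumbers it to index 0:
# torch.cuda.device_count()   # should now report 1, not 4
# torch.cuda.set_device(0)    # index 0 == physical GPU 3
# model.cuda()                # the model lands on physical GPU 3
```

Note that once the variable is set, the remaining visible GPU is addressed as device 0 inside PyTorch, so `torch.cuda.set_device(3)` would be out of range. Whether this is the full fix for my situation is exactly what I am asking.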