How can I run on the second GPU, not the first one?

I have 4 GPUs. How can I train my model on the second GPU rather than the first? The first is already busy training another model:
RuntimeError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 5.56 GiB total capacity; 43.77 MiB already allocated; 10.19 MiB free; 52.00 MiB reserved in total by PyTorch)

So I need to train on GPU 2, 3, or 4, not the first. Is that possible?

If you are using the terminal, you can tell it which CUDA devices it can access by using the device index from nvidia-smi. The first GPU is indexed with 0. For example, to run on your second GPU: CUDA_VISIBLE_DEVICES=1 python
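A minimal sketch of that, assuming your training entry point is a script called train.py (the script name here is hypothetical):

```shell
# Expose only the second GPU (index 1) to this process;
# inside the process, PyTorch sees it renumbered as cuda:0.
CUDA_VISIBLE_DEVICES=1 python train.py

# Several GPUs can be exposed at once, e.g. the second, third and fourth:
CUDA_VISIBLE_DEVICES=1,2,3 python train.py
```

Note that the variable only affects the single command it prefixes; other processes (such as the one already training on GPU 0) are untouched.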

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Fri_Feb__8_19:08:26_Pacific_Standard_Time_2019
Cuda compilation tools, release 10.1, V10.1.105

This is not relevant.

What do you mean by "that is not relevant"?

Just run: CUDA_VISIBLE_DEVICES=<gpu_number> python

CUDA_VISIBLE_DEVICES=1 — can I set it in code? I can't use it in the terminal.
I am using Windows 10:

'CUDA_VISIBLE_DEVICES' is not recognized as an internal or external command,
operable program or batch file.
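That error appears because the VAR=value prefix syntax is a Unix shell feature; cmd.exe on Windows does not support it. One portable alternative is to set the variable from inside the script itself, before torch is imported. A minimal sketch:

```python
import os

# Hide all GPUs except the ones listed, *before* importing torch.
# "1,2,3" exposes the second, third and fourth cards; inside the
# process they are renumbered as cuda:0, cuda:1 and cuda:2.
os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3"

# import torch  # must come after the assignment above, or the
#               # CUDA context may already be bound to all GPUs
```

On Windows you could also run `set CUDA_VISIBLE_DEVICES=1` in cmd.exe before launching Python, which sets the variable for the rest of that terminal session.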