How can I select the larger GPU to train my model?

I have two GPUs: one with 11 GB of memory, the other with 23 GB. In my previous tests, my code always ran on the 11 GB GPU and ran out of memory. What can I do to select the second GPU?

You could specify the device when pushing the model parameters and data to the GPU via `.to('cuda:1')`, or run your script with `CUDA_VISIBLE_DEVICES=1 python script.py args`. In the latter approach, only GPU1 is visible to the process, so it will be mapped to `'cuda:0'` inside your script.
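A minimal sketch of the first approach, assuming a toy model and input (the `Linear` layer and tensor shapes here are just placeholders); it falls back to the CPU when a second GPU isn't available:

```python
import torch

# Select the second GPU if present, otherwise fall back to CPU.
device = torch.device('cuda:1' if torch.cuda.device_count() > 1 else 'cpu')

model = torch.nn.Linear(10, 2).to(device)  # push the parameters to the device
x = torch.randn(4, 10, device=device)      # create the data on the same device

out = model(x)  # both parameters and data now live on the selected device
```

Note that every tensor the model consumes must be on the same device, or PyTorch will raise a device-mismatch error in the forward pass.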