Hi,
I am trying to load my model onto the GPU using model = model.cuda()
I have 2 GPUs in my system, and one of them has 10 GB of memory. How can I load my model onto the GPU with more memory?
Hi,
@Ratan you can use the nvidia-smi
command to find out which GPU (0 or 1) has more memory. Then you can use one of the following ways to load the model onto that particular GPU:
Inside the Python file you can add (before CUDA is initialized, i.e. before any CUDA call):
import os
os.environ['CUDA_VISIBLE_DEVICES'] = "0"
On the command line:
CUDA_VISIBLE_DEVICES=0 python ...
In the Python file you can add:
torch.cuda.set_device(0)
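If you would rather not hard-code the index, you can pick the GPU with the most free memory at runtime. A minimal sketch (the helper name pick_gpu is mine, and the usage assumes a PyTorch version that provides torch.cuda.mem_get_info):

```python
def pick_gpu(free_bytes):
    """Return the index of the GPU with the most free memory (bytes)."""
    return max(range(len(free_bytes)), key=free_bytes.__getitem__)

def pick_cuda_device():
    # Query free memory per visible GPU; mem_get_info returns (free, total).
    import torch
    free = [torch.cuda.mem_get_info(i)[0]
            for i in range(torch.cuda.device_count())]
    return pick_gpu(free)

# torch.cuda.set_device(pick_cuda_device())
# model = model.cuda()  # now lands on the GPU with the most headroom
```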
@SANTOSH_S thank you