[SOLVED] Resuming from snapshot into a different GPU device

I have an issue when I try to resume from a snapshot of a saved model. I train the model on GPU 0 and, for evaluation, I want to load the snapshot onto GPU 1. Since the model barely fits on GPU 0, there is no memory left on that device. However, when I load the snapshot for GPU 1, part of the model still ends up allocated on GPU 0, even though I specify the device with model.cuda(1). Is there a way to load the snapshot only onto GPU 1 without using any memory on GPU 0?
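For context, here is a minimal sketch of the loading pattern described above (the snapshot path and the MyModel class are placeholders, and it assumes the snapshot is a state_dict saved with torch.save). The reason GPU 0 gets touched is that torch.load without a map_location restores CUDA tensors onto the device they were saved from, before the subsequent .cuda(1) call moves them:

```python
import torch
from mymodel import MyModel  # hypothetical model class

model = MyModel()

# Without map_location, torch.load() restores CUDA tensors onto the device
# they were saved from (GPU 0 here), so memory on GPU 0 is allocated even
# though the model is moved to GPU 1 immediately afterwards.
state = torch.load('model_snapshot.pt')  # hypothetical snapshot path
model.load_state_dict(state)
model.cuda(1)  # parameters end up on GPU 1, but GPU 0 was already used
```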

I realized this can be done by setting the CUDA_VISIBLE_DEVICES environment variable and calling model.cuda() (without a device index) in the code. Closing this.
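For anyone hitting the same problem, a minimal sketch of that approach (same placeholder snapshot path and MyModel class as above): restricting visibility to physical GPU 1 makes it appear as cuda:0 inside the process, so neither torch.load nor model.cuda() can touch the physical GPU 0. The variable can equivalently be set on the command line, e.g. `CUDA_VISIBLE_DEVICES=1 python eval.py`.

```python
import os

# Must be set before CUDA is initialized; setting it before importing torch
# is the safest. Only physical GPU 1 is visible and it shows up as cuda:0.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import torch
from mymodel import MyModel  # hypothetical model class

model = MyModel()

# Under the masking, the saved device index 0 now refers to physical GPU 1,
# so the snapshot is restored there and never allocates on physical GPU 0.
state = torch.load('model_snapshot.pt')  # hypothetical snapshot path
model.load_state_dict(state)
model.cuda()  # no device index needed; cuda:0 is physical GPU 1
```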