Forcing tensors to be on the same GPU

Is there a way to use only one GPU out of multiple available GPUs when training a model?

I have two GPUs and I haven't used nn.DataParallel because of memory overhead issues. However, it seems like the code automatically distributes the tensors between the two GPUs, leading to this error:

codes = codes + self.latent_avg.repeat(codes.shape[0], 1, 1) 

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!

As I understand it, not all of my tensors are on the same GPU. Is there a way to force them all onto the same device?

There is the CUDA_VISIBLE_DEVICES environment variable, or you could use cuda:0 as the device to get the first GPU.
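
For example, a minimal sketch of both approaches (the model, tensor names, and shapes below are stand-ins, not your actual code):

import os

# Option 1: hide all but one GPU before CUDA is initialised.
# This must run before importing torch / making any CUDA call;
# the remaining GPU then shows up as cuda:0 inside PyTorch.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
import torch.nn as nn

# Option 2: pin the model and every tensor you create to one explicit device.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(512, 512).to(device)            # stand-in for your network
codes = torch.randn(8, 18, 512, device=device)    # stand-in for your inputs
out = model(codes)                                # everything lives on cuda:0

With option 1, nothing in the training script can accidentally land on the second GPU, because PyTorch never sees it. With option 2, you keep both GPUs visible but have to make sure every module, buffer, and input is moved with .to(device).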

Best regards

Thomas