torch.cuda keeps track of the currently selected GPU, and all CUDA tensors you allocate are created on that device. The selected device can be changed with the torch.cuda.device context manager.
For example:

with torch.cuda.device(1):
    w = torch.FloatTensor(2, 3).cuda()
    # w is placed on device 1 by default.
Alternatively, you can pass the device index to .cuda() directly:

w = torch.FloatTensor(2, 3).cuda(2)
# w is placed on device 2, regardless of the currently selected device.
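As a side note, newer PyTorch code typically uses the device= keyword or Tensor.to() instead of the legacy .cuda(idx) call. Below is a minimal sketch of that style; it falls back to the CPU when a second GPU is not available, so it runs on any machine:

```python
import torch

# Pick cuda:1 if the machine has at least two GPUs, otherwise fall back to CPU.
device = torch.device("cuda:1") if torch.cuda.device_count() > 1 else torch.device("cpu")

# Allocate directly on the chosen device instead of allocating then calling .cuda().
w = torch.empty(2, 3, device=device)

# Or move an existing tensor to that device.
x = torch.randn(2, 3).to(device)

print(w.device, x.device)
```

This avoids hard-coding device indices throughout the code: the target device is chosen once and passed around as a torch.device object.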
See the "CUDA semantics" notes in the PyTorch documentation for more details.