Moving model to a specific GPU also allocates memory on GPU 0

I have multiple GPUs on my computer and want to use only the second one (GPU 1). I noticed that when I transfer my model to that device, my process still uses up some memory on the first GPU (GPU 0). Here is a minimal working example where this happens.

import torch
import torch.nn as nn
device = torch.device("cuda:1")  # target the second GPU
f = nn.Linear(100, 100).to(device)

When I do this and check the memory usage on the GPUs, I can clearly see that my process has allocated memory on both GPUs (although the amount on GPU 0 is smaller). Can anyone explain why this is happening?
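For reference, this is roughly how I compare the allocations from inside PyTorch (a sketch; note that torch.cuda.memory_allocated only tracks tensor allocations made through PyTorch, so the CUDA context overhead that nvidia-smi reports on GPU 0 would not show up in these numbers):

import torch

# Per-device tensor allocations tracked by PyTorch's caching allocator.
# The CUDA context itself is not counted here, so nvidia-smi is the more
# direct way to see the extra memory on GPU 0.
for i in range(torch.cuda.device_count()):
    print(f"cuda:{i}: {torch.cuda.memory_allocated(i)} bytes allocated")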

Change

device = torch.device("cuda:0")

and try executing it this way:

CUDA_VISIBLE_DEVICES=1 python your_program.py
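With CUDA_VISIBLE_DEVICES=1, only the second physical GPU is exposed to the process, so it appears as cuda:0 inside PyTorch. If you prefer to keep everything in the script, you can set the variable there instead (a sketch of the same idea; the important part is that it happens before torch is imported):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # must be set before importing torch

import torch
import torch.nn as nn

device = torch.device("cuda:0")  # cuda:0 now maps to the second physical GPU
f = nn.Linear(100, 100).to(device)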

Yes, that works (I forgot to mention that this was the workaround I had been using), but I’m curious as to why it happens.

Oh ok. I understand now.

version 0.4.1: maybe PyTorch always initializes its CUDA context on cuda:0 by default, even when the tensors live on cuda:1?
pytorch master 0.5.0a0+ab6afc2: I have verified that this issue is fixed.
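On 0.4.1, another workaround that is sometimes suggested (I haven’t verified it myself, so treat this as a sketch) is to select the device before any CUDA work happens:

import torch
import torch.nn as nn

torch.cuda.set_device(1)  # select GPU 1 before any CUDA context is created
device = torch.device("cuda:1")
f = nn.Linear(100, 100).to(device)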

Sorry for the many edits to the answer :slight_smile:


Ok, good to know. Thanks for verifying :slight_smile: