I have multiple GPUs on my machine and want to use only the second one (GPU 1). However, I notice that when I transfer my model to that device, some memory is still allocated on the first GPU (GPU 0). Here is a minimal working example where this happens:
import torch
import torch.nn as nn

device = torch.device("cuda:1")
f = nn.Linear(100, 100).to(device)
When I run this and check the memory usage on the GPUs, I can clearly see that my process has allocated memory on both GPUs (although the amount on GPU 0 is smaller). Can anyone explain why this is happening?
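For reference, one workaround I have seen suggested (though I would still like to understand the underlying cause) is to hide GPU 0 from the process entirely by setting `CUDA_VISIBLE_DEVICES` before the first `import torch`, so that CUDA can only ever initialize on the second physical GPU. A sketch, assuming the environment variable is respected by the driver:

```python
import os

# Restrict this process to physical GPU 1. This must happen before
# torch is imported, since the visible-device list is read when the
# CUDA driver initializes.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch  # imported deliberately after setting the env var
import torch.nn as nn

if torch.cuda.is_available():
    # Inside this process, "cuda:0" now refers to physical GPU 1.
    device = torch.device("cuda:0")
    f = nn.Linear(100, 100).to(device)
    print(torch.cuda.memory_allocated(device))
```

With this in place, `nvidia-smi` should show memory allocated only on the second physical GPU, but it sidesteps the question rather than answering it.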