Module dictionary to GPU or cuda device

Is there a direct way to map a dictionary variable defined inside a module (or model) to the GPU?
e.g. for tensors, I can do a = a.to(device)
However, this doesn’t work for a dictionary.
In other words, is the only option to move each value individually, like the following?

for key, value in self.dict.items():
    self.dict[key] = value.to(device)

I think that’s the correct way, as you cannot move tensors to the device in place.
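
For reference, a minimal sketch of how that loop could live inside a module; the names MyModel, self.lookup, and move_lookup_to are made up for illustration, and the dict is a plain Python dict that model.to(device) would not touch:

import torch
import torch.nn as nn

class MyModel(nn.Module):  # hypothetical example module
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 4)
        # plain Python dict of tensors; module.to(device) won't move these
        self.lookup = {"a": torch.randn(4), "b": torch.randn(4)}

    def move_lookup_to(self, device):
        # move each value explicitly, as discussed above
        for key, value in self.lookup.items():
            self.lookup[key] = value.to(device)

Usage would then be e.g. model = MyModel().to(device) followed by model.move_lookup_to(device).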


Thank you @ptrblck
I have two dictionary variables in my model, and when I run DataParallel it gives a CUDA device mismatch error.
I thought this would resolve the issue. However, on searching I learned that only registered buffers and parameters are replicated by DataParallel to the corresponding GPU. Could you suggest a solution for that?

If you cannot or don’t want to register these tensors as parameters or buffers, you could manually move them to the corresponding device by using the .device attribute of the input tensor:

def forward(self, x):
    # move the tensor to the same device as the incoming input
    my_tensor = self.my_tensor.to(x.device)
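Alternatively, if registering the tensors is acceptable, the dict values could be registered as buffers so that model.to(device) and DataParallel's replication handle them automatically. A rough sketch, assuming the dict keys are strings; ModelWithBuffers and the dict_ prefix are made-up names:

import torch
import torch.nn as nn

class ModelWithBuffers(nn.Module):  # hypothetical example
    def __init__(self, tensor_dict):
        super().__init__()
        self.keys = list(tensor_dict.keys())
        for key, value in tensor_dict.items():
            # registered buffers are moved by .to(device) and replicated by DataParallel
            self.register_buffer(f"dict_{key}", value)

    def get(self, key):
        # retrieve a buffer by its original dict key
        return getattr(self, f"dict_{key}")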

If the tensors are already on the GPU, do you think it would be more efficient to create the dictionary with its keys on the GPU as well, or is it just as easy to access it with CPU-based keys? In my particular case I want to use keys that are generated from some mathematical manipulations of the tensors, so those keys would initially be on the GPU. My intuition is that the map itself does not live on the GPU, so it is not more efficient to have keys on the GPU that need to be copied back to the CPU each time the dictionary is accessed. It's hard to tell what's really going on here.

The truth would be told by profiling the workload on the GPU vs. CPU, but based on the dict source code I would assume the CPU is used.
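
As an illustration of why GPU-resident keys don't obviously help: the Python dict itself is a host-side data structure, so a scalar derived from a CUDA tensor has to be brought back to the CPU (e.g. via .item(), which synchronizes) before it can serve as a key. A rough sketch with made-up names:

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
cache = {}

x = torch.randn(8, device=device)
# the key is computed on the GPU, but .item() copies it to the CPU
# (a device-to-host sync) before the dict lookup can happen
key = x.abs().sum().round().item()
cache[key] = x
print(key in cache)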