Transferring constant member tensors to the correct GPUs with `DataParallel`

My network (an `nn.Module` subclass) has a constant member tensor `A` that is used in its `forward` method. When I wrap the network in `nn.DataParallel` to run it across multiple GPUs, I get complaints that `A` resides on a different GPU than the incoming data (which `DataParallel` correctly splits across the batch dimension). What do I need to do so that `DataParallel` copies `A` to each GPU when it replicates the network?

Hi,

You can register your constant tensors with `self.register_buffer()` on the `nn.Module` so that they get moved around with the module. Buffers are included in the module's `state_dict` and are replicated to each GPU by `DataParallel`, just like parameters, but they receive no gradients.
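
A minimal sketch of this (the module name `MyNet`, the layer sizes, and the matmul in `forward` are illustrative placeholders, not from the original post):

```python
import torch
import torch.nn as nn

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 8)
        # Registered as a buffer: saved in state_dict, moved by
        # .to()/.cuda(), and replicated to each GPU by DataParallel.
        self.register_buffer("A", torch.randn(8, 8))

    def forward(self, x):
        # self.A lives on the same device as x in every replica.
        return self.linear(x) @ self.A

model = nn.DataParallel(MyNet().cuda())
out = model(torch.randn(32, 8).cuda())  # batch is split across GPUs; no device mismatch
```

Since a buffer receives no gradients, it stays fixed during training, which matches the "constant member tensor" use case here.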
