Using a list variable with torch.nn.DataParallel

Hello @albanD, I am using torch.nn.DataParallel to run my model on multiple GPUs. Instead of the data being split across the GPUs, it is simply being copied onto both of them. The reason is that in the model's forward function I pass a list of tensors as input rather than a single tensor.
Is it not possible to split the data when the input is a list of tensors?
What changes can I make to my current setup to use DataParallel efficiently?
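For reference, here is a minimal sketch of the kind of setup I mean and the workaround I am considering (the toy model, shapes, and variable names are placeholders, not my actual code): collating the list of tensors into one batched tensor before the forward call, so DataParallel can split it along dim 0.

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    """Placeholder model; my real model is different."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 4)

    def forward(self, x):
        # x: (batch, 16) -- a single batched tensor, so DataParallel can
        # scatter it along dim 0 across the replicas
        return self.fc(x)

model = nn.DataParallel(ToyModel()).cuda()

# What I have now: the batch arrives as a plain Python list of tensors.
samples = [torch.randn(16) for _ in range(32)]

# Workaround under consideration: stack the list into one tensor *before*
# calling the model, so each GPU receives its own slice of the batch.
batch = torch.stack(samples, dim=0).cuda()   # shape (32, 16)
out = model(batch)                           # shape (32, 4)
```

If the tensors in the list have different lengths, I assume something like torch.nn.utils.rnn.pad_sequence (plus a tensor of lengths passed alongside) would be needed instead of torch.stack, but I have not verified that in my setup.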

I have the same problem. Any solution?