Using torch.nn.DataParallel with variable-sized input tensors

I have a (Faster R-CNN) model that currently accepts a variable-sized input tensor of shape (1, 3, H, W), i.e. an image batch size of 1. I’d like to run this model on two GPUs, with 1 image per GPU.

However, I can’t figure out how to send 2 variable-sized input tensors, one to each GPU. The example here assumes that the variable input_var is a concatenation of the inputs along the batch dimension:

>>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
>>> output = net(input_var)

How would I distribute a batch of 2 images with variable H and W across 2 GPUs?

Interesting. I think DataParallel assumes all inputs in the batch have the same size. One thing you could do is pad both images to a common larger size and modify the model’s inputs so you also pass in the original H and W of each image.
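
Something along these lines, as a minimal sketch. The helper `pad_to_common_size` and the idea of passing a `sizes` tensor alongside the batch are my own illustration, assuming your model can be modified to use the original sizes to ignore the padded region:

```python
import torch
import torch.nn.functional as F

def pad_to_common_size(images):
    # images: list of (3, H, W) tensors, each with a different H and W
    max_h = max(img.shape[1] for img in images)
    max_w = max(img.shape[2] for img in images)
    padded, sizes = [], []
    for img in images:
        h, w = img.shape[1], img.shape[2]
        # pad on the right/bottom so the content stays at the top-left corner
        padded.append(F.pad(img, (0, max_w - w, 0, max_h - h)))
        sizes.append((h, w))
    return torch.stack(padded), torch.tensor(sizes)

# two variable-sized images, one intended for each GPU
img_a = torch.randn(3, 600, 800)
img_b = torch.randn(3, 512, 768)

batch, sizes = pad_to_common_size([img_a, img_b])  # batch: (2, 3, 600, 800)

# net = torch.nn.DataParallel(model, device_ids=[0, 1])
# output = net(batch, sizes)  # hypothetical: model uses `sizes` to mask out padding
```

DataParallel will then split `batch` (and `sizes`) along dim 0, so each GPU sees one padded image plus its original dimensions.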


One of my colleagues suggested the same thing. It appears that’s the best option at the moment.

I have a similar kind of problem with my model. To deal with variable-sized input I’ve implemented a custom collate function, similar to this (see the sketch after this post).
I want to parallelize my model too.
Did this work for you? It would be helpful if you could explain it in a bit more detail. Thank you so much in advance.
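
For reference, a minimal sketch of that kind of collate function, assuming the model consumes a list of variable-sized image tensors rather than one stacked batch (the names here are illustrative, not the poster’s actual code):

```python
import torch
from torch.utils.data import DataLoader

def variable_size_collate(batch):
    # batch: list of (image, target) pairs where each image is (3, H, W)
    # with its own H and W; keep them as lists instead of stacking,
    # since torch.stack would fail on mismatched shapes
    images = [item[0] for item in batch]
    targets = [item[1] for item in batch]
    return images, targets

# loader = DataLoader(dataset, batch_size=2, collate_fn=variable_size_collate)
```

Note that this only solves batching in the DataLoader; DataParallel still scatters tensors along dim 0, so on the model side you would still need padding (as suggested above) or a different parallelization strategy.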