I have a Faster R-CNN model that currently supports a variable-sized input tensor of shape (1, 3, H, W), i.e. an image batch size of 1. I’d like to run this model on two GPUs, with one image per GPU.
However, I can’t figure out how to send two variable-sized input tensors, one to each GPU. The example here assumes that the variable input_var is a concatenation of the inputs along the batch dimension:
Interesting. I think DataParallel assumes all inputs in the batch have the same size. One thing you could do is pad both images to some larger common size, and modify the model’s inputs so it also receives each image’s original H and W.
I have a similar kind of problem with my model. To deal with variable-sized input I’ve implemented a custom collate function, similar to this
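For reference, a custom collate function for this situation might look like the sketch below (names are illustrative, not from the post above): instead of letting the default collate try to stack mismatched tensors, it returns the images and targets as plain lists.

```python
import torch
from torch.utils.data import DataLoader

def variable_size_collate(samples):
    """Collate (image, target) pairs without stacking the images.

    Stacking would fail when H and W differ across samples, so the
    batch is returned as two lists instead of tensors.
    """
    images = [img for img, _ in samples]
    targets = [tgt for _, tgt in samples]
    return images, targets

# Usage, assuming `dataset` yields (image_tensor, target) pairs:
# loader = DataLoader(dataset, batch_size=2, collate_fn=variable_size_collate)
```

The model (or a wrapper around it) then has to accept a list of images rather than a single batched tensor.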
I want to parallelize my model too.
Did this work for you? It would be helpful if you could explain it in a bit more detail. Thank you so much in advance.