Hi,
I have a dataset of noisy 3D input volumes of varying sizes. I would like to feed them to the network in mini-batches so that the gradient is smoothed over several samples during backpropagation, and I would like to avoid loops and input padding as much as possible. In principle this sounds feasible, but I do not know whether it is implemented in PyTorch. Please let me know if you are aware of a way to accomplish this; a sketch of what I am running into follows.
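
For concreteness, here is roughly the situation (the small fully convolutional `Conv3d` network and the volume sizes are just placeholders):

```python
import torch
import torch.nn as nn

# Placeholder fully convolutional 3D network; the conv layers accept
# any spatial size, so individual samples of any shape go through fine.
net = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=3, padding=1),
)

# Two noisy volumes with different spatial sizes (batch dimension of 1 each).
a = torch.randn(1, 1, 32, 32, 32)
b = torch.randn(1, 1, 40, 48, 36)

out_a = net(a)  # works
out_b = net(b)  # works

# But a single dense batch tensor cannot hold both spatial sizes:
# torch.cat([a, b]) raises a RuntimeError, so the usual mini-batch is
# ruled out unless I pad the smaller volume.
```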
Update: Running models with different input sizes in parallel could be an option. I know PyTorch allows this across multiple GPUs. Is there a way to run two models with two different input sizes (a mini-batch size of 2) on a single GPU and then share the gradients?
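
To make "share the gradients" concrete, here is a minimal sketch of the effect I am after, written sequentially (the network, loss, and sizes are again placeholders). Autograd already accumulates the gradients from both samples into the same `.grad` buffers; my question is whether the two passes can be made to run in parallel on one GPU:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=3, padding=1),
)
opt = torch.optim.SGD(net.parameters(), lr=1e-3)

# Hypothetical denoising targets, one per input, matching each input's size.
x1, t1 = torch.randn(1, 1, 32, 32, 32), torch.randn(1, 1, 32, 32, 32)
x2, t2 = torch.randn(1, 1, 40, 48, 36), torch.randn(1, 1, 40, 48, 36)

opt.zero_grad()
# Two forward passes through the same parameters, one averaged loss:
loss = (F.mse_loss(net(x1), t1) + F.mse_loss(net(x2), t2)) / 2
loss.backward()  # gradients of both samples land in the shared .grad buffers
opt.step()       # effectively the update a mini-batch of 2 would produce
```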