How is Conv2d interpreting batch_size>1?


I am trying to understand how nn.Conv2d interprets a batch_size>1.

My data set has the shape (batch_size=64, networks=2, channels=8, H=40, W=40).

Currently I’ve used a for-loop to split up the two networks and then run each network independently, one after the other.

Hence, I have inserted this into nn.Conv2d:
For Network 1: (batch_size=64, channels=8, H=40, W=40).
For Network 2: (batch_size=64, channels=8, H=40, W=40).
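The loop described above could look like the following sketch. The layer sizes, kernel size, and variable names are assumptions, since the post only gives the input shapes:

```python
import torch
import torch.nn as nn

# Hypothetical setup matching the shapes from the post.
batch_size, n_networks, channels, H, W = 64, 2, 8, 40, 40
x = torch.randn(batch_size, n_networks, channels, H, W)

# One independent conv layer per network (out_channels=16 is an assumption).
convs = nn.ModuleList(
    [nn.Conv2d(channels, 16, kernel_size=3, padding=1) for _ in range(n_networks)]
)

# Loop over the "networks" dimension; each slice has shape (64, 8, 40, 40),
# i.e. exactly the (batch_size, channels, H, W) layout nn.Conv2d expects.
outputs = [conv(x[:, i]) for i, conv in enumerate(convs)]
print(outputs[0].shape)  # torch.Size([64, 16, 40, 40])
```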

Is this a sound and correct approach in terms of defining the dimensions for the input into nn.Conv2d?

Does nn.Conv2d ensure that each sample in the batch is processed independently in the forward pass? I suppose no grouped convolution is needed for this?
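To address the independence question: samples in a batch never mix in a convolution, which can be checked by comparing a batched forward pass against a single-sample one. A small sketch (the layer configuration is an assumption):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(8, 16, kernel_size=3, padding=1)
x = torch.randn(64, 8, 40, 40)

batched = conv(x)       # all 64 samples at once
single = conv(x[0:1])   # first sample passed alone

# The first batched result matches the standalone result:
# the batch dimension is purely parallel, samples never interact.
print(torch.allclose(batched[0:1], single, atol=1e-6))
```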

Also, is there a way to parallelize the two networks and get rid of the for-loop?

Thank you.

I’m not sure I understand your use case completely, but you could most likely use groups=2, which might avoid the loop.
From the docs:

At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.

Also, this post gives a visualization of how grouped convolutions are applied.
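As a sketch of the groups=2 idea: stack the two networks' channels into one channel dimension, run a single grouped convolution, and split the result again. The out_channels and kernel size are assumptions:

```python
import torch
import torch.nn as nn

x = torch.randn(64, 2, 8, 40, 40)   # (batch, networks, channels, H, W)
x = x.flatten(1, 2)                 # (64, 16, 40, 40): stack both channel sets

# groups=2: the first 8 input channels feed only the first 16 output
# channels, the last 8 feed only the last 16 -- two convs side by side.
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1, groups=2)
out = conv(x)                       # (64, 32, 40, 40)

out = out.unflatten(1, (2, 16))     # back to (64, networks=2, 16, 40, 40)
print(out.shape)
```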