Conv2d in parallel

conv2d expects an input of size [batch_size, num_ch, height, width].

I have a sequence of 16 regions of interest (RoIs) per observation (RoI-pooled to the same size), so my input has size
[batch_size, num_rois, num_ch, height, width].

How do I perform the same convolution on each RoI in parallel? My sequential version is:

convolved_rois = []
for roi in range(rois.size(1)):
    convolved_rois.append(self.conv(rois[:, roi]))
# stack (not cat) along dim 1 to keep a separate RoI dimension
convolved_rois = torch.stack(convolved_rois, dim=1)

Why not just .view(bs * num_rois, num_ch, h, w) before conv2d and .view back afterwards? The convolution treats each RoI as an independent sample in the batch, so the result is identical.
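A minimal sketch of the flatten-before-conv trick (the sizes and the `conv` layer here are illustrative, not from the original post):

```python
import torch
import torch.nn as nn

bs, num_rois, num_ch, h, w = 2, 16, 3, 7, 7
rois = torch.randn(bs, num_rois, num_ch, h, w)
conv = nn.Conv2d(num_ch, 8, kernel_size=3, padding=1)

# Fold the RoI dimension into the batch dimension, convolve once, unfold.
out = conv(rois.view(bs * num_rois, num_ch, h, w))
out = out.view(bs, num_rois, *out.shape[1:])  # [bs, num_rois, 8, h, w]

# Equivalent sequential version for comparison.
seq = torch.stack([conv(rois[:, i]) for i in range(num_rois)], dim=1)
assert torch.allclose(out, seq, atol=1e-6)
```

Note that if `rois` is not contiguous (e.g. it came from a permute), `.view` will raise an error; use `.reshape` or call `.contiguous()` first.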
