Sending bags of images to a model

Hello everybody.
I am trying to send bags of instances with shape torch.Size([8, 6, 3, 224, 224])
to a model, but I am getting this error:

RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [8, 6, 3, 224, 224]

8 → batch size, aka number of bags
6 → number of instances per bag

If I reshape it to torch.Size([48, 3, 224, 224]), the whole idea of bags disappears.

How can I send bags to the model?

What kind of model are you using (or the downstream task) and how does it use “bags” or sequences of images?

I am using ResNet18. Actually, I haven't changed the model itself yet. I am new to this and don't have any idea what to do so that the model receives the bag.

The idea is that if one of the instances (images) in a bag is positive, the whole bag is classified as positive, otherwise negative. I need to take the bag of features and use an aggregation method to get a 0 or 1 output.
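In code, the rule I have in mind is something like this (just a sketch; `instance_preds` is a hypothetical tensor of per-instance 0/1 predictions for one bag):

```python
import torch

# hypothetical per-instance predictions (0/1) for a single bag of 6 images
instance_preds = torch.tensor([0, 0, 1, 0, 0, 0])
bag_label = instance_preds.max().item()  # 1 if any instance is positive, else 0
```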

In that case you would want to do something like reshaping the 5D input to 4D, and then reshape the model output back afterwards for the aggregation.

Something like input.reshape(-1, 3, 224, 224) for the input and output.reshape(num_bags, -1) for the output.
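A minimal sketch of that, assuming a torchvision ResNet18 with its final layer swapped for a single-logit head and max pooling over the instance scores as the aggregation (the tensor names and the random input are just placeholders for your data):

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet18 backbone with a single-logit head for per-instance scores.
# (weights=None keeps the sketch self-contained; use pretrained weights if you prefer.)
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

bag_input = torch.randn(8, 6, 3, 224, 224)  # [num_bags, instances_per_bag, C, H, W]
num_bags, instances_per_bag = bag_input.shape[:2]

# Flatten the bags into a plain 4D batch so conv2d accepts it.
flat_input = bag_input.reshape(-1, 3, 224, 224)   # [48, 3, 224, 224]
instance_logits = model(flat_input)                # [48, 1]

# Restore the bag structure, then aggregate: the bag is positive
# if its highest-scoring instance is positive (max pooling).
instance_logits = instance_logits.reshape(num_bags, instances_per_bag)  # [8, 6]
bag_logits = instance_logits.max(dim=1).values                          # [8]

# Bag-level binary loss (the targets here are placeholders).
targets = torch.zeros(num_bags)
loss = nn.functional.binary_cross_entropy_with_logits(bag_logits, targets)
```

Max pooling matches the rule you described (any positive instance makes the bag positive), but you could swap in mean pooling or an attention-based aggregation instead.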

Thank you, I will try.