Multiple Instance Learning - Implementation of dataset object to load data bag-wise

Hi everyone! I'm kind of new to deep learning and PyTorch. That's why I'm having some issues and thought I'd address them here… :smile:

I need to create a dataset object so that the data can afterwards be loaded as a set of labeled bags (multiple instance learning). Each bag will contain 10 images. To label the bags (directories), I created a CSV file with two columns: one holding the directory name and the other a numeric value as the label (it's a regression task). How can I write the dataset object so that, when I pass it to the DataLoader, my data is loaded bag-wise?

I could not find any piece of code or link that could help me further. Any help would be much appreciated!

Could you explain the use case of creating the bags a bit more, please?
If I understand it correctly, you would like to sample from specific indices to create these bags?
Would you like to return a data tensor in [batch_size, 10, ...] or would 10 already be the batch size?

Thank you for your answer @ptrblck! Sorry for not explaining it more clearly. Actually, my batch size should be 10, meaning there will be 10 bags per batch, and each bag will contain a specific number of images, let's say 15. Each bag is labeled by a number from a CSV file.

Does this seem more reasonable to you? Or do you think sampling from already loaded images to create the bags makes more sense?
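A minimal sketch of what such a bag-wise `Dataset` could look like. Everything here is an assumption for illustration: the class name `BagDataset`, the CSV layout (`directory_name,target` per row), and the use of `torch.save`/`torch.load` as a stand-in for a real image loader (actual code would use `PIL.Image.open` plus torchvision transforms). The synthetic data at the bottom just makes the sketch runnable end to end.

```python
import csv
import os
import tempfile
import torch
from torch.utils.data import Dataset, DataLoader

class BagDataset(Dataset):
    """One item = one bag: a fixed number of images from a directory,
    plus a regression target read from a CSV file."""
    def __init__(self, root_dir, csv_path, bag_size=15):
        self.root_dir = root_dir
        self.bag_size = bag_size
        with open(csv_path, newline="") as f:
            # assumed CSV layout: <directory name>,<regression target>
            self.samples = [(name, float(target)) for name, target in csv.reader(f)]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        dir_name, target = self.samples[idx]
        bag_dir = os.path.join(self.root_dir, dir_name)
        images = []
        for fname in sorted(os.listdir(bag_dir))[: self.bag_size]:
            # stand-in loader: real code would open an image file and
            # apply transforms; here each file is a saved [C, H, W] tensor
            images.append(torch.load(os.path.join(bag_dir, fname)))
        # stack the bag into a single [bag_size, C, H, W] tensor
        return torch.stack(images), torch.tensor(target)

# --- tiny synthetic dataset so the sketch is runnable ---
root = tempfile.mkdtemp()
csv_path = os.path.join(root, "labels.csv")
with open(csv_path, "w", newline="") as f:
    writer = csv.writer(f)
    for i in range(20):                      # 20 bags
        bag = f"bag_{i:03d}"
        os.makedirs(os.path.join(root, bag))
        for j in range(15):                  # 15 "images" per bag
            torch.save(torch.rand(3, 8, 8), os.path.join(root, bag, f"{j:02d}.pt"))
        writer.writerow([bag, 0.5 * i])

loader = DataLoader(BagDataset(root, csv_path, bag_size=15), batch_size=10, shuffle=True)
images, targets = next(iter(loader))
print(images.shape, targets.shape)
```

With `batch_size=10` the DataLoader then yields `images` of shape `[10, 15, C, H, W]` (10 bags of 15 images) and `targets` of shape `[10]`, which matches the setup described above.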


Hi, I am doing the same project with medical data.

The images are super large, with a resolution of 5000x5000. To handle this I split each image into patches and saved them in folders, then created a bag from each folder's images.
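The patching step described above could be sketched with `torch.Tensor.unfold`, which slices a `[C, H, W]` tensor into non-overlapping square windows (the helper name `make_patches` is an assumption; the demo uses a 1000x1000 tensor instead of the full 5000x5000 only to keep it light):

```python
import torch

def make_patches(image, patch_size):
    """Split a [C, H, W] image into non-overlapping [patch_size, patch_size]
    patches, returned as a [num_patches, C, patch_size, patch_size] tensor."""
    c, h, w = image.shape
    # unfold height then width: [C, H//p, W//p, p, p]
    patches = image.unfold(1, patch_size, patch_size).unfold(2, patch_size, patch_size)
    # move the grid dims to the front and flatten them into one patch axis
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, patch_size, patch_size)

# a real slide would be [3, 5000, 5000]; 1000x1000 keeps the demo small
image = torch.rand(3, 1000, 1000)
patches = make_patches(image, 100)
print(patches.shape)  # 100 patches of 3x100x100
```

Each patch can then be written to its folder (e.g. with `torchvision.utils.save_image`) to build one bag per slide.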

My bags have more than 187 images each, which I couldn't fit into GPU memory. Is there any other way to solve this?

I don't fully understand what a "bag" means in this use case.
If your data still doesn’t fit into the GPU memory, you might need to reduce the patch sizes further.
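Besides shrinking the patches, one option is to push the bag through the model in chunks, keeping only one chunk on the GPU at a time and pooling the per-instance features afterwards. A sketch under assumed names (`extractor`, `bag_features`, a 64-dim feature size, and mean pooling as the MIL aggregation):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# stand-in feature extractor; a real model would be a CNN backbone
extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 100 * 100, 64)).to(device)

def bag_features(bag, chunk_size=32):
    """bag: [num_instances, C, H, W] kept on the CPU.
    Only chunk_size instances are moved to the device at a time."""
    feats = []
    for chunk in bag.split(chunk_size):
        feats.append(extractor(chunk.to(device)))
    # mean-pool instance features into one bag-level representation
    return torch.cat(feats).mean(dim=0)

bag = torch.rand(187, 3, 100, 100)  # 187 patches of 100x100, as above
print(bag_features(bag).shape)
```

Note that during training the autograd graph for all chunks still accumulates, so activation memory can remain large; `torch.utils.checkpoint` can trade extra compute for memory in that case.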

I tried reducing the patch size to 100x100 as well, but it still doesn't fit in GPU memory!