How to best use large images for training neural networks

I would like to ask how I should deal with the images I have. They are really large, with shape (3000, 4000, 3).

I’m working on a multilabel classification model.

I want to know if it’s wise to slice the images into equal tiles (using image_slicer) and feed those to the model, as well as resize the original images to the model’s required input size and feed those too.

So the training set would be: resized_original_images + tiles_of_original_image
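The tiling-plus-resizing scheme above could be sketched roughly as follows. This is a minimal illustration, not the image_slicer API: the tile size of 1000×1000 and the target size of 224×224 are assumptions, and the nearest-neighbour resize is a stand-in for whatever torchvision/PIL transform you actually use.

```python
import numpy as np

def make_tiles(img: np.ndarray, tile_h: int, tile_w: int) -> np.ndarray:
    """Split an (H, W, C) image into (n_tiles, tile_h, tile_w, C).

    The tile size must divide the image dimensions evenly.
    """
    h, w, c = img.shape
    assert h % tile_h == 0 and w % tile_w == 0, "tile size must divide the image evenly"
    # Reshape into a grid of tiles, then flatten the grid dimensions.
    tiles = img.reshape(h // tile_h, tile_h, w // tile_w, tile_w, c)
    return tiles.transpose(0, 2, 1, 3, 4).reshape(-1, tile_h, tile_w, c)

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Crude nearest-neighbour resize; in practice use a torchvision/PIL transform."""
    h, w, _ = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

# A dummy image with the shape from the question.
img = np.zeros((3000, 4000, 3), dtype=np.uint8)
tiles = make_tiles(img, 1000, 1000)    # 3 x 4 grid -> (12, 1000, 1000, 3)
small = resize_nearest(img, 224, 224)  # (224, 224, 3)
```

The training set described above would then be the resized full image plus its 12 tiles, each resized to the model input size before being fed in.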

Thank you

I’m not sure if slices would work, and I assume it depends a bit on your use case.
E.g. in a classification use case, I would expect that smaller patches of the original image might not contain the valid class at all.
Passing a patch showing only “sky” while the target is “dog” could confuse the model.
However, the best way is probably just to try it out and see if the model is able to extract enough information from the smaller patches.


Thanks for your answer. I’m actually going to slice the images before labeling each slice, and the same goes for the original image, so there will be no problem with the labels. I was just wondering whether using these patches as well as the original image would be redundant for the neural network. I’d also like to ask what an approximately good number of samples would be, since I don’t want the model to overfit.