Training with images too big for memory

Hi,

I am currently working on a project that requires me to train image classification models on medical images. However, these are high-resolution images that are too large to fit into memory.

We could chop the images up into tiles, but that would create other problems and seems very inefficient.

Is there a way in PyTorch to load images only partially (rather than all at once) while still having the model effectively run over the entire picture?
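
To make what I mean by "partial loading" concrete, here is a rough sketch of the kind of lazy, tile-by-tile access I'm picturing. It assumes the images have been exported as `.npy` arrays of shape (H, W, C) so that `numpy.memmap`-style slicing only reads the requested region from disk; the file names, tile size, and labels are just placeholders, and I'm not claiming this is the right solution:

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader


class TiledImageDataset(Dataset):
    """Lazily reads fixed-size tiles from large images stored as .npy files.

    Each file is opened with numpy's mmap_mode, so only the slice belonging
    to the requested tile is actually read from disk.
    """

    def __init__(self, image_paths, labels, tile_size=512):
        self.tile_size = tile_size
        self.index = []  # (path, label, row_offset, col_offset)
        for path, label in zip(image_paths, labels):
            # Memory-map just to read the shape; no pixel data is loaded yet.
            shape = np.load(path, mmap_mode="r").shape  # expected (H, W, C)
            for r in range(0, shape[0] - tile_size + 1, tile_size):
                for c in range(0, shape[1] - tile_size + 1, tile_size):
                    self.index.append((path, label, r, c))

    def __len__(self):
        return len(self.index)

    def __getitem__(self, i):
        path, label, r, c = self.index[i]
        img = np.load(path, mmap_mode="r")
        # Slicing the memmap reads only this tile from disk.
        tile = np.array(img[r:r + self.tile_size, c:c + self.tile_size])
        tile = torch.from_numpy(tile).permute(2, 0, 1).float() / 255.0
        return tile, label


# Hypothetical file list; in my case these would be the medical images
# exported as .npy arrays.
dataset = TiledImageDataset(["scan_0.npy", "scan_1.npy"], labels=[0, 1])
loader = DataLoader(dataset, batch_size=8, shuffle=True, num_workers=4)
```

My question is whether PyTorch (or an ecosystem library) offers something like this out of the box, ideally in a way that still lets the model aggregate information over the whole image rather than treating each tile independently.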

Any input is appreciated.

Thanks
Paul