I have a dataset of over a thousand high-resolution whole-slide digital pathology images, and my goal is to train a classifier.
The problem I’m facing is that I can’t train an image classifier because of memory errors (“tried to allocate more memory than is available. Session has restarted.”).
Each .tif file has dimensions (60797, 34007, 3), and I want to scale the images down without losing critical information.
Can anyone advise on how to work with these huge .tif files? Thanks.
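One common approach is to never hold the full-resolution slide in memory at once: read it as a memory-mapped array (e.g. via `tifffile.memmap`, or a pyramidal reader such as OpenSlide) and downscale it strip by strip. Below is a minimal NumPy sketch of that idea; the function name `downscale_in_tiles` and the tile size are my own choices for illustration, and block averaging is just one possible downscaling filter:

```python
import numpy as np

def downscale_in_tiles(img, factor, tile=1024):
    """Block-average downscale, processing one strip of rows at a time.

    `img` can be a memory-mapped array (e.g. from tifffile.memmap),
    so only `tile * factor` rows are materialized in RAM at once.
    """
    h, w, c = img.shape
    h2, w2 = h // factor, w // factor
    out = np.empty((h2, w2, c), dtype=img.dtype)
    for y0 in range(0, h2, tile):
        y1 = min(y0 + tile, h2)
        # Load just this strip, then average each factor x factor block.
        strip = np.asarray(img[y0 * factor:y1 * factor, :w2 * factor])
        strip = strip.reshape(y1 - y0, factor, w2, factor, c).mean(axis=(1, 3))
        out[y0:y1] = strip.astype(img.dtype)
    return out
```

For very large downscale factors on whole-slide images, reading from a lower pyramid level (if the .tif is pyramidal) is usually faster than averaging the base level, but the strip-wise idea is the same.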
From the code you shared, it looks like you are reducing the image from (60797, 34007, 3) to (224, 224, 3) and then applying random rotations and several other transforms. The question is where you are getting the memory error: on the CPU or on the GPU?
The DataLoader returns tensors of size (3, 224, 224), which should not cause memory problems on the GPU unless you use a very large batch size.
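To see why the GPU side should be fine, here is a quick back-of-the-envelope calculation (batch size 32 and float32 tensors are just example assumptions):

```python
# Approximate size of one DataLoader batch of (3, 224, 224) tensors.
def batch_megabytes(batch_size, c=3, h=224, w=224, bytes_per_elem=4):
    # float32 -> 4 bytes per element; result in MiB
    return batch_size * c * h * w * bytes_per_elem / 2**20

print(batch_megabytes(32))  # -> 18.375 (MiB), tiny compared to GPU memory
```

If the crash happens before the batch reaches the GPU, the culprit is more likely the CPU-side decode of the full-resolution slide inside the Dataset, not the batch itself.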