Instance segmentation on big images

Hello,
I would like to build an instance segmentation program that takes 3000x2500 images as input to the neural network. I tested with detectron2, but the GPU memory exploded and I couldn’t do it. I have a GPU with 24GB of RAM.
Thank you for your help.
Best regards

Somewhat related to this answer.
I don’t know if you are planning to train the model or just to use it for inference.
For training, you could try to trade compute for memory via torch.utils.checkpoint or CPU offloading as described in this post.
For inference, you should wrap the forward pass in a torch.no_grad() context so that no intermediate activations are stored for backward. If this still runs out of memory, you might need to use multiple GPUs and apply model sharding.
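As a minimal sketch of the checkpointing idea (using a toy network — a detectron2 backbone would be split into checkpointed stages in the same way), `torch.utils.checkpoint.checkpoint` discards a block's intermediate activations during the forward pass and recomputes them during backward:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Hypothetical toy network standing in for a real segmentation backbone.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        # checkpoint() does not store block1/block2 activations; they are
        # recomputed in the backward pass, trading compute for memory.
        x = checkpoint(self.block1, x, use_reentrant=False)
        x = checkpoint(self.block2, x, use_reentrant=False)
        return self.head(x)

model = TinyNet()
x = torch.randn(1, 3, 256, 256, requires_grad=True)
out = model(x)
out.mean().backward()  # gradients still flow through the checkpointed blocks
```

The memory savings grow with the size of the checkpointed blocks, at the cost of roughly one extra forward pass per block during backward.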
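A short sketch of the inference case: inside `torch.no_grad()` no autograd graph is built, so intermediate activations are freed as soon as they are consumed (the model here is a made-up placeholder):

```python
import torch
import torch.nn as nn

# Placeholder model; substitute your trained segmentation network.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, 1),
)
model.eval()  # also disables dropout/batchnorm updates

x = torch.randn(1, 3, 512, 512)
with torch.no_grad():
    out = model(x)  # no graph is recorded, so activations are not kept

print(out.requires_grad)  # False
```

Combined with `model.eval()`, this is the standard inference setup; it typically cuts activation memory enough that a single large image fits where training would not.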