Image Patching question

Hello everyone!

I had a simple question regarding image patching.

For example, suppose we are doing segmentation or any other image processing task.

  • If an image is (256,256) and during training and testing we split it into patches of size (32,32), why not just resize the image to (32,32) directly? Do we lose information if we do so?

  • I don’t really understand why we apply patches in the first place. Does it improve segmentation performance, or is it purely to reduce computational cost?

Thank you!

  1. Yes, resizing the image from 256x256 = 65536 pixels down to 32x32 = 1024 pixels discards roughly 98% of the pixel values, so you lose fine detail that segmentation often depends on. Patching keeps every pixel; it just processes them in smaller tiles.
  2. Also yes, feeding the full-resolution image to the model might cause OOM issues depending on the model and the GPU used. Segmentation results can thus benefit from a patch-based approach, since it lets you train at full resolution within a fixed memory budget.
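A minimal NumPy sketch of the difference (shapes and image are illustrative): non-overlapping patching preserves every pixel, and stitching the patches back reconstructs the original image exactly, whereas resizing to (32,32) keeps only 1024 of the 65536 values.

```python
import numpy as np

# Toy (256, 256) "image" standing in for real data.
image = np.arange(256 * 256, dtype=np.float32).reshape(256, 256)

patch = 32
grid = 256 // patch  # 8 patches per side

# Split into an 8x8 grid of (32, 32) patches without copying any data away.
patches = image.reshape(grid, patch, grid, patch).swapaxes(1, 2)
print(patches.shape)  # (8, 8, 32, 32)

# Stitching the patches back yields the original image bit-for-bit:
# no information was lost, unlike downsampling to (32, 32).
reconstructed = patches.swapaxes(1, 2).reshape(256, 256)
assert np.array_equal(reconstructed, image)
```

At inference time you would run the model on each patch and tile the predictions back together the same way (often with overlap to avoid border artifacts).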