I have a simple question regarding image patching.
For example, suppose we are doing segmentation or some other image-processing task. If an image is (256, 256) and during training and testing we split it into patches of size (32, 32), why not just resize the image to (32, 32) directly? Do we lose information if we do so?
I don’t really understand why we apply patches in the first place. Does it improve segmentation performance, or is it purely to reduce computational cost?
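To make the comparison concrete, here is a small NumPy sketch (with a random array standing in for a real image) showing that patching regroups all 256×256 pixels into 64 tiles, while a resize to (32, 32) keeps only 1/64 of the pixels:

```python
import numpy as np

# Dummy 256x256 single-channel image standing in for real data.
image = np.random.rand(256, 256)

# Patching: split into non-overlapping 32x32 tiles -> 8x8 = 64 patches.
# Every original pixel is kept, just regrouped.
patches = image.reshape(8, 32, 8, 32).transpose(0, 2, 1, 3).reshape(-1, 32, 32)
print(patches.shape)  # (64, 32, 32)
print(patches.size)   # 65536 pixels, same as 256*256

# Naive resize: subsample every 8th pixel to get a single 32x32 image.
# (A real resize would interpolate/average, but pixels are discarded either way.)
resized = image[::8, ::8]
print(resized.shape)  # (32, 32)
print(resized.size)   # 1024 pixels, only 1/64 of the original
```

So patching is lossless (the full-resolution detail is preserved, one tile at a time), whereas resizing throws away detail that a segmentation model may need for fine boundaries.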