I have two images: the input image has size (64, 64, 32) and the label (ground truth) has size (128, 128, 64). I want to create patches of these two images with patch sizes (32, 32, 32) and (64, 64, 64) respectively. The problem is that I am unable to create the patches at matching positions. That is, the first (32, 32, 32) patch of the input should correspond to the first (64, 64, 64) patch of the ground truth, but instead the first (32, 32, 32) patch ends up paired with, say, the last (64, 64, 64) patch. Please see the images below as a reference for the issue. I would be very thankful for any input on how to resolve this.
Note: I have used PyTorch's sequential sampler (torch.utils.data.sampler.SequentialSampler) to load the patches.
So I am trying to super-resolve an image, i.e., use image pairs of size (64, 64, 32) and (128, 128, 64) for a network to learn to increase the resolution from (64, 64, 32) to (128, 128, 64). I saw my network was overfitting, and when I debugged I noticed the issue was a patch-volume mismatch. I am using a patch size of a 32-cuboid for the low-resolution image and a 64-cuboid for the high-resolution image (which is the target value/label that the network is learning). So I am unable to map each patch (32-cuboid) from the low-resolution image to the corresponding patch (64-cuboid) in the high-resolution image. I have tried to illustrate this with a 2D image, but conceptually this is how I aim to feed the 3D MRI data to the network for training.
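To make the intended correspondence concrete, here is a minimal sketch (the `patch_coords` helper and the row-major patch ordering are my assumptions, not your actual loader): for a 2x scale factor, a low-resolution patch starting at coordinate `(x, y, z)` should be paired with the high-resolution patch starting at `(2x, 2y, 2z)`.

```python
# Assumed shapes from the question: low-res volume (64, 64, 32),
# low-res patch (32, 32, 32), integer scale factor 2.
lr_shape, lr_patch = (64, 64, 32), (32, 32, 32)
scale = 2

def patch_coords(idx):
    """Map a sequential patch index to matching start coordinates
    in the low- and high-resolution volumes (row-major patch grid)."""
    ny = lr_shape[1] // lr_patch[1]  # patches along axis 1
    nz = lr_shape[2] // lr_patch[2]  # patches along axis 2
    x, rem = divmod(idx, ny * nz)
    y, z = divmod(rem, nz)
    lr_start = (x * lr_patch[0], y * lr_patch[1], z * lr_patch[2])
    hr_start = tuple(s * scale for s in lr_start)
    return lr_start, hr_start

print(patch_coords(1))  # -> ((0, 32, 0), (0, 64, 0))
```

If both loaders index patches through a mapping like this (same index, scaled coordinates), the low- and high-resolution patches stay aligned regardless of sampling order.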
If the target image is scaled by an integer factor (in your case 2x), you could increase the kernel size and stride by the same factor in the unfold method, which would create the same number of patches for the input and the target (with a different patch size, of course).
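A minimal sketch of this idea using `Tensor.unfold` (the dummy volumes and the `patches_3d` helper are illustrative, not your actual data pipeline):

```python
import torch

# Dummy volumes with the shapes from the question.
lr = torch.arange(64 * 64 * 32, dtype=torch.float32).reshape(64, 64, 32)
hr = torch.arange(128 * 128 * 64, dtype=torch.float32).reshape(128, 128, 64)

def patches_3d(vol, k):
    # Unfold each spatial dim with kernel k and stride k (non-overlapping),
    # then flatten the patch-grid dims into one: (num_patches, k, k, k).
    p = vol.unfold(0, k, k).unfold(1, k, k).unfold(2, k, k)
    return p.reshape(-1, k, k, k)

lr_patches = patches_3d(lr, 32)  # kernel/stride 32 for the input
hr_patches = patches_3d(hr, 64)  # kernel/stride scaled by 2x for the target

# Both give 4 patches, enumerated in the same row-major spatial order,
# so lr_patches[i] and hr_patches[i] cover the same region of the scene.
print(lr_patches.shape, hr_patches.shape)
```

Because `unfold` enumerates patches in the same dimension order for both volumes, patch `i` of the input always covers the same region of the scene as patch `i` of the target, even with a sequential sampler.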