How can I crop an image into smaller images in the DataLoader instead of creating a new dataset with manually cropped images?
def __getitem__(self, index):
img = Image.open(self.files[index % len(self.files)])
img_lr = self.lr_transform(img)
img_hr = self.hr_transform(img)
return {"lr": img_lr, "hr": img_hr}
I want to crop the image (2048x2048) into four smaller images (1024x1024 each):
 __ __        __ __
|     |  =>  |__|__|
|__ __|      |__|__|
If you don’t want to apply the cropping inside Dataset.__getitem__, you could apply it in the DataLoader loop:
for data, target in loader:
# apply the transformation here
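A minimal sketch of that loop, using a TensorDataset with random images as a stand-in for the real dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# stand-in dataset: two single-channel 2048x2048 images with dummy targets
images = torch.randn(2, 1, 2048, 2048)
targets = torch.zeros(2, dtype=torch.long)
loader = DataLoader(TensorDataset(images, targets), batch_size=1)

for data, target in loader:
    # apply the transformation here: split each image into a 2x2 grid
    # of 1024x1024 patches via unfold
    patches = data.unfold(2, 1024, 1024).unfold(3, 1024, 1024)
    print(patches.shape)  # torch.Size([1, 1, 2, 2, 1024, 1024])
```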
To create these four patches, you could either use unfold or torchvision.transforms.functional.five_crop and remove the center crop:
data = torch.randn(1, 1, 2048, 2048)
patches = data.unfold(2, 1024, 1024).unfold(3, 1024, 1024)
print(patches.shape)
> torch.Size([1, 1, 2, 2, 1024, 1024])
# reshape if needed
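For example, the 2x2 patch grid can be folded into the batch dimension like this (a sketch; the permute moves the grid dimensions in front of the channel dimension before flattening):

```python
import torch

data = torch.randn(1, 1, 2048, 2048)
patches = data.unfold(2, 1024, 1024).unfold(3, 1024, 1024)
# [batch, channels, 2, 2, 1024, 1024] -> [batch, 2, 2, channels, 1024, 1024]
patches = patches.permute(0, 2, 3, 1, 4, 5).contiguous()
# merge batch and patch-grid dims into a single batch of four images
patches = patches.view(-1, 1, 1024, 1024)
print(patches.shape)  # torch.Size([4, 1, 1024, 1024])
```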
import torchvision.transforms.functional as TF

imgs = TF.five_crop(data, size=1024)
# remove the center crop; imgs is a tuple, so you might want to `torch.stack` them
imgs = imgs[:4]