I’m using the patchify PyPI module to first patchify images and save them as a new dataset, and then load them back as individual images in a PyTorch Dataset.
It’s just:
`patches_img = patchify(image, (patch_size, patch_size, 3), step=patch_size)`
for every image, after which I load the individual patches in just like any other images.
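For context, with a non-overlapping step the call above returns a grid of patches rather than a flat list. A quick NumPy sketch of the equivalent reshape shows the shapes involved (this mimics the output layout, not the library's internals; `patch_size = 64` and the dummy image are made-up values for illustration):

```python
import numpy as np

patch_size = 64  # hypothetical value for illustration
# dummy 256x256 RGB image with non-constant content
image = (np.arange(256 * 256 * 3) % 251).reshape(256, 256, 3)

# non-overlapping split, equivalent to step=patch_size
h, w, c = image.shape
grid = (image
        .reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
        .transpose(0, 2, 1, 3, 4))
print(grid.shape)  # (4, 4, 64, 64, 3): a 4x4 grid of 64x64 RGB patches
```

So `grid[i, j]` is the patch at row `i`, column `j` of the image; patchify's own output has a similar multi-axis grid layout.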
What I’d like to do is something like this:
```python
class Dataset(object):
    # images are the full, pre-patchified images
    def __init__(self, images: List[str]):
        self.images = images
        self.transformer = Transform()

    def __getitem__(self, idx: int):
        image = self.images[idx]
        # IMPORTANT: switch to color later
        patches = patchify(image, (patch_size, patch_size, 3), step=patch_size)
        image_patches_tensor = self.transformer(patches)
        return image_patches_tensor

    def __len__(self):
        return len(self.images)
```
How would I do that?
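To make the question concrete, here is roughly the behaviour I'm after: each index maps to one patch, so the dataset's length is the total patch count across all images. This is a minimal pure-NumPy sketch assuming images are already loaded as `(H, W, 3)` arrays whose sides divide evenly by `patch_size` (the class name, `patch_size = 64`, and the in-memory assumption are all mine, for illustration):

```python
import numpy as np

patch_size = 64  # hypothetical; same value as in the patchify call


class PatchPerItemDataset:
    """Sketch of a dataset that yields ONE patch per index."""

    def __init__(self, images):
        self.images = images
        # number of non-overlapping patches in each image
        self.per_image = [
            (im.shape[0] // patch_size) * (im.shape[1] // patch_size)
            for im in images
        ]

    def __len__(self):
        return sum(self.per_image)

    def __getitem__(self, idx):
        # map the flat index to (image index, patch index within image)
        img_idx = 0
        while idx >= self.per_image[img_idx]:
            idx -= self.per_image[img_idx]
            img_idx += 1
        im = self.images[img_idx]
        cols = im.shape[1] // patch_size
        row, col = divmod(idx, cols)
        # cut the patch out of the full image on the fly
        return im[row * patch_size:(row + 1) * patch_size,
                  col * patch_size:(col + 1) * patch_size]
```

A transform/to-tensor step would then apply to the single returned patch rather than to the whole grid.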
Thanks in advance!