I want to load a dataset at both size 224 and its actual size. But if I use a transform in the DataLoader I can only get one form of the dataset, so how can I load them together?
You may refer to the implementation of ImageFolder
Here is pseudo code that may be helpful:
import torchvision as tv

class MyImageFolder(tv.datasets.ImageFolder):
    def __getitem__(self, index):
        path, label = self.imgs[index]           # self.imgs holds (path, class_index) pairs
        origin_data = self.loader(path)          # load the original, untransformed image
        transform_data = self.transform(origin_data)
        return origin_data, transform_data, label

dataloader = DataLoader(MyImageFolder(root))
for origin_datas, transform_datas, labels in dataloader:
    train()
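The core of the pattern above is a dataset whose `__getitem__` returns both the raw sample and its transformed version. Here is a minimal, torch-free sketch of that idea; `PairDataset` and the toy crop transform are hypothetical names for illustration, not part of any library:

```python
# A dataset-like class whose __getitem__ returns (original, transformed, label),
# mirroring the MyImageFolder pseudocode above without requiring torchvision.

class PairDataset:
    def __init__(self, samples, transform):
        self.samples = samples          # list of (data, label) pairs
        self.transform = transform      # callable applied to the raw data

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        origin, label = self.samples[index]
        return origin, self.transform(origin), label

# Toy "images": pixel lists of varying length; the transform crops each to
# length 2, standing in for cropping every image to a fixed 224x224 size.
samples = [([1, 2, 3, 4], "cat"), ([5, 6, 7], "dog")]
dataset = PairDataset(samples, transform=lambda x: x[:2])

origin, fixed, label = dataset[0]
print(origin, fixed, label)   # [1, 2, 3, 4] [1, 2] cat
```

Each access yields the variable-size original alongside the fixed-size version, which is exactly what a DataLoader iterating such a dataset would hand back per batch.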
Thanks for your elegant method, and I wonder whether the following implementation would work:
First I just use
transform = transforms.ToTensor()
for ImageFolder to load the original dataset,
then use
scale_transform = transforms.Compose([
transforms.Scale(256),
transforms.RandomCrop(224),
])
data_fixed = scale_transform(data)
to scale the images of the dataset.
When you do transform = transforms.ToTensor() in the dataset, it returns a tensor, while
transforms.Scale(256),
transforms.RandomCrop(224),
were both designed for PIL Images. So you need to:
scale_transform = transforms.Compose([
transforms.ToPILImage(),
transforms.Scale(256),
transforms.RandomCrop(224),
transforms.ToTensor()
])
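The reason the ToPILImage step must come first is that Compose applies its transforms in order, so a type-converting step has to precede any transform that expects that type. A minimal re-implementation of the Compose idea (not torchvision's actual code) makes the ordering visible; the `to_pil` and `crop` stand-ins below are toy functions, not real library calls:

```python
# Compose applies transforms left to right; each output feeds the next input.
class Compose:
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, x):
        for t in self.transforms:
            x = t(x)
        return x

# Toy stand-ins: a "tensor" is a list of ints, a "PIL image" is a string.
to_pil = lambda tensor: "".join(str(p) for p in tensor)  # ToPILImage stand-in
crop = lambda img: img[:2]                               # RandomCrop stand-in

pipeline = Compose([to_pil, crop])
print(pipeline([9, 8, 7]))   # "98": to_pil ran first, so crop saw a "PIL image"
```

If `crop` came first it would operate on the raw list, just as Scale and RandomCrop would fail (or misbehave) when handed a tensor instead of a PIL Image.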
Ok, I will try it, Thanks!
Hi guys!
Please consider this idea I just came up with.
The idea is to load batches of images randomly, but have only similar images in one batch.
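One way to sketch that idea: assign each sample a similarity key (for instance its size or class), build batches only within a key group, then shuffle the order of the batches. The function below is a hypothetical stdlib sketch, not a torch Sampler:

```python
import random
from collections import defaultdict

def similar_batches(keys, batch_size, seed=0):
    """keys[i] is the similarity key of sample i; yields index batches
    where every batch contains only indices sharing one key."""
    groups = defaultdict(list)
    for index, key in enumerate(keys):
        groups[key].append(index)

    batches = []
    for indices in groups.values():
        for start in range(0, len(indices), batch_size):
            batches.append(indices[start:start + batch_size])

    random.Random(seed).shuffle(batches)   # random batch order, homogeneous batches
    return batches

keys = ["small", "big", "small", "big", "small"]
for batch in similar_batches(keys, batch_size=2):
    print(batch)   # each printed batch mixes only one key
```

In PyTorch this logic would typically live in a custom batch sampler passed to the DataLoader, so the Dataset itself stays unchanged.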