Small Dataset with High-Resolution Images (Avg: 5100×3500)


(Aysin) #1

I have a small dataset with high-resolution images. I’m using the transform below before loading the data:

transform = transforms.Compose([transforms.Resize((256, 256)),   # Resize (formerly Scale) expects a PIL image, so it must come before ToTensor
                     transforms.ToTensor(),
                     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

data_set = dset.ImageFolder(root="data", transform=transform)
dataloader = torch.utils.data.DataLoader(data_set, batch_size=4, shuffle=True, num_workers=2)

I’m getting a memory error for the code below, which I expected.

import matplotlib.pyplot as plt
%matplotlib inline
    
# obtain one batch of training images
dataiter = iter(dataloader)
images, labels = next(dataiter)
images = images.numpy()

Is there any other way to resize/rescale and then load the data using PyTorch? I.e., not reading the data directly from the folder, but resizing/rescaling it in code first and then using the DataLoader?


#2

What kind of error do you get? Are you really running out of memory?
A single image stored as torch.uint8 would take approximately 5100 * 3500 * 3 / 1024**2 ≈ 51 MB.
So even with a batch size of 4, and even if the next batch is already being loaded by multiple workers, you’ll use less than 500 MB.
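That estimate can be checked with a quick back-of-the-envelope calculation, using the average dimensions from the post above (the “two batches” worst case assumes one batch in use plus one prefetched by the workers):

```python
# Rough memory estimate for uint8 images at the average resolution
# mentioned above: 5100 x 3500 pixels, 3 channels, 1 byte per value.
h, w, c = 3500, 5100, 3
mb_per_image = h * w * c / 1024**2   # bytes -> MiB

batch_size = 4
# one batch in use plus one batch prefetched by the workers
total_mb = mb_per_image * batch_size * 2

print(f"{mb_per_image:.1f} MB per image, {total_mb:.0f} MB for two batches")
```

This confirms the figure quoted above: about 51 MB per image and roughly 409 MB for two batches, comfortably under 500 MB.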

Could you post the error message and the stack trace if possible?


(Aysin) #3

Sorry, my mistake 🙂 I forgot to add this line to use the GPU: `device = torch.device("cuda:0")`

This might help me with another project idea! PyTorch doesn’t support RAW image formats such as .CR2 yet, right?


#4

Well, images in torchvision are loaded using PIL, so basically any format is supported as long as you can find a library to load the data.
rawpy seems to be able to handle raw image data.
You would need to create your own Dataset and write the code that reads and processes the images in __getitem__, or alternatively you could write a loader function and pass it to torchvision.datasets.ImageFolder.
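A minimal sketch of such a custom Dataset, assuming rawpy is installed; the class name, the file/label lists, and the transform handling are illustrative choices, not from the thread:

```python
import torch
from torch.utils.data import Dataset


class RawImageDataset(Dataset):
    """Illustrative Dataset that decodes RAW files (e.g. .CR2) with rawpy."""

    def __init__(self, file_paths, labels, transform=None):
        self.file_paths = file_paths  # list of paths to RAW files
        self.labels = labels          # one label per file
        self.transform = transform    # e.g. torchvision transforms on a PIL image

    def __len__(self):
        return len(self.file_paths)

    def __getitem__(self, idx):
        import rawpy  # imported lazily so the class can be defined without rawpy
        with rawpy.imread(self.file_paths[idx]) as raw:
            rgb = raw.postprocess()   # H x W x 3 uint8 numpy array
        if self.transform is not None:
            from PIL import Image
            rgb = self.transform(Image.fromarray(rgb))
        else:
            rgb = torch.from_numpy(rgb).permute(2, 0, 1)  # HWC -> CHW
        return rgb, self.labels[idx]
```

Alternatively, the same rawpy call could be wrapped in a small loader function and passed as the `loader` argument of `torchvision.datasets.ImageFolder`, keeping the folder-based layout.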


(Aysin) #5

Well, I have at least 500 GB of RAW images! I can come up with at least one dataset, if not more. Thanks for the info!