Hi,
I’m trying to reproduce the results of the paper Learning to See in the Dark (this repo) using PyTorch. I have referred to the implementations in this repo and this repo, but I ran into some issues.
As far as I know, these repos load RAW images in two ways:
- Use PyTorch’s DataLoader and read each RAW image with the rawpy package in a custom __getitem__. This way I can use all of the image pairs for training and validation, but it is extremely slow because these files are quite heavy: one epoch took me about 4200 seconds. (cydonia999’s implementation)
- Load all the images into a dictionary/array up front. Since the images are kept in RAM, this approach needs a lot of memory, but each image only has to be decoded once. After that, the training and validation phases run very quickly (about 30–45 s per epoch on a V100 GPU).
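For reference, here is roughly how I understand the two strategies (a minimal sketch with my own assumptions: `load_pair`, the pair-list layout, and the postprocessing flags are hypothetical, not code from either repo):

```python
import numpy as np
import torch
from torch.utils.data import Dataset


def load_pair(short_path, long_path):
    """Decode one (short-exposure, long-exposure) RAW pair to float32 arrays.

    Hypothetical helper; rawpy is imported lazily so the datasets below can
    be constructed and exercised even where rawpy is not installed.
    """
    import rawpy
    with rawpy.imread(short_path) as raw:
        short = raw.raw_image_visible.astype(np.float32)
    with rawpy.imread(long_path) as raw:
        # ground truth: fully post-processed 16-bit RGB, scaled to [0, 1]
        gt = raw.postprocess(use_camera_wb=True, no_auto_bright=True,
                             output_bps=16).astype(np.float32) / 65535.0
    return short, gt


class OnTheFlyDataset(Dataset):
    """Way 1: decode the RAW files inside __getitem__ (slow, low memory)."""

    def __init__(self, pairs, loader=load_pair):
        self.pairs = pairs      # list of (short_path, long_path) tuples
        self.loader = loader

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        short, gt = self.loader(*self.pairs[idx])
        return torch.from_numpy(short), torch.from_numpy(gt)


class CachedDataset(Dataset):
    """Way 2: decode every pair once up front (fast, RAM-hungry)."""

    def __init__(self, pairs, loader=load_pair):
        self.cache = [loader(s, l) for s, l in pairs]

    def __len__(self):
        return len(self.cache)

    def __getitem__(self, idx):
        short, gt = self.cache[idx]
        return torch.from_numpy(short), torch.from_numpy(gt)
```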
I prefer the second method to the first, because 2000 epochs at more than an hour each would take a very long time. But with the limited RAM I have on Colab Pro (25 GB), I cannot load all of the images. For now, I’m planning to load half of the images and randomly reload a different half every 100 epochs. Is this the most suitable approach for my situation, or is there some way to speed up the loading process when using the DataLoader?
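Concretely, the half-cache scheme I have in mind looks something like this (a sketch under my own assumptions; `PartialCacheDataset`, `on_epoch_end`, and the injected `loader` are hypothetical names, not from either repo):

```python
import random
import torch
from torch.utils.data import Dataset


class PartialCacheDataset(Dataset):
    """Keep a random fraction of the image pairs decoded in RAM and
    re-draw (and re-decode) that subset every `refresh_every` epochs."""

    def __init__(self, pairs, loader, fraction=0.5, refresh_every=100):
        self.pairs = pairs              # list of (short_path, long_path)
        self.loader = loader            # e.g. a rawpy-based decode function
        self.k = max(1, int(len(pairs) * fraction))
        self.refresh_every = refresh_every
        self.epoch = 0
        self._resample()

    def _resample(self):
        # decode a fresh random subset; the old cache is garbage-collected
        subset = random.sample(self.pairs, self.k)
        self.cache = [self.loader(s, l) for s, l in subset]

    def on_epoch_end(self):
        # call this from the training loop after every epoch
        self.epoch += 1
        if self.epoch % self.refresh_every == 0:
            self._resample()

    def __len__(self):
        return self.k

    def __getitem__(self, idx):
        short, gt = self.cache[idx]
        return torch.from_numpy(short), torch.from_numpy(gt)
```

The training loop would just call `dataset.on_epoch_end()` after each epoch, so the expensive re-decoding only happens once every `refresh_every` epochs.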
Best regards,