Hi,
I am looking for an efficient way to load data and pass it to my UNet3D model during the training process.
Since I can’t pass whole volumes to my network, I am cropping the volumes and passing those crops to my model.
Btw, I am using SimpleITK to load my volumes and annotations (.nrrd files).
I have tried 2 different approaches:
- Preprocessing the data and creating crops prior to training. This is very efficient in terms of GPU usage, but saving all those crops takes a lot of disk space. My current dataset is not that big, but with a bigger dataset this approach becomes infeasible.
- Generating metadata about where to crop the volume and mask. Basically, prior to training, I go through my dataset and generate metadata that looks like this:
crop_info = {
    "image_path": imgpath,
    "annot_path": annotpath,
    "z_start": z_start,
    "z_end": z_end,
    "y_start": y_start,
    "y_end": y_end,
    "x_start": x_start,
    "x_end": x_end,
    "pad": pad_width,
}
This second approach is very efficient from a disk-space perspective because the metadata takes up hardly any space at all. During training, I iterate over a list of metadata entries like the one above; for each entry I read the volume image and annotation image, transform them, crop them, and finally pass the crop to my model, roughly as in the sketch below. Because there is so much I/O going on, my GPU usage comes in spikes when I monitor it: it jumps to 100%, drops to 0%, goes back to 100%, and so on. So it's very inefficient from a GPU-utilization perspective.
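For reference, my current map-style Dataset looks roughly like this (a simplified sketch: the transform, how exactly I apply pad_width, and the dtype/channel handling are placeholders for my actual pipeline):

import numpy as np
import SimpleITK as sitk
import torch
from torch.utils.data import Dataset

class CropDataset(Dataset):
    # One metadata entry == one training sample (one crop).
    def __init__(self, crop_infos, transform=None):
        self.crop_infos = crop_infos      # list of dicts like crop_info above
        self.transform = transform

    def __len__(self):
        return len(self.crop_infos)

    def __getitem__(self, idx):
        info = self.crop_infos[idx]
        # Every single crop re-reads the full volume and annotation from disk,
        # which is why the GPU usage spikes between 0% and 100%.
        img = sitk.GetArrayFromImage(sitk.ReadImage(info["image_path"]))    # (z, y, x)
        annot = sitk.GetArrayFromImage(sitk.ReadImage(info["annot_path"]))
        if self.transform is not None:
            img, annot = self.transform(img, annot)
        # Pad before cropping; the real pad_width handling is more involved.
        img = np.pad(img, info["pad"])
        annot = np.pad(annot, info["pad"])
        crop_img = img[info["z_start"]:info["z_end"],
                       info["y_start"]:info["y_end"],
                       info["x_start"]:info["x_end"]]
        crop_annot = annot[info["z_start"]:info["z_end"],
                           info["y_start"]:info["y_end"],
                           info["x_start"]:info["x_end"]]
        return (torch.from_numpy(crop_img).float().unsqueeze(0),   # add channel dim
                torch.from_numpy(crop_annot).long())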
I tried to find a way to load the volume and annotation files once and iterate through all the possible crops before moving on to the next image and annotation, but I couldn't find a proper way to do this.
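To make it clearer what I was trying to do, this is the kind of "load each volume once, then yield all of its crops" iteration I have in mind, sketched with torch.utils.data.IterableDataset (simplified: class names are just for illustration, and padding and transforms are omitted):

import SimpleITK as sitk
import torch
from torch.utils.data import IterableDataset, get_worker_info

class VolumeCropIterable(IterableDataset):
    # Loads each volume/annotation pair once and yields all of its crops
    # before moving on to the next pair.
    def __init__(self, crop_infos):
        # Group metadata entries by the volume they belong to,
        # so each file is read only once per epoch.
        self.by_volume = {}
        for info in crop_infos:
            self.by_volume.setdefault(info["image_path"], []).append(info)

    def __iter__(self):
        volumes = list(self.by_volume.items())
        worker = get_worker_info()
        if worker is not None:
            # Split whole volumes across DataLoader workers so that two
            # workers never load the same file.
            volumes = volumes[worker.id::worker.num_workers]
        for image_path, infos in volumes:
            img = sitk.GetArrayFromImage(sitk.ReadImage(image_path))
            annot = sitk.GetArrayFromImage(sitk.ReadImage(infos[0]["annot_path"]))
            for info in infos:
                crop_img = img[info["z_start"]:info["z_end"],
                               info["y_start"]:info["y_end"],
                               info["x_start"]:info["x_end"]]
                crop_annot = annot[info["z_start"]:info["z_end"],
                                   info["y_start"]:info["y_end"],
                                   info["x_start"]:info["x_end"]]
                # .copy() so the yielded tensors don't keep the whole volume alive.
                yield (torch.from_numpy(crop_img.copy()).float().unsqueeze(0),
                       torch.from_numpy(crop_annot.copy()).long())

I'm not sure whether something like this is the proper way to do it, though.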
What is the most efficient approach that both saves disk space and makes the most of the GPU?
Thanks.