3D dataloader for segmentation

Hi all!

I would like to use a 3D U-Net model for segmentation, but I am not sure how to create an appropriate 3D dataloader for the dataset. Each full volume is 240x240x155, and I would like to produce batches of shape Bx1x64x64x64, for example. I currently have a dataloader that can output the whole volume chunked up into 64x64x64 voxel blocks, but I am having trouble randomizing the order of those blocks.

Does anyone have any suggestions? Is there a recommended pytorch approach to use?

If you want to use 64x64x64 volumes to train the network, you can randomly crop them from the initial 240x240x155 volume. Then make the __getitem__() function output a 1 x depth x height x width tensor and use DataLoader to wrap it.
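A minimal sketch of that idea (assuming the volumes are already loaded as NumPy arrays; all names here are illustrative):

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class RandomCropDataset(Dataset):
    """Sketch: random 64x64x64 crops from 240x240x155 volumes."""
    def __init__(self, volumes, crop=64):
        self.volumes = volumes   # list of numpy arrays, each (240, 240, 155)
        self.crop = crop

    def __len__(self):
        return len(self.volumes)

    def __getitem__(self, index):
        vol = self.volumes[index]
        # pick a random corner so the crop fits inside the volume
        x = np.random.randint(0, vol.shape[0] - self.crop + 1)
        y = np.random.randint(0, vol.shape[1] - self.crop + 1)
        z = np.random.randint(0, vol.shape[2] - self.crop + 1)
        patch = vol[x:x + self.crop, y:y + self.crop, z:z + self.crop]
        # add the channel dimension -> (1, 64, 64, 64)
        return torch.from_numpy(patch).float().unsqueeze(0)

# DataLoader then yields batches of shape (B, 1, 64, 64, 64):
# loader = DataLoader(RandomCropDataset(volumes), batch_size=4, shuffle=True)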

I’ve seen random crop being used, but I want to systematically chunk up each volume into 64x64x64 blocks and then just shuffle them. Is random crop the only way?

You can set a stride to crop fixed-size sub-volumes.
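For example, you could precompute all chunk corners on a fixed stride grid and index into them, so DataLoader(shuffle=True) shuffles chunks rather than whole volumes (a sketch, assuming the volume fits in memory; names are illustrative):

import itertools
import numpy as np
import torch
from torch.utils.data import Dataset

class ChunkDataset(Dataset):
    """Sketch: systematically chunk one volume into 64^3 sub-volumes."""
    def __init__(self, vol, chunk=64, stride=64):
        self.vol = vol       # numpy array, e.g. (240, 240, 155)
        self.chunk = chunk
        # all chunk corners on the stride grid, per axis
        self.corners = list(itertools.product(
            *[range(0, s - chunk + 1, stride) for s in vol.shape]))

    def __len__(self):
        return len(self.corners)

    def __getitem__(self, index):
        x, y, z = self.corners[index]
        c = self.chunk
        patch = self.vol[x:x + c, y:y + c, z:z + c]
        return torch.from_numpy(patch).float().unsqueeze(0)  # (1, 64, 64, 64)

# DataLoader(ChunkDataset(vol), batch_size=4, shuffle=True) then
# randomizes the chunk order each epoch.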

@yw_tt This is what I have right now; I’ve changed the sizes a bit to just use the full image but not all the channels at once.

I’m a little confused because I’ve never used 3D networks, but would the following input shape of 2x8x16x192x192 be correct for something like 3D U-Net with a batch size of 2? Or should the dataloader fold the chunk dimension into the batch dimension?

from glob import glob

import nibabel as nib
import numpy as np
import torch
from torch.utils.data import Dataset


class data(Dataset):
    def __init__(self, img_dir, crop_size=192, voxel_slices=16, shuffle=False):
        self.patients = glob(img_dir)
        self.crop_size = crop_size
        self.slices = 128           # how many slices to take from the 155-slice volume
        self.voxel_slices = voxel_slices
        # `shuffle` is unused here; shuffling is handled by the DataLoader

    def __len__(self):
        return len(self.patients)

    def __getitem__(self, index):
        path = glob(self.patients[index] + '/*_flair.nii.gz')
        vol = nib.load(path[0]).get_fdata()   # get_data() is deprecated
        vol = np.swapaxes(vol, -1, 0)         # (240, 240, 155) -> (155, 240, 240)

        num_chunks = self.slices // self.voxel_slices
        if self.crop_size is not None:
            start = vol.shape[-1] // 2 - self.crop_size // 2
            stop = vol.shape[-1] // 2 + self.crop_size // 2
            # remove blank slices (0-8 and 136-155) and center-crop to crop_size
            vol = vol[8:8 + self.slices, start:stop, start:stop]

        # (128//16, 16, 192, 192) = (8, 16, 192, 192)
        voxels = np.zeros((num_chunks, self.voxel_slices, self.crop_size, self.crop_size))
        for i in range(num_chunks):
            voxels[i] = vol[i * self.voxel_slices:(i + 1) * self.voxel_slices, :, :]

        _voxels = torch.from_numpy(voxels).float()
        # placeholder target: the last image chunk stands in for a real
        # segmentation mask until the label volume is loaded separately
        _gt = torch.from_numpy(voxels[-1]).long()
        print(_voxels.shape)
        return _voxels, _gt
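For what it’s worth, a usage sketch (the path passed to the dataset is a placeholder for your BraTS directory layout):

from torch.utils.data import DataLoader

dataset = data('/path/to/BraTS/*')   # hypothetical glob pattern
loader = DataLoader(dataset, batch_size=2, shuffle=True)
voxels, gt = next(iter(loader))
print(voxels.shape)   # torch.Size([2, 8, 16, 192, 192]) with the defaults above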

If your input shape is 2x8x16x192x192, that means batch size = 2, channels = 8, volume size = 16x192x192. The channel dimension is usually 1 or 3, so I don’t think 8 channels is correct. Your image data is probably MRI; MRI data can have 4 modalities, so the channel count might be 4 or something else. You can use 2x1x16x192x192 to train 3D U-Net or another 3D network. The “8” is really extra samples that belong in the batch dimension, e.g. 4 batches of size 2.

Yes, my dataset is the BraTS MRI dataset. The dataloader I have right now handles only one modality, and I also think it isn’t correct. If I want to chunk up the data for 3D U-Net, then I think each voxel chunk (16x192x192) should go into the batch dimension as (2*8) x 1 x 16 x 192 x 192 = 16x1x16x192x192. Does that seem right?
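Something like this (a sketch with dummy data, just to check the shapes):

import torch

# dummy batch straight from the DataLoader: (B, chunks, D, H, W)
voxels = torch.randn(2, 8, 16, 192, 192)
# fold the chunk dimension into the batch and add a channel dimension
net_input = voxels.reshape(-1, 1, *voxels.shape[2:])
print(net_input.shape)   # torch.Size([16, 1, 16, 192, 192])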

That’s right. You only need to set an appropriate batch size to fit your GPU memory.

thanks for your help! :slight_smile:

Should I use this code to load my ADNI MRI data from a directory?
My files are in NIfTI (.nii) format, so could that cause errors when loading? Could someone please provide data-loading code that loads the data into memory iteratively?