I currently have a set of 3D NIfTI images that I’m loading into a Dataset object using MONAI.
However, instead of having a Dataset object composed of multiple volumes, I wish to have a 2D dataset composed of all the slices from all the volumes.
Is there a way to load the data this way or change the Dataset object after loading?
Or, in case that is not possible, is there a way to change the batches so that instead of batching out volumes, it batches out slices? E.g. if the batch size is 2, instead of 2 volumes it would send out all slices from those 2 volumes.
So, assuming the volumes were 128x128x32, greyscale, and the batch size was 2, instead of the batch being [2, 1, 32, 128, 128] it would be [64, 1, 128, 128].
I’m unsure if this sort of conversion even makes sense in any scenario, but I thought it could be an alternative, assuming there is no function to read and save the volumes as slices in the Dataset object.
import torch

x = torch.randn(2, 1, 32, 128, 128)  # [batch, channels, depth, height, width]
x = x.permute(0, 2, 1, 3, 4)         # move depth next to batch: [2, 32, 1, 128, 128]
x = x.view(-1, *x.size()[2:])        # merge batch and depth: [64, 1, 128, 128]
print(x.shape)
> torch.Size([64, 1, 128, 128])

Note that permute returns a non-contiguous tensor, so .reshape() is the safer call here; .view() happens to work in this case only because the channel dimension has size 1.
This could be an easy way to change the input format.
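If you would rather have the DataLoader itself emit flattened batches instead of flattening inside the loop, the same permute/reshape can live in a collate_fn. A minimal sketch, where the random tensors stand in for your actual MONAI dataset:

```python
import torch
from torch.utils.data import DataLoader

def slice_collate(samples):
    # samples: list of volumes, each [C, D, H, W]
    vols = torch.stack(samples)               # [B, C, D, H, W]
    vols = vols.permute(0, 2, 1, 3, 4)        # [B, D, C, H, W]
    return vols.reshape(-1, *vols.shape[2:])  # [B*D, C, H, W]

# placeholder data: 4 greyscale volumes of 32 slices each
volumes = [torch.randn(1, 32, 128, 128) for _ in range(4)]
loader = DataLoader(volumes, batch_size=2, collate_fn=slice_collate)
# each batch: torch.Size([64, 1, 128, 128])
```

This keeps the training loop unchanged, since every batch already arrives as a stack of 2D slices.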
The alternative approach would be to open the volume, grab some slices, and return only these.
However, the logic inside the __getitem__ method would be a bit more complicated: you would have to e.g. reuse the same volume to load the missing slices, and you would have to map the passed index to a (volume, slice) pair somehow, or use a custom sampler, etc.
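A minimal sketch of that index mapping, assuming the volumes are already in memory as [C, D, H, W] tensors (a real version would load lazily inside __getitem__, or cache the most recent volume, to avoid holding everything in RAM):

```python
import torch
from torch.utils.data import Dataset

class SliceDataset(Dataset):
    """Exposes every slice of every volume as its own 2D sample."""

    def __init__(self, volumes):  # each volume: [C, D, H, W]
        self.volumes = volumes
        # cumulative slice counts map a flat index back to (volume, slice)
        self.cum = []
        total = 0
        for v in volumes:
            total += v.shape[1]
            self.cum.append(total)

    def __len__(self):
        return self.cum[-1]

    def __getitem__(self, idx):
        # first volume whose cumulative count exceeds idx owns this slice
        vol_idx = next(i for i, c in enumerate(self.cum) if idx < c)
        start = self.cum[vol_idx - 1] if vol_idx > 0 else 0
        return self.volumes[vol_idx][:, idx - start]  # [C, H, W]
```

With this wrapper a plain DataLoader batches slices directly, and volumes of different depths are handled for free.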
The flattening solution sounds great!
I’m new to torch, so I’m sorry if this follow-up question doesn’t make much sense, but to use your solutions with 2D models, the only part that would need to change is the training loop logic, correct? When iterating over each batch, I would flatten it and go from there?
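For reference, a minimal sketch of that kind of loop. The Conv2d model, the dummy loss, and the list of random volumes are all placeholders standing in for your real model, loss, and DataLoader:

```python
import torch
import torch.nn as nn

# placeholder 2D model and optimizer; any model taking [N, 1, 128, 128] works
model = nn.Conv2d(1, 8, kernel_size=3, padding=1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# fake DataLoader: two batches of 2 volumes, each [B, C, D, H, W]
loader = [torch.randn(2, 1, 32, 128, 128) for _ in range(2)]

for volumes in loader:
    # flatten [B, C, D, H, W] -> [B*D, C, H, W] so the 2D model sees slices
    x = volumes.permute(0, 2, 1, 3, 4).reshape(-1, 1, 128, 128)
    out = model(x)             # [64, 8, 128, 128]
    loss = out.pow(2).mean()   # dummy loss, just for illustration
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Everything else (dataset, transforms, model definition) stays as-is; only the flattening step before the forward pass is new.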