3D volumes Dataset to 2D slices Dataset


I currently have a set of 3D NIfTI images that I’m loading into a Dataset object using @MONAI .
However, instead of having a Dataset object composed of multiple volumes, I wish to have a 2D dataset composed of all the slices from all the volumes.

Is there a way to load the data this way or change the Dataset object after loading?
Or, in case it is not possible, is there a way to change the batches so that instead of batching out volumes, it batches out slices?
e.g if batch size is 2, instead of 2 volumes it would send out all slices from the 2 volumes.

Thanks for the help!

Could you explain what the difference would be in these two cases, i.e. what the batch shape would look like?

Of course.
So, assuming the volumes are 128x128x32 greyscale and the batch size is 2, instead of the batch being
[2, 1, 32, 128, 128] it would be [64, 1, 128, 128].
I’m unsure if this sort of conversion even makes sense in any scenario, but I thought it could be an alternative, assuming there is no function to read and save the volumes as slices in the Dataset object.

This “flattening” operation can be performed by:

x = torch.randn(2, 1, 32, 128, 128)  # [B, C, D, H, W]
x = x.permute(0, 2, 1, 3, 4)         # [B, D, C, H, W]
x = x.view(-1, *x.size()[2:])        # [B*D, C, H, W]
print(x.shape)
> torch.Size([64, 1, 128, 128])

This could be an easy way to change the input format.
The alternative approach would be to open the volume, grab some slices, and return only these.
However, the logic inside the __getitem__ method would be a bit more complicated: you would have to e.g. reuse the same volume to load the missing slices, map the passed index to this logic somehow, or use a custom sampler, etc.
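The second approach could be sketched as a small Dataset wrapper that maps a global slice index back to a (volume, slice) pair. This is just a minimal illustration, assuming the volumes are already loaded as tensors of shape [C, D, H, W]; the class name `SliceDataset` and the use of cumulative depths with `bisect` are my own choices, not a MONAI API:

```python
import bisect
import torch
from torch.utils.data import Dataset

class SliceDataset(Dataset):
    # Wraps a list of 3D volumes (each [C, D, H, W]) and exposes every
    # slice along the depth dimension as a separate 2D sample [C, H, W].
    def __init__(self, volumes):
        self.volumes = volumes
        # cumulative slice counts: offsets[i] is the global index of the
        # first slice of volume i
        self.offsets = [0]
        for v in volumes:
            self.offsets.append(self.offsets[-1] + v.size(1))

    def __len__(self):
        return self.offsets[-1]

    def __getitem__(self, idx):
        # find the volume containing this global slice index
        vol_idx = bisect.bisect_right(self.offsets, idx) - 1
        slice_idx = idx - self.offsets[vol_idx]
        return self.volumes[vol_idx][:, slice_idx]

vols = [torch.randn(1, 32, 128, 128), torch.randn(1, 40, 128, 128)]
ds = SliceDataset(vols)
print(len(ds))      # 72
print(ds[0].shape)  # torch.Size([1, 128, 128])
```

A plain DataLoader on top of this would then batch slices directly, at the cost of keeping the volumes (or a caching loader for them) in memory.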


Hi, I have a situation opposite to this problem.
I have a point cloud of dim Bx3xN
and I want to convert it to a voxel grid of dim Bx3xRxRxR.
How can I do this efficiently in PyTorch?

I’m not aware of any built-in PyTorch methods to work with point clouds and their transformations, so I would assume you might want to use a 3rd party library for it.
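That said, a naive voxelization can be done with plain PyTorch indexing. The sketch below builds a binary occupancy grid of shape [B, R, R, R], assuming the point coordinates are normalized to [0, 1); producing the 3-channel [B, 3, R, R, R] layout from the question (e.g. storing mean coordinates per voxel) would need an extra aggregation step on top of this:

```python
import torch

def voxelize(points, R):
    # points: [B, 3, N] with coordinates assumed normalized to [0, 1)
    # returns a binary occupancy grid of shape [B, R, R, R]
    B, _, N = points.shape
    idx = (points * R).long().clamp(0, R - 1)      # [B, 3, N] voxel indices
    grid = torch.zeros(B, R, R, R)
    b = torch.arange(B).unsqueeze(1).expand(B, N)  # batch index per point
    grid[b, idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

pts = torch.rand(2, 3, 1024)
vox = voxelize(pts, 32)
print(vox.shape)  # torch.Size([2, 32, 32, 32])
```

For large clouds or learned voxel features, a dedicated library would still be the more efficient route.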

Could you suggest any?

The flattening solution sounds great!
I’m new to torch, so I’m sorry if this follow-up question doesn’t make much sense, but to use your solution on 2D models, the only part that would need to change is the training loop logic, correct? When iterating over each batch, I would flatten it and go from there?

Yes, the 4-dimensional input tensor would fit into 2D modules, such as nn.Conv2d.

@sinAshish PyTorch Geometric might have some methods for point clouds, but I haven’t used it so far.
