Creating a 3D volume from its respective 2D slices

I am working with 3D brain MRI scans (NIfTI format). I am using 2D slices from the 3D volumes to train a 2D network. Now I want to do the evaluation on the 3D volumes by stacking the slices of each subject together and calculating the surface Dice coefficient. How can I stack all the slices of a given subject together to get the 3D volume back?

Have you tried torch.concatenate(<list_of_tensors>, axis=0)? And then you can permute the 3D tensor according to how you sliced it.
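For illustration, a minimal sketch of that idea for one subject, using torch.stack (which inserts the new axis for you); the shapes and names here are just placeholders:

import torch

# hypothetical list of 2D slices of one subject, kept in their original order
slices = [torch.randn(256, 256) for _ in range(200)]

# stack along a new first axis -> shape (200, 256, 256)
volume = torch.stack(slices, dim=0)

# if the subject was originally sliced along the last axis,
# permute back to (256, 256, 200)
volume = volume.permute(1, 2, 0)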

@ege_b , my data consists of 3D brain MRI scans. I am treating the 2D slices independently and training the 2D model on them. Below is the part of the dataset class showing how I am actually reading the images:

for i, f in enumerate(files):

    nib_file = nib.load(os.path.join(images_path, f))
    # load the image volume (presumably read the same way as the label below)
    img = nib_file.get_fdata('unchanged', dtype=np.float32)
    lbl = nib.load(os.path.join(data_path, 'Silver-standard', self.folder,
                                f[:-7] + '_ss.nii.gz')).get_fdata('unchanged', dtype=np.float32)

    # optionally rescale the intensities of the whole volume
    if self.scale:
        transformed = scaler.fit_transform(np.reshape(img, (-1, 1)))
        img = np.reshape(transformed, img.shape)

    # move the slicing axis to the front, rotate and pad to a square shape
    if not self.sagittal:
        img = np.moveaxis(img, -1, 0)
    if self.rotate:
        img = np.rot90(img, axes=(1, 2))
    if img.shape[1] != img.shape[2]:
        img = self.pad_image(img)
    images.append(img)

    # apply the same reorientation to the label volume
    if not self.sagittal:
        lbl = np.moveaxis(lbl, -1, 0)
    if self.rotate:
        lbl = np.rot90(lbl, axes=(1, 2))
    if lbl.shape[1] != lbl.shape[2]:
        lbl = self.pad_image(lbl)
    labels.append(lbl)

    # one voxel-spacing entry per slice of this subject
    spacing = [nib_file.header.get_zooms()] * img.shape[0]
    self.voxel_dim.append(np.array(spacing))

images, labels = self.unify_sizes(images, labels)
# all subjects' slices are stacked back to back: (total_n_slices, 1, H, W)
self.data = np.expand_dims(np.vstack(images), axis=1)
self.label = np.expand_dims(np.vstack(labels), axis=1)
self.voxel_dim = np.vstack(self.voxel_dim)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        data = self.data[idx]
        labels = self.label[idx]
        voxel_dim = self.voxel_dim[idx]
        return data, labels, voxel_dim


Now, when I pass the dataset into the DataLoader, the images are independent 2D slices; the information about which 2D slice belonged to which volume is not there anymore. They are passed randomly in the loader, depending on the batch size, as follows:

train_loader = DataLoader(train_data, batch_size=config.batch_size,
                              shuffle=True, num_workers=10, drop_last=True)

Now, for evaluation, I want to stack all the 2D images to get back the respective 3D volumes. I am not sure how they were permuted before being passed to the DataLoader, and I do not have the slice information either. Can you please help me with this?

I mean, it seems that you are not sure how to get the 2D images back in the correct order. For this you can just create a new data loader with eval_loader = DataLoader(<your_other_args>, shuffle=False, drop_last=False), which forces the data loader to yield the 2D images in the same order as you saved them to self.data. Another option, if you want to reconstruct the volumes during training (which is obviously only possible once you have every single 2D slice of the 3D image), is to have __getitem__ also return idx so that you know which slice you are getting. Then you apply the inverse of every transformation you applied to your 2D slices, put them in the right order depending on their indices, and concatenate them into a torch tensor using torch.concatenate(<list_of_vectors>, axis=0). But the code snippet above does not contain any information about how you actually get the 2D slices from your 3D volume.
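A minimal sketch of the first option, reusing train_data and config.batch_size from your snippet and assuming a model variable holds your trained 2D network:

import torch
from torch.utils.data import DataLoader

# non-shuffled loader: slices come out in the same order as they were stored in self.data
eval_loader = DataLoader(train_data, batch_size=config.batch_size,
                         shuffle=False, drop_last=False, num_workers=10)

model.eval()
pred_slices = []
with torch.no_grad():
    for data, labels, voxel_dim in eval_loader:
        pred_slices.append(model(data))

# (total_n_slices, 1, H, W): all subjects' slices back to back, in storage order;
# splitting this into per-subject volumes additionally needs the number of slices per subject
pred = torch.cat(pred_slices, dim=0)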

@ege_b , yes, exactly, I do not know how to put the 2D slices back together to get the 3D volume back. With your suggestion:

eval_loader = DataLoader(<your_other_args>, shuffle=False, drop_last=False)

If I do this: the size of one 3D volume is 200×256×256, but I have stored the data as slices in self.data. The above line of code is not going to help me take all the slices of one image, since the batch size is different from the number of slices in the 3D volume.

Moreover, I want to do the same procedure (stacking 2D slices to get the 3D volume) for the training set as well, since the same evaluation metric is used for both the train and val sets.

I modified the validation code as below. In it, I am actually loading the 3D volume locally, doing the prediction on each slice, and then stacking the slices back together into the original 3D volume.

# Train the model on 2D slices
for epoch in range(num_epochs):
    for inputs, targets in slice_dataset:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = loss_fn(outputs, targets)
        loss.backward()
        optimizer.step()

# Evaluate on full 3D volume
model.eval()
with torch.no_grad():
    # Get 2D slices from the 3D volume
    slices = []
    for z in range(volume.shape[0]):
        slc = volume[z, :, :]
        slices.append(slc)

    # Stack the 2D slices to reconstruct the 3D volume
    stacked = np.stack(slices, axis=0)

    # Run inference on the stacked 2D slices
    inputs = torch.from_numpy(stacked).unsqueeze(1)
    outputs = model(inputs)

    # Convert the output back to a 3D volume
    predicted_volume = np.squeeze(outputs.numpy(), axis=1)

However, for the training part, I am not sure how to do it, since I want to pass the slices randomly per batch, which obviously will not contain all the slices from one 3D volume.

To achieve this, you should use a DataLoader with drop_last=False and should return the index of the slice along with the data itself in __getitem__, i.e. return data, idx, labels, voxel_dim or however you want, and unpack accordingly. On every iteration of the loop, unpack with for <batch_nr>, (<data>, <slice_idx>, <labels>, <voxel_dim>) in enumerate(train_loader): and store both <data> and <slice_idx> (of course, name these however you want) in a list. AFTER THE LOOP, flatten the list of <slice_idx> (I'm not sure of the format that will be returned here, you might want to check that with a debugger or a print statement) to get a list of slice indices like [143, 156, 2, 42, ...]. Then concatenate your list of batches with tens = torch.concatenate(<list_of_batches>, axis=0). Sort your indices and get their real order with <order_np_array> = np.array(<list_of_indices>).argsort(). Lastly, reorder the slices using <3d_image> = tens[<order_np_array>]. This means you can ONLY reconstruct your 3D image AFTER each epoch.
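A rough sketch of that procedure, after changing __getitem__ to also return idx, reusing train_loader, model, optimizer and loss_fn from the snippets above (everything else is a placeholder name):

import numpy as np
import torch

# note: train_loader must be created with drop_last=False, otherwise slices will be missing
all_preds, all_indices = [], []

for batch_nr, (data, slice_idx, labels, voxel_dim) in enumerate(train_loader):
    optimizer.zero_grad()
    outputs = model(data)
    loss = loss_fn(outputs, labels)
    loss.backward()
    optimizer.step()

    # keep the predictions and their flat slice indices for reassembly later
    all_preds.append(outputs.detach().cpu())
    all_indices.append(slice_idx)

# after the loop: one tensor of all slices, still in the shuffled order they were seen
tens = torch.cat(all_preds, dim=0)                # (n_slices, 1, H, W)
flat_idx = torch.cat(all_indices, dim=0).numpy()  # shuffled flat slice indices

# argsort gives the permutation that restores the storage order of self.data
order = np.argsort(flat_idx)
ordered_slices = tens[torch.from_numpy(order)]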

@ege_b I understood your suggestion in general, but I have a question from the implementation perspective.
My list of slice indices would not be integers like [143, 156, 2, 42, ...]: the slice index alone is not a unique identifier. For instance, one image has 200 slices and a second image has 100 slices. So in __getitem__ I am storing the slice id as image_name + idx + slice_id (e.g. CC0304_ge_3_57_F.nii.gz_id_397). If I sort the slice ids based on image_name and slice_id, how would I retrieve the corresponding prediction, ground truth and voxel_dim, since the slice indices are not unique across images?

Also, I do not understand this line:

Then concatenate your list of batches with tens = torch.concatenate(<list_of_batches>, axis=0)

Would this not concatenate all the tensors into one single tensor? That is not the objective here.

I’m sorry, I did not quite understand what the objective is, then. But when you are slicing the 3D images, you should have some unique identifier for the position of each 2D slice. The idea is to pass this through in __getitem__ and use it to reconstruct your image.
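For example, if __getitem__ passed through a (subject_name, slice_number) pair for every slice, a rough sketch of the reconstruction could look like this (per_slice is a hypothetical list of (subject_name, slice_number, predicted_slice) tuples collected during evaluation):

from collections import defaultdict
import torch

# group the predicted 2D slices by subject
per_subject = defaultdict(list)
for subject_name, slice_number, pred_slice in per_slice:
    per_subject[subject_name].append((slice_number, pred_slice))

# within each subject, sort by slice number and stack into a 3D volume
volumes = {}
for subject_name, entries in per_subject.items():
    entries.sort(key=lambda e: e[0])
    volumes[subject_name] = torch.stack([s for _, s in entries], dim=0)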

I thought you were reconstructing the 3D volume by stacking the 2D slices in the correct order, as if the 3D volume were a cube and the 2D slices were the squares corresponding to every height level, so to say. But without knowledge of what the actual 3D image looks like, I don’t think I can help you any further :slight_smile: If you have questions about any specific operation, though, I might be of further help. Good luck!