How to return the 3D volume from dataset class?

I am new to PyTorch and I am working with a 3D dataset. In my custom Dataset class I want to read a 3D image and return it as one 3D image, not as 2D slices. In the DataLoader I then want to make sure that, if the batch size is set to two, two 3D images are passed. How can I do that with a custom Dataset class?

I am able to read the 3D images using nibabel, but I am not sure how to ensure that one 3D image is treated as one entity.

Loading the volumes and transforming them into a tensor should work as the DataLoader shouldn’t care about the dimensions of your input as long as it can stack the samples in the batch dimension:

import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self):
        self.data = torch.randn(100, 3, 128, 128, 128)  # [nb_samples, channels, depth, height, width]
        
    def __getitem__(self, index):
        x = self.data[index]
        return x
    
    def __len__(self):
        return len(self.data)

dataset = MyDataset()
loader = DataLoader(dataset, batch_size=2)

x = next(iter(loader))
print(x.shape)
# torch.Size([2, 3, 128, 128, 128]) # [batch_size, channels, depth, height, width]
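
For context, the DataLoader's default collate function effectively stacks the per-sample tensors along a new first (batch) dimension, leaving the volume's own dimensions untouched. A minimal illustration (not the literal library code), reusing the dataset from above:

# roughly what the default collate_fn does with tensor samples
samples = [dataset[0], dataset[1]]
batch = torch.stack(samples, dim=0)
print(batch.shape)
# torch.Size([2, 3, 128, 128, 128])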

@ptrblck, below is part of my Dataset class code:

for i, f in enumerate(files):
    print("file_name", f)
    nib_file = nib.load(os.path.join(images_path, f))
    img = nib_file.get_fdata('unchanged', dtype=np.float32)
    lbl = nib.load(os.path.join(data_path, 'Silver-standard', self.folder, f[:-7] + '_ss.nii.gz')).get_fdata(
        'unchanged', dtype=np.float32)

    if self.scale:
        transformed = scaler.fit_transform(np.reshape(img, (-1, 1)))
        self.img = np.reshape(transformed, img.shape)
    if not self.sagittal:
        self.img = np.moveaxis(img, -1, 0)
    if self.rotate:
        self.img = np.rot90(img, axes=(1, 2))
    if img.shape[1] != img.shape[2]:
        self.img = self.pad_image(img)

    if not self.sagittal:
        self.lbl = np.moveaxis(lbl, -1, 0)
    if self.rotate:
        self.lbl = np.rot90(lbl, axes=(1, 2))
    if lbl.shape[1] != lbl.shape[2]:
        self.lbl = self.pad_image(lbl)

    spacing = [nib_file.header.get_zooms()] * img.shape[0]
    self.voxel_dim = spacing

    if i == 1:  # done only to understand the shapes
        break


def __getitem__(self, idx):
    data = torch.from_numpy(self.img[idx])
    labels = torch.from_numpy(self.lbl[idx])

    return data, labels

Now, in order to check that the data is loaded as desired, I am checking the dimensions of the 3D image at the first index of train_data as follows:

train_data = cc359_volume(config)
print("shape", train_data[0][1].shape)

I expected it to give (200, 256, 256), as the nib_file dimensions were (200, 256, 256), but it gives the following shape:

shape torch.Size([256, 256])

That means the Dataset class is not returning the 3D volume as one entity. Rather, it is giving back one slice as an individual entity. I also checked the dimensions in the loader as follows:

train_features, train_labels = next(iter(train_loader))
print(f"Feature batch shape: {train_features.size()}")
print(f"Labels batch shape: {train_labels.size()}")

It gives the following:

Feature batch shape: torch.Size([2, 256, 256])
Labels batch shape: torch.Size([2, 256, 256])

How can I get the dataset to return the image as a whole?

Negative strides are not supported, and e.g. the np.rot90 transformation would create such negative strides, as seen here:

import numpy as np
import torch

img = np.random.randn(3, 224, 224, 224)

img = np.moveaxis(img, -1, 0)
print(img.strides)
# (8, 89915392, 401408, 1792)

# works
x = torch.from_numpy(img)

img = np.rot90(img, axes=(1, 2))
print(img.strides)
# (8, -401408, 89915392, 1792)

# breaks
x = torch.from_numpy(img)
# ValueError: At least one stride in the given numpy array is negative, and tensors with negative strides are not currently supported. (You can probably work around this by making a copy of your array  with array.copy().) 

# works again
x = torch.from_numpy(img.copy())

As the error message suggests, copy() the numpy array before trying to transform it into a PyTorch tensor.
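
For example, applied to the rotation branch from the snippet above, a minimal sketch with dummy data standing in for the loaded volume:

import numpy as np
import torch

img = np.random.randn(200, 256, 256).astype(np.float32)  # stand-in for the loaded volume

rotated = np.rot90(img, axes=(1, 2))      # returns a view with a negative stride
data = torch.from_numpy(rotated.copy())   # .copy() materializes a contiguous array
print(data.shape)
# torch.Size([200, 256, 256])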

In __getitem__(self, idx), data = torch.from_numpy(self.img[idx]) is returning (256, 256) (the dimensions of one slice). Why is it not returning (200, 256, 256)?

I don’t know; you would need to check the shape of img after loading it via nib_file.get_fdata and make sure no transformation is changing its shape in an unwanted way.
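
One quick way to check this is to print the shape after each step; a small debugging sketch, using dummy data in place of nib_file.get_fdata:

import numpy as np

img = np.zeros((200, 256, 256), dtype=np.float32)  # stand-in for nib_file.get_fdata(...)
print("after load:", img.shape)

img = np.moveaxis(img, -1, 0)
print("after moveaxis:", img.shape)

img = np.rot90(img, axes=(1, 2))
print("after rot90:", img.shape)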

@ptrblck

Transformations were not changing the shape. Previously in __getitem__(), I was indexing into the image itself, like this:

def __getitem__(self, idx):
    data = self.data[idx]  # self.data holds one image

That was changing the dimensions: indexing took slices and gave [256, 256]. I am now indexing into the list of files in the folder itself, as follows:

def __len__(self):
    return len(self.files)

def __getitem__(self, idx):
    nib_file = nib.load(os.path.join(self.images_path, self.files[idx]))

which gives the dimension [200, 256, 256]. Is my understanding correct?

Yes, loading the entire volume in the __getitem__ method is the right approach; I had assumed self.data contained all volumes.
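
For reference, a minimal sketch of this per-file pattern (the class name, path arguments, and the assumption that image and label files share the same name are placeholders, not the exact code from this thread):

import os

import nibabel as nib
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class VolumeDataset(Dataset):
    def __init__(self, images_path, labels_path, files):
        self.images_path = images_path
        self.labels_path = labels_path
        self.files = files  # one file name per volume

    def __len__(self):
        # one sample per file, so the DataLoader batches whole volumes
        return len(self.files)

    def __getitem__(self, idx):
        img = nib.load(os.path.join(self.images_path, self.files[idx])).get_fdata(dtype=np.float32)
        lbl = nib.load(os.path.join(self.labels_path, self.files[idx])).get_fdata(dtype=np.float32)
        # .copy() guards against negative strides if transforms such as np.rot90 are applied
        return torch.from_numpy(img.copy()), torch.from_numpy(lbl.copy())

# loader = DataLoader(VolumeDataset(images_path, labels_path, files), batch_size=2)
# each batch would then have the shape [2, 200, 256, 256] for volumes of shape (200, 256, 256)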