Creating 3D Dataset/DataLoader with patches

I have 20 3D NIfTI images, each of size 172x220x156. I want to create a Dataset class and then a DataLoader made of patches of size 32x32x32 cropped from the images.
Each image will have 500 such patches, so the total number of patches should be 18x500 (18 of the 20 images are used for training; see below). I have worked with 2D slices like that before (please see the code below I used for 2D).
Is there any way to create a similar Dataset class with patches from the 18 images?

import os
import numpy as np
from PIL import Image
from torch.utils import data
from torchvision.transforms import ToTensor

class Dataset(data.Dataset):
    'Characterizes a dataset for PyTorch'
    def __init__(self, dir_data, list_IDs):
        self.dir_data = dir_data
        self.list_IDs = list_IDs

    def __len__(self):
        'denotes the total number of samples'
        return len(self.list_IDs)

    def __getitem__(self, item):
        'Generates one sample of data'
        #select sample
        path_mr = os.path.join(self.dir_data, 'mr', self.list_IDs[item])
        path_ct = os.path.join(self.dir_data, 'ct', self.list_IDs[item])

        X = Image.open(path_mr)
        X = np.array(X)  #np.shape(X) = (256,256)
        X = X[:,:,np.newaxis]  #np.shape(X) = (256,256,1)
        X = ToTensor()(X)
        y = Image.open(path_ct)
        y = np.array(y)[:,:,np.newaxis]
        y = ToTensor()(y)
        return X, y

How would you like to create these patches?
If you would like to create them in a non-overlapping way, you would end up with
(172//32) * (220//32) * (156//32) = 5 * 6 * 4 = 120 patches.
Could you explain how these 500 patches should be created?

For the non-overlapping case you could load the image, use unfold to create the patches, and return them in your __getitem__:

import torch

x = torch.randn(172, 220, 156)
patches = x.unfold(2, 32, 32).unfold(1, 32, 32).unfold(0, 32, 32)
patches = patches.contiguous().view(-1, 32, 32, 32)
print(patches.shape)
> torch.Size([120, 32, 32, 32])

The code you’ve provided for the 2D case doesn’t seem to create patches, or am I missing something?

So I have paired images from two modalities, MR and CT, each of size 172x220x156, for 20 subjects.
I am creating a mask of the 3D data (based on intensity thresholding). Then, using a random permutation, I am taking 500 center points inside the mask (so that I can avoid taking patches from outside the brain). Then I am cropping a 32x32x32 patch around each of the 500 center points in both the CT and MR images. To mention, MR is the input and CT is the ground truth.
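For reference, here is a minimal sketch of what such a get_paired_patch_3D function could look like; this is a reconstruction from the description above, not the author's actual code, and the clipping of centers so that every patch fits inside the volume is an assumption:

import numpy as np

def get_paired_patch_3D(MR, CT, mask, num_centers=500, patchsize=32):
    # Sketch: sample patch centers inside the mask and crop the same
    # patchsize^3 region from both modalities.
    half = patchsize // 2
    zz, yy, xx = np.nonzero(mask)  # all voxel coordinates inside the mask
    # keep only centers whose patch fits completely inside the volume
    valid = ((zz >= half) & (zz < MR.shape[0] - half) &
             (yy >= half) & (yy < MR.shape[1] - half) &
             (xx >= half) & (xx < MR.shape[2] - half))
    zz, yy, xx = zz[valid], yy[valid], xx[valid]
    # random permutation of the valid centers, take the first num_centers
    idx = np.random.permutation(len(zz))[:num_centers]
    X = np.empty((len(idx), patchsize, patchsize, patchsize), dtype=MR.dtype)
    y = np.empty((len(idx), patchsize, patchsize, patchsize), dtype=CT.dtype)
    for n, c in enumerate(idx):
        z, r, s = zz[c], yy[c], xx[c]
        X[n] = MR[z - half:z + half, r - half:r + half, s - half:s + half]
        y[n] = CT[z - half:z + half, r - half:r + half, s - half:s + half]
    return X, y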

Yes, you are right, the code I provided is for a 2D network I worked with before, by creating 156 2D slices from each 3D image; that worked pretty well and I understood it as well.
But is there any way I can do the same method with 3D patches now?

So far I have created two lists (MR and CT) of 3D patches from the 18 training subjects (2 are for test).

import os
import nibabel as nib
import numpy as np

patches_MR, patches_CT = [], []
for i in filenames_train:
    path_mr = os.path.join(dir_train,'MR',i)
    MR = nib.load(path_mr)
    MR = np.array(MR.dataobj)
    path_ct = os.path.join(dir_train, 'CT', i)
    CT = nib.load(path_ct)
    CT = np.array(CT.dataobj)
    path_mask = os.path.join(dir_train, 'masks', i)
    mask = nib.load(path_mask)
    mask = np.array(mask.dataobj)

    X, y = get_paired_patch_3D(MR, CT, mask, num_centers=500, patchsize=32)

    patches_MR.append(X)
    patches_CT.append(y)

print(np.shape(patches_MR)) #(18, 500, 32, 32, 32)
print(np.shape(patches_CT)) #(18, 500, 32, 32, 32)

I will be using k-fold cross-validation and 3D UNet to synthesize CT images from MR.

Thanks for the reply. Please let me know if I should clarify anything further.

Thanks for the information!
It looks like your get_paired_patch_3D method already provides the patches from the MR and CT images.

If I understand your question correctly, you would now like to create a Dataset that yields an MR-CT pair as a single sample?

If that’s the case, you could simply reshape the patches via patches_MR = patches_MR.reshape(-1, 32, 32, 32) (same for patches_CT) and pass them to a custom Dataset:

from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, patches_MR, patches_CT):
        self.patches_MR = patches_MR
        self.patches_CT = patches_CT
        
    def __getitem__(self, index):
        x = self.patches_MR[index]
        y = self.patches_CT[index]
        # Unsqueeze channel dimension
        x = x.unsqueeze(0)
        y = y.unsqueeze(0)
        return x, y
    
    def __len__(self):
        return len(self.patches_MR)

I assume your method to extract the patches returns the corresponding pairs without any shuffling.

Let me know if I misunderstood the question or use case.

1. Yes, the get_paired_patch_3D function provides same-location patches from the MR and CT images.

2. I am not sure about the single-sample thing. Are you referring to the batch size?

3. I tried things like this:

# print(np.shape(patches_MR))  #(18, 500, 32, 32, 32)
# print(np.shape(patches_CT))  #(18, 500, 32, 32, 32)

class Dataset(data.Dataset):
    'characterizes a dataset for pytorch'
    def __init__(self, patches_MR, patches_CT):
        self.patches_MR = patches_MR
        self.patches_CT = patches_CT

    def __len__(self):
        'denotes the total number of samples'
        return len(self.patches_MR)

    def __getitem__(self, index):
        'Generates one sample of data'
        #select sample
        x = self.patches_MR[index]
        y = self.patches_CT[index]
        # Unsqueeze channel dimension
        x = x.unsqueeze(0)
        y = y.unsqueeze(0)
        return x, y

and then in the train.py:

train_dataset = Dataset(patches_MR, patches_CT)
train_loader = data.DataLoader(train_dataset, batch_size=5, shuffle=True)
print('train directory has {} samples'.format(len(train_dataset)))
# np.shape(train_dataset)
# Out[242]: (18, 2, 500, 32, 32, 32)

As you can see, the train_dataset has 2 channels; if I am not wrong, one for MR and one for CT. Did you use unsqueeze for that reason?

4. Yes, while extracting patches I did not use any shuffling, which I intend to use in torch.utils.data.DataLoader, as you can see in the code.

5. I want to feed the input (MR) and ground truth (CT) into the UNet/GAN as:

for i, sample in enumerate(train_loader):
    netG.train()
    netD.train()
    time_batch_load = time.time() - time_batch_start
    time_compute_start = time.time()
    mr = sample[0].float().to(device)
    ct = sample[1].float().to(device)
    batch_size_temp = len(sample[0])
    outputG = netG(mr)

I would like to add the following:

for i, sample in enumerate(train_loader):
    mr = sample[0].float().to(device)
    print('input sample shape of train_loader: {}'.format(mr.shape))
    break

gives:

input sample shape of train_loader: torch.Size([5, 1, 500, 32, 32, 32])

Process finished with exit code 0

which is not exactly what I was expecting. I wanted 5 or 10 patches at a time, based on batch_size = 5 or 10.
Do you think the transpose function may come in handy?

It seems you didn’t apply the suggested reshape operation on the data.
Here is a sample code:

patches_MR = torch.from_numpy(
    np.random.randn(18, 500, 32, 32, 32).reshape(-1, 32, 32, 32))
patches_CT = torch.from_numpy(
    np.random.randn(18, 500, 32, 32, 32).reshape(-1, 32, 32, 32))

train_dataset = Dataset(patches_MR, patches_CT)
print('train directory has {} samples'.format(len(train_dataset)))
> train directory has 9000 samples

train_loader = DataLoader(train_dataset, batch_size=5, shuffle=True)
x, y = next(iter(train_loader))
print(x.shape)
> torch.Size([5, 1, 32, 32, 32])
print(y.shape)
> torch.Size([5, 1, 32, 32, 32])

I think this is what you want, i.e. given batch_size=5, you’ll get 5 patches from each dataset for each iteration.

I’m wondering why this seems to work:

> np.shape(train_dataset)
> Out[242]: (18, 2, 500, 32, 32, 32)

np.shape throws an error if I pass my custom Dataset, which I would expect, but apparently you got some shape information?

Hi,
1. Yes, I hadn’t used the reshape operation at that time.
2. I am curious why you have included np.random.randn?
The CT and MR patch centers come from the exact same locations in the two images; I mean the patches must be paired.
Won’t np.random.randn change the correspondence of the patches in CT and MR based on the random seed?
3. I have used reshape in the following way:

patches_MR = np.array(patches_MR)
patches_MR = patches_MR.reshape(-1, 32, 32, 32)
patches_CT = np.array(patches_CT)
patches_CT = patches_CT.reshape(-1, 32, 32, 32)
# convert to tensors so that unsqueeze works in __getitem__
patches_MR = torch.from_numpy(patches_MR)
patches_CT = torch.from_numpy(patches_CT)

train_dataset = Dataset(patches_MR, patches_CT)
train_loader = data.DataLoader(train_dataset, batch_size=5, shuffle=True)
print('train directory has {} samples'.format(len(train_dataset)))
# train directory has 9000 samples


for i, sample in enumerate(train_loader):
    mr = sample[0].float().to(device)
    print('input sample shape of train_loader: {}'.format(mr.shape))
    #input sample shape of train_loader: torch.Size([5, 1, 32, 32, 32])
#    input sample shape of train_loader: torch.Size([5, 1, 32, 32, 32])
#    input sample shape of train_loader: torch.Size([5, 1, 32, 32, 32])
.
.
.

Now it works as you suggested, I guess.

4. It’s funny because I also don’t know how it worked at that time:

# np.shape(train_dataset) 
# Out[242]: (18, 2, 500, 32, 32, 32)

but later it showed errors, and I corrected it as you mentioned.

I have the following question:

5. Is it necessary to use the Dataset class and torch.utils.data.DataLoader to feed the data to the network?
I have seen PyTorch code that does not use these two either.

  • I just created some dummy data for the example. You should of course use your paired inputs, not the garbage data. :wink:

  • It’s probably not really necessary, but wrapping the data in a Dataset allows you, e.g., to add some transformations later. Using a DataLoader on the other hand allows you to create batches easily, shuffle the data (the pairs will still be valid), use multiple workers, etc. It’s just a clean approach in my opinion, but since you already have the data in memory, you could also just index it manually, as sketched below.
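For completeness, here is a minimal sketch of that manual alternative; torch.randn stands in for the real reshaped patch tensors, and the shared permutation keeps the MR-CT pairs aligned:

import torch

# dummy stand-ins for the real reshaped patch tensors of shape [9000, 32, 32, 32]
patches_MR = torch.randn(9000, 32, 32, 32)
patches_CT = torch.randn(9000, 32, 32, 32)

batch_size = 5
# one shared permutation per epoch keeps the MR-CT pairs aligned
perm = torch.randperm(len(patches_MR))
for start in range(0, len(perm), batch_size):
    idx = perm[start:start + batch_size]
    mr = patches_MR[idx].unsqueeze(1)  # [batch_size, 1, 32, 32, 32]
    ct = patches_CT[idx].unsqueeze(1)
    # forward/backward pass would go here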

Haha, I should’ve guessed that.
I will run the experiments on some network; let’s hope it works.

Thanks, appreciate it.

Hello banikr,
I have a question; I read your post and the whole conversation. I am making a data loader for MRI images collected from ADNI. I loaded a single image from the training folder; now I want to load all the MRI images in an iterative way and then apply some neural network for classification purposes.
Please help me with how you loaded your whole MRI dataset from the directory.
I have 900 MRI images in three different folders, i.e. the three main Alzheimer’s classes: CN, MCI, and AD. I want to load all the data from each folder, but how do I do that?
Furthermore, I have read many posts and tutorials, but I couldn’t get an idea of how to implement this, as I am not much of an expert in PyTorch and 3D data handling.
I am using the following IDE and libraries:
IDE: Spyder
Libraries: PyTorch and TensorFlow
Python: 3.7
Thanks in advance

Hey,
I did not load the whole MRI volumes into the data loader. The MR images I am using are of size 172x220x156, so they would exceed the available GPU memory.
For image synthesis, I created 10000 patches per image and augmented the data. In your case of classification, it should be similar.
I am also doing regression analysis/prediction from the MR images, which will not work with patch-based training…so I subsampled the images to reduce the number of voxels per image.
Then the PyTorch data loader should work fine.
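As a side note, one way to subsample a volume in PyTorch is trilinear interpolation; a minimal sketch with dummy data, assuming a halving factor is acceptable:

import torch
import torch.nn.functional as F

x = torch.randn(172, 220, 156)   # a full MR volume (dummy data here)
x = x[None, None]                # add batch and channel dims -> [1, 1, 172, 220, 156]
small = F.interpolate(x, scale_factor=0.5, mode='trilinear', align_corners=False)
print(small.shape)               # torch.Size([1, 1, 86, 110, 78])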
Let me know if you need more help.
I would suggest you use a Jupyter notebook or the PyCharm IDE for coding; I find them easy to use. Use Python 3.6 if possible, as not all libraries support 3.7 yet.
Since this is a PyTorch help forum, I would ask you to stick to it, eh… :wink:

How do you make use of torch.utils.data.Dataset and torch.utils.data.DataLoader on your own data (not just the torchvision.datasets)?

Is there a way to use the built-in DataLoader, which is used with the torchvision datasets, on any dataset?

Yes, that’s possible: you can write your own Dataset implementation and just pass it to a DataLoader.
Have a look at this tutorial for an example.
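To illustrate, here is a minimal, generic pattern for wrapping your own data; MyImageDataset, the folder layout, filenames, and labels are all placeholders for this sketch, not an existing API:

import os
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

class MyImageDataset(Dataset):
    """Loads images from a folder; filenames and labels are placeholder inputs."""
    def __init__(self, root, filenames, labels, transform=None):
        self.root = root
        self.filenames = filenames
        self.labels = labels
        self.transform = transform

    def __len__(self):
        return len(self.filenames)

    def __getitem__(self, index):
        img = Image.open(os.path.join(self.root, self.filenames[index]))
        if self.transform is not None:
            img = self.transform(img)
        return img, self.labels[index]

# dataset = MyImageDataset('./data', filenames, labels, transform=transforms.ToTensor())
# loader = DataLoader(dataset, batch_size=8, shuffle=True, num_workers=2)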

Thanks banikr for your valuable reply.
My objective was to pass the whole MR image into my network and have the network classify it into its respective class. But now I know that there is no method which directly takes a 3D image as input, processes it with a CNN or whatever the network is, and then classifies it into its class; everyone uses patch-wise input to their network. Furthermore, in my project I used all the ADNI data; I did not use augmentation but directly processed all my MR images.
Yes, you’re right that regression analysis will not help in this regard; you have to use a neural network for that purpose, as suggested.
For now I am very used to the Spyder IDE, as it is among the most used IDEs these days.
If you don’t mind, could you please show some code snippets for data loading, for guidance?
Thanks for reading such a long reply :innocent::innocent:

Hi banikr, how do I convert a single MRI image (or a bunch of them) in .nii format into patches?
Also, please guide me on how to subsample the same images using PyTorch.

Hi @ptrblck,
I have a question about unfold. I want to extract patches from my dataset. I use the medicaltorch library to load the data. If I use unfold, I get an error; I think when I load the data using the DataLoader, it doesn’t access the data. What can I do?
Thanks.

ROOT_DIR = "/home/elahe/data/dataset/"
img_list = os.listdir(os.path.join(ROOT_DIR, 'trainnii'))
label_list = os.listdir(os.path.join(ROOT_DIR, 'labelsnii'))
print(img_list[1])
img_list = (i.unfold(2, 32, 32).unfold(1, 32, 32).unfold(0, 32, 32) for i in img_list)
label_list = (i.unfold(2, 32, 32).unfold(1, 32, 32).unfold(0, 32, 32) for i in label_list)

filename_pairs = [(os.path.join(ROOT_DIR, 'trainnii', x), os.path.join(ROOT_DIR, 'labelsnii', y))
                  for x, y in zip(img_list, label_list)]
print(filename_pairs)
train_transform = transforms.Compose([
    mt_transforms.Resample(0.25, 0.25),
    mt_transforms.ElasticTransform(alpha_range=(40.0, 60.0),
                                   sigma_range=(2.5, 4.0),
                                   p=0.3),
    mt_transforms.ToTensor(),
])
train_dataset = mt_datasets.MRI2DSegmentationDataset(filename_pairs, transform=train_transform)
dataloader = DataLoader(train_dataset, batch_size=2, collate_fn=mt_datasets.mt_collate)

What error do you get?
unfold is a method which should be called on a tensor. Based on your code snippet, it looks like you are calling it on a file path (a string).
Load the images, transform them to tensors, and then call unfold on them, for example:
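A minimal sketch, under the assumption that each .nii volume is at least 32 voxels along every axis (the path is taken from the snippet above):

import os
import nibabel as nib
import numpy as np
import torch

ROOT_DIR = "/home/elahe/data/dataset/"  # path taken from the snippet above
name = os.listdir(os.path.join(ROOT_DIR, 'trainnii'))[0]
vol = nib.load(os.path.join(ROOT_DIR, 'trainnii', name))
vol = torch.from_numpy(np.asarray(vol.dataobj))  # a tensor, not a file name string
patches = vol.unfold(2, 32, 32).unfold(1, 32, 32).unfold(0, 32, 32)
patches = patches.contiguous().view(-1, 32, 32, 32)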

I applied it after the transform, and it worked. Thanks a lot.
I have another question: is there any way to pair image and label volumes in 3D, like MRI2DSegmentationDataset does?
MRI2DSegmentationDataset pairs the images in 2D, but I want to pair them in 3D.
Or should I use the patches and transform them to 2D?
Can I train my data using patches in 3D?

I’m not sure what “pairing” means in this context.
If you want to work on a segmentation use case for 3D data, it should work in the same manner as for 2D data (just with an additional dimension).
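To make that concrete, here is a minimal sketch of a 3D pendant to MRI2DSegmentationDataset; MRI3DSegmentationDataset is a hypothetical name (not part of medicaltorch), filename_pairs follows the snippet above, and the joint transform signature is an assumption:

import nibabel as nib
import numpy as np
import torch
from torch.utils.data import Dataset

class MRI3DSegmentationDataset(Dataset):
    """Sketch: yields an (image, label) volume pair as [1, D, H, W] tensors."""
    def __init__(self, filename_pairs, transform=None):
        self.filename_pairs = filename_pairs
        self.transform = transform  # assumed to transform both volumes jointly

    def __len__(self):
        return len(self.filename_pairs)

    def __getitem__(self, index):
        img_path, label_path = self.filename_pairs[index]
        img = torch.from_numpy(np.asarray(nib.load(img_path).dataobj)).float()
        label = torch.from_numpy(np.asarray(nib.load(label_path).dataobj)).float()
        if self.transform is not None:
            img, label = self.transform(img, label)
        return img.unsqueeze(0), label.unsqueeze(0)  # add channel dimension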