Make patches from images with their masks and reconstruct those images

Hello,

I have large images with their masks. I want to extract overlapping patches, then feed them to the network and at the end reconstruct the masks to calculate the loss based on the whole image.

Because of that, I cannot load all the data with a single DataLoader, since I would lose track of which patch belongs to which image.

Ideally, I want to make a folder for each image, put its patches and masks into that folder, feed the batches from that folder to the model, then reconstruct the images and calculate the loss.

I would appreciate it if someone can help me.

To create the patches you could use nn.Unfold or call unfold directly on the tensor.
The image could be reconstructed using nn.Fold. Note that overlapping areas will be accumulated so you might need to normalize these areas separately.
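The unfold/fold round trip with the overlap normalization could be sketched like this (the sizes are arbitrary, chosen only for illustration):

```python
import torch
import torch.nn.functional as F

# Arbitrary example sizes; kernel_size > stride gives overlapping patches.
B, C, H, W = 1, 3, 8, 8
kernel_size, stride = 4, 2

x = torch.arange(B * C * H * W, dtype=torch.float32).view(B, C, H, W)

# Extract overlapping patches: (B, C*k*k, L), where L is the number of patches.
patches = F.unfold(x, kernel_size=kernel_size, stride=stride)

# fold sums the values in overlapping regions ...
recon = F.fold(patches, output_size=(H, W), kernel_size=kernel_size, stride=stride)

# ... so divide by the per-pixel overlap count to recover the original image.
ones = torch.ones_like(x)
overlap = F.fold(F.unfold(ones, kernel_size=kernel_size, stride=stride),
                 output_size=(H, W), kernel_size=kernel_size, stride=stride)
recon = recon / overlap

print(torch.allclose(recon, x))  # True
```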

Once you have the patches you could store them in separate folders and create a custom Dataset to load the patches from each folder using the passed index. Assuming you would like to load all patches into a batch, you should use batch_size=1 in your DataLoader and create the complete batch in your Dataset.__getitem__ method.

Alternatively, you could also load each image completely and create the patches in the __getitem__. This would avoid the previous step of storing the patches separately.
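The alternative approach (patching inside `__getitem__`) could be sketched as follows; the class name, the in-memory lists of image/mask tensors, and the patch sizes are made up for illustration:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class PatchDataset(Dataset):
    """Loads one full image per index and returns all of its patches."""
    def __init__(self, images, masks, kernel_size=64, stride=32):
        self.images = images  # list of (C, H, W) tensors
        self.masks = masks    # list of (1, H, W) tensors
        self.kernel_size = kernel_size
        self.stride = stride

    def _to_patches(self, t):
        # (C, H, W) -> (num_patches, C, k, k) via tensor.unfold
        c = t.size(0)
        p = t.unfold(1, self.kernel_size, self.stride)
        p = p.unfold(2, self.kernel_size, self.stride)
        return p.permute(1, 2, 0, 3, 4).reshape(
            -1, c, self.kernel_size, self.kernel_size)

    def __getitem__(self, index):
        img_patches = self._to_patches(self.images[index])
        mask_patches = self._to_patches(self.masks[index])
        # index identifies the source image for the later reconstruction
        return img_patches, mask_patches, index

    def __len__(self):
        return len(self.images)

# batch_size=1: each "batch" is the full set of patches of one image
images = [torch.randn(3, 128, 128) for _ in range(4)]
masks = [torch.randint(0, 2, (1, 128, 128)).float() for _ in range(4)]
loader = DataLoader(PatchDataset(images, masks), batch_size=1)

img_p, mask_p, idx = next(iter(loader))
print(img_p.shape)  # torch.Size([1, 9, 3, 64, 64])
```

With 128x128 images, a 64 kernel, and a stride of 32, each image yields 3x3 = 9 overlapping patches, and the returned `index` ties every patch back to its source image.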

@ptrblck Thanks for your answer.
Before you answered my question, I had written a data-loading pipeline that loads all the images, creates patches from each of them (not tensors, just plain RGB images), and for each patch saves the image name and the patch index, so at the end I know which patch belongs to which image and at which position. I do not know if this is an efficient approach or not.

For your solution, when we have folders, do you mean that in the Dataset's __init__ I should load all the folders which contain the patches?
Can you elaborate a bit more? Do you mean I should have n DataLoaders, where n is the number of images? One more thing: if, for example, I create 1000 patches per image, how can I split those 1000 patches into batches of size 32 without losing track of which patch belongs to which image?

Again, thanks for your time.

I’m not sure if I misunderstand the explanation, but could you explain this a bit more?

How are you determining the number of batches and how do you store it with the image name?

Would you like to create batches with patches from mixed images or should a batch only contain patches from the same image?
In the latter case: should these patches from the same image be shuffled or do you want a specific order?

Dear @ptrblck,

For this question, I have written a Python script that, given an image, a stride, and a filter size, extracts overlapping patches from the image. Now, in my custom Dataset's __init__, I load the image folder. Instead of returning the paths of the images, I return a list of NumPy arrays containing the patches. I also return the name of the image each patch was extracted from and the ids of the patches. Below, I include pseudocode to make this a bit clearer.

Question: How are you determining the number of batches and how do you store it with the image name?

Answer: CustomDataset:
__init__(path_to_image, patch_size, stride):
for each image in path_to_image:

  1. Run extract_patches.py
  2. Add these patches (NumPy arrays) to a list
  3. Add the name of the image
  4. Add the patch id

At the end, store the lists. Then in __getitem__, I just return list[index].
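The pseudocode above could be sketched like this; `extract_patches` is a stand-in for extract_patches.py, and passing the images as an in-memory dict (name -> NumPy array) is an assumption made for the example:

```python
import numpy as np
from torch.utils.data import Dataset

def extract_patches(img, patch_size, stride):
    """Stand-in for extract_patches.py: img is an (H, W, C) NumPy array."""
    patches = []
    for y in range(0, img.shape[0] - patch_size + 1, stride):
        for x in range(0, img.shape[1] - patch_size + 1, stride):
            patches.append(img[y:y + patch_size, x:x + patch_size])
    return patches

class CustomDataset(Dataset):
    def __init__(self, images, patch_size, stride):
        # images: dict mapping image name -> NumPy array (hypothetical format)
        self.patches, self.names, self.patch_ids = [], [], []
        for name, img in images.items():
            for pid, patch in enumerate(extract_patches(img, patch_size, stride)):
                self.patches.append(patch)
                self.names.append(name)
                self.patch_ids.append(pid)

    def __getitem__(self, index):
        # each sample carries its source image name and patch id
        return self.patches[index], self.names[index], self.patch_ids[index]

    def __len__(self):
        return len(self.patches)

ds = CustomDataset({"img0.png": np.zeros((8, 8, 3))}, patch_size=4, stride=2)
patch, name, pid = ds[0]
print(len(ds), name, pid)  # 9 img0.png 0
```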

The Second Question:

Question: Would you like to create batches with patches from mixed images or should a batch only contain patches from the same image? In the latter case: should these patches from the same image be shuffled or do you want a specific order?

Answer: No, a batch does not need to contain patches from the same image, but I want to know which patch belongs to which image so that I can reconstruct the whole image.

I have another question too. Let's assume that for each image I have two labels, {Mask, Edge}, which are images as well. The output of the model is the mask, and based on the mask I am going to extract the edges. I want to write a custom loss function. Based on other discussions, as long as I am working with Variables, I can write such a function.
But in my case, because I have to calculate the edge based on the mask (which needs some filtering, etc.), I have to access the .data of the Variable, so I am a bit confused about how to do that. Can you help with this part too? The loss should be in this format:

CustomLoss(Model_Output, TruthMask, TruthEdge):

  1. Extract the data via Model_Output.data.cpu()
  2. Run edge_extraction.py on that data
  3. Calculate the difference between the ground truth and the predicted {edge, mask}

Thanks in advance.

In that case you could just return the image name or a specific id for each patch and the corresponding mask in the __getitem__ method.
Since each model output will then have the corresponding image name, you should be able to sort these outputs according to the original image name.
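Grouping the outputs by image name and sorting by patch id could look like this (the names, shapes, and output list are made up for illustration):

```python
from collections import defaultdict

import torch

# Hypothetical model outputs: each patch carries its source image name and patch id.
outputs = [
    (torch.zeros(1, 4, 4), "img1.png", 1),
    (torch.zeros(1, 4, 4), "img0.png", 0),
    (torch.zeros(1, 4, 4), "img1.png", 0),
]

# Collect the patches of each image together ...
by_image = defaultdict(list)
for patch, name, pid in outputs:
    by_image[name].append((pid, patch))

# ... and sort each image's patches by patch id before reconstruction.
for name in by_image:
    by_image[name].sort(key=lambda t: t[0])

print(sorted(by_image.keys()))  # ['img0.png', 'img1.png']
print([pid for pid, _ in by_image["img1.png"]])  # [0, 1]
```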

Variables are deprecated since PyTorch 0.4 so you should use tensors in newer versions.

You shouldn’t access the .data attribute, as it may yield unwanted side effects. If you need to post-process the output with an edge detection algorithm, you could either implement it in PyTorch directly or use another library and implement the backward method via a custom autograd.Function as described here.
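A minimal sketch of the first option, implementing the edge extraction directly in PyTorch so autograd handles the backward pass (a Sobel filter is used here as a stand-in for edge_extraction.py, and the particular loss terms are assumptions):

```python
import torch
import torch.nn.functional as F

def sobel_edges(mask):
    """Differentiable edge map from a (B, 1, H, W) mask via Sobel filters."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # Sobel-y is the transpose of Sobel-x
    gx = F.conv2d(mask, kx, padding=1)
    gy = F.conv2d(mask, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def custom_loss(model_output, truth_mask, truth_edge):
    # Mask term plus an edge term computed from the predicted mask;
    # everything stays on tensors, so autograd tracks both terms.
    mask_loss = F.binary_cross_entropy_with_logits(model_output, truth_mask)
    pred_edge = sobel_edges(torch.sigmoid(model_output))
    edge_loss = F.l1_loss(pred_edge, truth_edge)
    return mask_loss + edge_loss

out = torch.randn(2, 1, 16, 16, requires_grad=True)
mask = torch.randint(0, 2, (2, 1, 16, 16)).float()
edge = sobel_edges(mask)
loss = custom_loss(out, mask, edge)
loss.backward()
print(out.grad is not None)  # True
```

If the edge extraction really has to stay in an external, non-differentiable library, the `autograd.Function` route mentioned above is the alternative: the forward wraps the library call and the backward supplies the gradient approximation yourself.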
