Is an invariant 3D U-Net even possible?

I have a PyTorch project to create a 3D semantic segmentation model for 3D MRI data stored in NRRD files (which can be converted to 3D tensors). The images have different, but fairly similar, shapes.

I tried to create something similar to a 3D U-Net, but in an invariant version, using torch.nn.Conv3d in the encoder blocks and torch.nn.ConvTranspose3d in the decoder blocks, but the model won't return the same shape unless all three dimensions of the image are divisible by 16 without remainder (there are 4 encoder blocks with a stride of 2).
Is there a way to fix it?
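To make the mismatch concrete, here is a sketch of why dimensions that are not divisible by 16 fail to round-trip (the integer halving and doubling below stand in for the four stride-2 convolutions and transposed convolutions; `round_trip` is an illustrative helper, not part of any model):

```python
def round_trip(size: int, stages: int = 4) -> int:
    """Simulate one spatial dimension through 4 stride-2 encoder stages
    followed by 4 stride-2 decoder stages."""
    for _ in range(stages):
        size //= 2        # stride-2 convolution floor-divides the size
    for _ in range(stages):
        size *= 2         # stride-2 transposed convolution doubles it
    return size

print(round_trip(64))  # 64 -> output matches input
print(round_trip(61))  # 48 -> shape mismatch with the input (and targets)
```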

Also, a possible solution I thought of, in case an invariant 3D U-Net is not possible, was padding the image with zeros to reach a shape that is divisible by 16. I am sure that it's not optimal, so if there are other ideas I would love to hear them.
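That zero-padding workaround could be sketched like this (`pad_to_multiple` is a hypothetical helper name; it assumes `(N, C, D, H, W)` tensors and pads only on the right of each spatial dimension):

```python
import torch
import torch.nn.functional as F

def pad_to_multiple(x: torch.Tensor, k: int = 16) -> torch.Tensor:
    """Zero-pad the last three (spatial) dims of a 5D tensor (N, C, D, H, W)
    up to the next multiple of k."""
    d, h, w = x.shape[-3:]
    # F.pad orders padding from the last dim backwards:
    # (w_left, w_right, h_left, h_right, d_left, d_right)
    return F.pad(x, (0, (-w) % k, 0, (-h) % k, 0, (-d) % k))

x = torch.randn(1, 1, 61, 70, 80)
y = pad_to_multiple(x)
print(y.shape)  # torch.Size([1, 1, 64, 80, 80])
```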

Hi kfir!

Preliminary comment:

The lengths of your three dimensions (height, width, depth) of your 3D
U-Net don't mix as your 3D image passes through the U-Net in the
following sense: The "valid" length of, say, the height dimension (which
you say in your case must be "divided without remainders in 16") doesn't
depend on the values of the other two dimensions. Each of the three
dimensions can be valid or invalid independently.

So I don't think the fact that you are working with a 3D U-Net (rather
than a 2D one) is relevant here.

I'm not sure what you mean by an "invariant U-Net." Are you asking that
the output image have the same shape as the input image (not counting
the channels dimension)?

If so, I don't think you want this. As your image passes through a U-Net,
its edges get nibbled away by the convolutions. Padding the convolutions
so that the image doesn't shrink pollutes the U-Net logic somewhat: the
padded values contaminate the image edges at each convolution stage.

I highly recommend carefully reading the original U-Net paper, paying
attention to how the shape of the image shrinks as it flows through the
U-Net. Note that the original paper does not pad its convolutions.

The condition for "valid" input-image dimensions is more nuanced for
the original U-Net (and I believe this will be true for any U-Net with
optimally-clean convolution / no-padding logic). Look at the U-Net
diagram in the original paper. It shows how the image shrinks and
from it you can deduce the rules for valid input dimensions.

If by "fix it," you mean have your U-Net output an image whose size
matches that of its input, you can achieve this by appropriately padding
each of the convolutions along the way. But, as discussed above, you
don't really want to do this.
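For concreteness, here is the difference between a padded ("same") and an unpadded ("valid", as in the original U-Net paper) 3x3x3 convolution in PyTorch; the layer names and sizes are just for illustration:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 20, 20, 20)

# padding=1 with a 3x3x3 kernel preserves the spatial size ("same"),
# padding=0 shrinks each spatial dim by 2 ("valid", as in the paper).
same = nn.Conv3d(1, 8, kernel_size=3, padding=1)
valid = nn.Conv3d(1, 8, kernel_size=3, padding=0)

print(same(x).shape)   # torch.Size([1, 8, 20, 20, 20])
print(valid(x).shape)  # torch.Size([1, 8, 18, 18, 18])
```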

My recommendation is to reflection-pad the input image up to a size
that has valid dimensions (which I think will be more complicated than
"divisible by 16"). Your image edges will have a somewhat different
character than the interior, but this is already the reality before you pad,
and I think that reflection padding is the "gentlest" way to let your model
learn to segment pixels near the edges.
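A minimal sketch of such reflection padding, assuming `(N, C, D, H, W)` tensors and a reasonably recent PyTorch (3D reflection padding via `F.pad(..., mode='reflect')` was added around 1.10); `reflect_pad_to_multiple` is an illustrative helper, and "multiple of k" stands in for whatever the true validity condition turns out to be:

```python
import torch
import torch.nn.functional as F

def reflect_pad_to_multiple(x: torch.Tensor, k: int = 16) -> torch.Tensor:
    """Reflection-pad the spatial dims of a 5D tensor (N, C, D, H, W)
    up to the next multiple of k, splitting the padding between sides."""
    d, h, w = x.shape[-3:]
    pads = []
    for s in (w, h, d):              # F.pad expects last-dim-first ordering
        total = (-s) % k
        pads += [total // 2, total - total // 2]
    # mirrors image content across the border instead of inserting zeros
    return F.pad(x, pads, mode='reflect')

x = torch.randn(1, 1, 60, 61, 62)
print(reflect_pad_to_multiple(x).shape)  # torch.Size([1, 1, 64, 64, 64])
```

Note that reflect mode requires the padding on each side to be smaller than the corresponding input dimension, which holds comfortably for MRI-sized volumes.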

Best.

K. Frank

Hi Frank, thanks for your comment.

You are definitely right about this one. I didn't want to get options like resizing the image to a shape that is divisible by 16, because that's only practical with 2D images and not 3D, so I focused on 3D. But creating an invariant 2D U-Net is the same problem for me as creating an invariant 3D U-Net.

Yes, this is exactly what I meant.

I never noticed that padding wasn't used in the original U-Net.
I'll read it carefully as you suggested; I believe it is important, although many papers using similar architectures did use padding some of the time.

Again thank you for your highly detailed answer Frank, really helped me!

Hi kfir!

Just to be clear, any resizing would happen on a dimension-by-dimension
basis, so there is no substantive difference between 2D and 3D.

I think that many people are not as insightful and / or careful as the authors
of the original U-Net paper. They really did a quite nuanced job of it and I
think that not padding internally is the right way to go.

It can be a real pain in the neck to prepare batches of images of varying
size to have valid dimensions for input into a U-Net, so there is a certain
motivation to cut corners, rather than do it right.

Best.

K. Frank


If you're just padding the input image and the corresponding targets during preprocessing to be a standardized size, that should be fine.

I think what @KFrank is getting at is you do not want to add padding into any of the convolution operations in the UNet.

Regarding an invariant UNet, it's not really possible/practical, especially given the problem you intend to use it for. As far as I can tell, you have 3D MRI scans and corresponding targets of identical size, and you want to train the model to precisely identify which pixels contain a particular target.

If you used nn.AdaptiveAvgPool3d within the encoder of the model, you're going to have a major problem with getting the decoder to give the correct output size that matches your targets, because the size information will be lost.
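This size-information loss is easy to demonstrate: adaptive pooling maps any input size to the same fixed output size, so two differently sized inputs become indistinguishable downstream (the sizes below are arbitrary examples):

```python
import torch
import torch.nn as nn

# AdaptiveAvgPool3d always produces the requested output size, regardless
# of input size, so the decoder cannot recover the original shape from it.
pool = nn.AdaptiveAvgPool3d(8)
a = pool(torch.randn(1, 1, 61, 70, 80))
b = pool(torch.randn(1, 1, 64, 64, 64))
print(a.shape, b.shape)  # both torch.Size([1, 1, 8, 8, 8])
```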

So any resizing or image padding should be handled in your image preprocessing and applied the same to your targets so they match pixel for pixel.
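A sketch of that preprocessing step, padding the image and its segmentation target with exactly the same amounts so they stay aligned pixel for pixel (`pad_pair` is a hypothetical helper; channel-first `(C, D, H, W)` tensors are assumed):

```python
import torch
import torch.nn.functional as F

def pad_pair(image: torch.Tensor, target: torch.Tensor, k: int = 16):
    """Zero-pad image and target identically so every spatial dim of the
    (C, D, H, W) tensors becomes divisible by k."""
    pads = []
    for s in reversed(image.shape[-3:]):  # F.pad: last dim first
        pads += [0, (-s) % k]
    return F.pad(image, pads), F.pad(target, pads)

img = torch.randn(1, 61, 70, 80)
tgt = torch.zeros(1, 61, 70, 80, dtype=torch.long)
img_p, tgt_p = pad_pair(img, tgt)
print(img_p.shape, tgt_p.shape)  # both (1, 64, 80, 80)
```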

Hi, I read the original U-Net paper, and now what you @J_Johnson @KFrank are saying is much clearer and makes sense. I think I will remove padding from the network itself, as you suggested, and will use mirror-padding in the preprocessing for the images.
Thanks a lot really appreciate the help and explanations.

Kfir.

For anyone who encounters this post later on: apparently there is a transform in the MONAI library, monai.transforms.DivisiblePad, which does the padding for this case.