How to use torchvision.transforms when I need to load 3D images

I need to train a network with 3D medical images, but when I use torchvision.transforms to do operations such as RandomCrop and Normalize, I get an error about the number of arguments for __init__. Can you give me some advice about that?
Thank you!

torchvision does not support 3D volumes, so you will have to implement the transforms yourself.

However, you can easily copy the code from torchvision and extend it to support the extra dimension.
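
For example, here is a minimal sketch of a 3D random crop, assuming your volumes are (C, D, H, W) tensors (the class name and layout are just illustrative, not an existing torchvision API):

```python
import torch

class RandomCrop3D:
    """Randomly crop a (C, D, H, W) volume to a fixed output size."""

    def __init__(self, output_size):
        # output_size: (depth, height, width) of the crop
        self.output_size = output_size

    def __call__(self, volume):
        _, d, h, w = volume.shape
        td, th, tw = self.output_size
        # Pick a random corner so the crop fits inside the volume
        z = torch.randint(0, d - td + 1, (1,)).item()
        y = torch.randint(0, h - th + 1, (1,)).item()
        x = torch.randint(0, w - tw + 1, (1,)).item()
        return volume[:, z:z + td, y:y + th, x:x + tw]

# Usage: crop a random 16x64x64 patch from a 1-channel volume
crop = RandomCrop3D((16, 64, 64))
patch = crop(torch.rand(1, 32, 128, 128))
```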

OK, thanks for your reply.

Hi @alan_ayu,

This might come a bit late, but maybe you can use TorchIO for this. Someone asked about RandomCrop here, and you have multiple normalization transforms in the documentation.


Hi @fepegar,

I am facing a similar issue pre-processing 3D cubes from a custom turbulence dataset.

I have managed to compute the mean and standard deviation of all my cubes (of dimensions 21x21x21) along the three channels by splitting the dataset into batches, computing the mean and std per batch, and finally averaging them over the total dataset size.
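
Roughly, this is what I am doing (just a sketch; cube_dataset is a placeholder for my custom dataset, which I assume yields (C, 21, 21, 21) tensors):

```python
import torch
from torch.utils.data import DataLoader

# cube_dataset is hypothetical: assume it yields (C, 21, 21, 21) tensors
loader = DataLoader(cube_dataset, batch_size=64)

mean_sum = torch.zeros(3)
std_sum = torch.zeros(3)
n_batches = 0
for cubes in loader:                           # cubes: (B, C, 21, 21, 21)
    mean_sum += cubes.mean(dim=(0, 2, 3, 4))   # reduce batch + spatial dims
    std_sum += cubes.std(dim=(0, 2, 3, 4))
    n_batches += 1

mean = mean_sum / n_batches   # per-channel mean
std = std_sum / n_batches     # per-channel std (averaging per-batch stds is
                              # only an approximation of the global std)
```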

I know I cannot apply transforms.Normalize directly to my cubes, because that function works with 2D data. When I look at the TorchIO documentation link you provided, I don’t see an explicit example of normalizing 3D data.
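
For now I can normalize manually with broadcasting, along these lines (assuming a (C, D, H, W) layout and the per-channel mean/std computed above):

```python
import torch

def normalize_3d(cube, mean, std):
    # cube: (C, D, H, W); mean and std: per-channel tensors of length C
    return (cube - mean[:, None, None, None]) / std[:, None, None, None]

cube = torch.rand(3, 21, 21, 21)
normalized = normalize_3d(cube, mean, std)   # mean/std from the batch statistics above
```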

Do you know if there is such an example within the documentation?

Hi, Alan. You have multiple normalization transforms here: https://torchio.readthedocs.io/transforms/preprocessing.html

TorchIO is optimized to work with 3D (and 4D) data! Don’t hesitate to open an issue in the TorchIO repository if you have any further questions.
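
For example, something along these lines should work with a recent TorchIO version; this is only a sketch, so please check the preprocessing docs linked above for the exact transform names and signatures:

```python
import torch
import torchio as tio

cube = torch.rand(3, 21, 21, 21)      # channels-first 4D tensor

transform = tio.ZNormalization()      # one of the intensity preprocessing transforms
normalized = transform(cube)          # recent TorchIO versions accept plain 4D tensors
```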


Hi Fernando. Thanks for the reply; I will have a closer look at the documentation.
