Transfer learning for 3D data


I am working on an image classification task which uses a labelled dataset of 3D medical images (volumetric CT brain scans).
I want to try transfer learning but am not sure if it's possible to use pre-trained 2D CNNs on my dataset (and if so, how to do that).
Alternatively, is it simply best to use pre-trained 3D CNNs (the only one I currently know of being this 3D ResNet for Action Recognition)?

I’m quite new at this so any help is appreciated. Many thanks.

Using the pretrained parameters of a 2D CNN won't work out of the box, as the depth dimension is missing, e.g. in the conv kernels.
You could try to create this dimension, e.g. by repeating the filters along the depth axis, but I'm not sure if this would be helpful.

Would it be possible to use each slice as a 2D image first and apply a standard transfer learning approach?
This would make the workflow easier and could give you a signal whether your 3D model would benefit from the pretrained parameters.