Hi all, I'm here looking for some experienced advice.
I'm working with a 3D image dataset stored as GIF files.
Each GIF file contains 20 slices of 5 different images, representing a 3D object.
Each GIF file has 4 different fluorescent filters, plus the original image as a 5th filter.
When loading a GIF file into a tensor, its size is [f, s, w, h], where:
f is the number of filters per image = 5
s is the number of slices per filter = 20
w is the width of each slice = 121
h is the height of each slice = 121
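To make the layout concrete, here is a minimal sketch of one sample with these dimensions (using a random tensor as a stand-in for real data; PyTorch assumed, since the models mentioned below use it):

```python
import torch

# One sample: 5 filters, 20 slices per filter, 121x121 pixels per slice
x = torch.randn(5, 20, 121, 121)  # [f, s, w, h]
print(x.shape)  # torch.Size([5, 20, 121, 121])
```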
Now the idea is to build a model that can distinguish different classes in the images.
My question is the following: is there any way I could use all 5 filters as inputs without having to implement a 5D convolution?
Currently I have chosen 3 of those 5 filters and used a C3D model as a proof of concept, and I'm looking to move to a 3D ResNet, but I would love to be able to use all of the input information instead of just 3 of the 5 filters.
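For context, this is roughly the setup of my current proof of concept, treating the 3 chosen filters as the input channels of a 3D convolution (a simplified sketch with made-up layer sizes, not my actual model code):

```python
import torch
import torch.nn as nn

# Current POC: 3 of the 5 filters used as the input channels of a 3D conv.
# nn.Conv3d expects input of shape [N, C, D, H, W], so the filter axis
# plays the role of the channel dimension C and the slices play D.
conv = nn.Conv3d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

batch = torch.randn(2, 3, 20, 121, 121)  # N=2 samples, 3 filters, 20 slices
out = conv(batch)
print(out.shape)  # torch.Size([2, 16, 20, 121, 121])
```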
Any thoughts??
Github of the project: GitHub - fmcalcagno/TaraPlanktonRecognition: TaraPlanktonRecognition