I am currently working on a CNN that will classify Alzheimer's disease from PET images. I have preprocessed the images with an external program and already converted them into tensors of size [1, 169, 208, 179]. Is it advisable to scale this down given the high computational cost of 3D convolutions, and if so, what target sizes would be suitable?
Or would it be better to convert the images to tensors myself and resize them in the same step? If so, how do I handle this for 3D volumes, since most tutorials I've seen only cover 2D images?
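For context, here is a minimal sketch of what I have in mind for the resizing step, assuming PyTorch and a made-up target size of (96, 112, 96) (any other size that fits the memory budget would work the same way):

```python
import torch
import torch.nn.functional as F

# Dummy volume standing in for one preprocessed PET scan: [C, D, H, W]
vol = torch.randn(1, 169, 208, 179)

# F.interpolate with mode="trilinear" expects a 5D tensor [N, C, D, H, W],
# so a batch dimension is added first and removed again afterwards.
resized = F.interpolate(
    vol.unsqueeze(0),        # -> [1, 1, 169, 208, 179]
    size=(96, 112, 96),      # example target size, not a recommendation
    mode="trilinear",
    align_corners=False,
).squeeze(0)                 # -> [1, 96, 112, 96]

print(resized.shape)
```

Would something like this be the right approach, or is it better to resample the NIfTI/DICOM files before tensor conversion so voxel spacing stays physically meaningful?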
If someone could briefly explain this, it would be a great help.