I have some oddly sized images (24 pixels tall, 1440 pixels wide, with 8 channels) which come from a multi-spectral panoramic imaging sensor I built. The width corresponds to a 360 degree field of view at quarter-degree resolution.
I want to apply semantic or instance segmentation using one of the available networks, such as SegNet. I am just starting to experiment with these networks, and I noticed that many of them are built on top of pre-trained backbones (e.g. VGG).
What I am trying to understand is whether I can still make use of a pre-trained backbone that was trained on more conventional image sizes (e.g. 512 by 512), and if so, how? Or, since my input size and channel count are so different from those used to train the backbone, would I have to train "from scratch"?
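For concreteness, here is a numpy sketch of one adaptation I have seen mentioned for mismatched channel counts: "inflating" the first convolutional layer's pre-trained RGB filters to cover extra input channels by averaging and tiling (with rescaling so activation magnitudes stay comparable). The filter shapes below are hypothetical, not taken from any specific network:

```python
import numpy as np

# Hypothetical pre-trained first-layer weights: 64 filters of size 3x3 over
# 3 (RGB) input channels, laid out as (out_channels, in_channels, kH, kW).
rng = np.random.default_rng(0)
w_rgb = rng.standard_normal((64, 3, 3, 3))

# Average over the RGB channel axis, then tile the result across the 8
# spectral bands. The 3/8 factor rescales so that the summed contribution
# across input channels matches the original 3-channel filters.
w_mean = w_rgb.mean(axis=1, keepdims=True)        # shape (64, 1, 3, 3)
w_8ch = np.repeat(w_mean, 8, axis=1) * (3.0 / 8)  # shape (64, 8, 3, 3)

print(w_8ch.shape)  # (64, 8, 3, 3)
```

Is something along these lines what people do in practice, or is the first layer usually just re-initialized and trained from scratch while the rest of the backbone keeps its pre-trained weights?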
Any advice regarding my objective (semantic/instance segmentation) is always welcome. I am currently using a data set of 15K training cases that I have generated myself in simulation.
Thanks in advance