Using SegNet on panoramic images

Howdy folks,

I have some really odd-sized images (24 pixels tall, 1440 pixels wide, and 8 channels) which come from a multi-spectral panoramic imaging sensor I built. The width corresponds to a 360-degree field of view at quarter-degree resolution (360° / 0.25° = 1440 columns).

I want to apply semantic or instance segmentation using one of the available networks, such as SegNet. I am just starting to experiment with these networks, and I noticed that many of them build on pre-trained backbones (e.g. VGG).

What I am trying to understand is whether I can still make use of a pre-trained network that was trained on more conventional image sizes (e.g. 512 × 512), and if so, how? Or, since my input size is so different from the sizes used to train the pre-trained network, would I have to train “from scratch”?

Any advice regarding my objective (semantic/instance segmentation) is welcome. I am currently using a dataset of 15k training examples that I generated myself in simulation.

Thanks in advance

Galto

Hello,
You can use an already-trained network, but in that case you need to adapt your images, e.g. by resizing the input with cv2. Otherwise, you have to do weight transfer learning into a new architecture that takes the image size you want into account. We’re here if you need help.
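
For the first option, here is a minimal sketch (assuming your panorama is a NumPy array; the 512 × 512 target and the interpolation mode are just illustrative choices):

```python
import cv2
import numpy as np

# Hypothetical 8-channel panorama: 24 px tall, 1440 px wide.
pano = np.random.rand(24, 1440, 8).astype(np.float32)

# Resize channel by channel to sidestep any channel-count limits
# in cv2.resize; note that dsize is given as (width, height).
target_w, target_h = 512, 512
resized = np.stack(
    [cv2.resize(pano[:, :, c], (target_w, target_h),
                interpolation=cv2.INTER_LINEAR)
     for c in range(pano.shape[2])],
    axis=2,
)
print(resized.shape)  # (512, 512, 8)
```

Keep in mind that stretching 24 px of height to 512 px is a lot of interpolation, so the second option may suit your data better.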

Hi Nicolo,

Thanks for your reply.

I should probably be doing weight transfer learning, as my horizontal resolution is already as coarse as I feel comfortable with.
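
For reference, here is roughly the kind of weight transfer I have in mind: a sketch assuming PyTorch and torchvision (0.13+ for the string weights API), where VGG16's first conv layer is re-seeded for my 8 channels. The 3/8 rescaling is just a common heuristic to keep activation magnitudes comparable, not something I have verified:

```python
import torch
import torch.nn as nn
from torchvision import models

# VGG16 backbone with ImageNet weights (torchvision >= 0.13 API).
vgg = models.vgg16(weights="IMAGENET1K_V1")

# The first conv expects 3 input channels; swap in an 8-channel
# version seeded from the mean of the pre-trained RGB kernels.
old_conv = vgg.features[0]                  # Conv2d(3, 64, 3, padding=1)
new_conv = nn.Conv2d(8, 64, kernel_size=3, padding=1)
with torch.no_grad():
    mean_w = old_conv.weight.mean(dim=1, keepdim=True)     # (64, 1, 3, 3)
    # Replicate across 8 channels, rescaled so the summed
    # activation roughly matches the original 3-channel case.
    new_conv.weight.copy_(mean_w.repeat(1, 8, 1, 1) * 3.0 / 8.0)
    new_conv.bias.copy_(old_conv.bias)
vgg.features[0] = new_conv

# The conv stack is size-agnostic, but each max-pool halves the
# height: 24 -> 12 -> 6 -> 3 -> 1, so only four of VGG16's five
# pooling stages survive a 24-px-tall input.
encoder = vgg.features[:24]                 # up to and including pool4
x = torch.randn(1, 8, 24, 1440)
print(encoder(x).shape)                     # torch.Size([1, 512, 1, 90])
```

Does that look like a sane starting point?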

Do you happen to know any good papers or examples you could point me to on weight transfer learning?

In the meantime, I’ll google for it.

Thanks again!

Galto