How to apply transfer learning to 2D data that are not RGB images


I would like to apply a pre-trained model (e.g. VGG16 or ResNet50) to my dataset. My dataset doesn’t consist of normal images (3-channel RGB with values in [0, 255]). It contains remote sensing data of shape (2, 75, 75) with values in the range [-25, 0].

Quoted from the torchvision models documentation:
All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].

However, my dataset doesn’t fit that size or range. What is the best practical way to do transfer learning on such a dataset? Thanks very much!

I’m not sure whether such cross-domain transfer learning will be helpful, but you can always normalize your data so that it has mean 0 and standard deviation 1.
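As a concrete sketch of that suggestion (the data here is a hypothetical stand-in with the shapes and value range from the question):

```python
import torch

def standardize(x):
    """Normalize a batch (N, C, H, W) to per-channel mean 0, std 1."""
    mean = x.mean(dim=(0, 2, 3), keepdim=True)
    std = x.std(dim=(0, 2, 3), keepdim=True)
    return (x - mean) / std

# Hypothetical stand-in for the remote sensing data: values in [-25, 0]
data = -25 * torch.rand(64, 2, 75, 75)
z = standardize(data)
# z now has per-channel mean ~0 and std ~1 regardless of the original range
```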

Thanks for your reply!

Yes, I understand that I can always normalize my data to mean 0 and standard deviation 1. However, when ResNet50 was trained, for example, its training set ranged over (0, 1), so I wonder whether it is “good” to apply the same normalization to my dataset, whose values range over (-25, 0).

My dataset is small (about 2,000 samples), and I hope to use transfer learning to reduce overfitting. Is there a practical approach for such a scenario?

Well, if you are concerned about the range, you should be more worried that the pretrained network was trained on a completely different domain, and that difference of course includes the range. That said, you can definitely try it if your data shares some of the structure of natural images (spatial invariance, etc.).

I’m not sure whether transfer learning reduces overfitting, though… But if your dataset is small, I’d say just try running it! :slight_smile: GL