We are looking at creating a number of models based on existing architectures (e.g. variations on ResNet, U-Net, Mask R-CNN). The variations may take the form of different activation functions, additional layers, or a different number of input channels (single-channel vs. multi-channel images), and we are also looking into NAS.
We are looking for a simple way to compare architectures for transfer learning: we want to compare a model's architecture (without its weights and biases) to determine whether an internal pre-trained model we already have is compatible with the one we are about to train, so we can bootstrap the new model via transfer learning.
Ideally, we would be able to generate a hash or fingerprint of the graph, and that fingerprint would be stable across PyTorch versions. Has anyone had experience trying to do this, or any ideas on how it could be achieved?
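One direction we have sketched out (not a working solution, just to make the idea concrete): serialize a structural description of the model deterministically and hash that, ignoring weights entirely. The sketch below is torch-free for illustration; in practice the layer descriptors might come from walking `model.named_modules()` and recording each module's class name plus its hyperparameters, but the `layers` representation and the layer configs here are hypothetical stand-ins, and cross-version stability would still depend on how those descriptors are extracted.

```python
import hashlib
import json

def fingerprint(layers):
    """Hash a structural description of a model (no weights involved).

    `layers` is a list of (layer_type, config) pairs -- a hypothetical
    stand-in for what could be extracted from a real model, e.g. the
    class name and hyperparameters of each submodule.
    """
    # sort_keys gives a canonical, order-stable serialization of configs
    canonical = json.dumps(layers, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Two variants that differ only in activation function.
base = [
    ("Conv2d", {"in_channels": 1, "out_channels": 64, "kernel_size": 3}),
    ("ReLU", {}),
]
variant = [
    ("Conv2d", {"in_channels": 1, "out_channels": 64, "kernel_size": 3}),
    ("GELU", {}),
]

print(fingerprint(base) == fingerprint(base))     # deterministic: True
print(fingerprint(base) == fingerprint(variant))  # architectures differ: False
```

The open question for us is what to put in the descriptors: too much detail (e.g. repr strings that change between PyTorch releases) breaks version stability, too little fails to distinguish genuinely incompatible variants.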