Storing a large number of different models

Hello everyone,

While doing research with PyTorch over the last couple of months, I have been facing a conceptual problem related to storing variations of models, and I cannot find a satisfying answer (excuse me if I missed an obvious resource, but my search has led nowhere).

If I create a model and store its parameters, I can recreate the model by loading those parameters into the same model class. However, if I slightly vary the model to investigate new possibilities, this changes the source code of the previous network, so I am now obliged to store nearly identical source code (99% the same) twice, once per variation, if I want to test or compare the models later. Repeated over dozens or hundreds of variations, the code redundancy becomes problematic due to the sheer number of near-identical networks. Additionally, if the weights of a network are serialized but the model's source is not saved and is subsequently changed, retrieving the model later requires guessing the original architecture.
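To make the coupling concrete, here is a minimal sketch (class names `NetV1`/`NetV2` are my own invented examples): a `state_dict` saved from one architecture can only be loaded back into a class whose layers match exactly, so even a small variation breaks restoration.

```python
import torch
import torch.nn as nn

class NetV1(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

class NetV2(nn.Module):
    """A 'slight variation' of NetV1: one extra layer."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)
        self.out = nn.Linear(2, 1)

    def forward(self, x):
        return self.out(self.fc(x))

# Save weights from the first variant.
torch.save(NetV1().state_dict(), "net.pt")

# Loading into the identical class works.
NetV1().load_state_dict(torch.load("net.pt"))

# Loading into the varied class fails: the saved dict has no
# entries for the new 'out' layer, so strict loading raises.
try:
    NetV2().load_state_dict(torch.load("net.pt"))
except RuntimeError as e:
    print("mismatch:", type(e).__name__)
```

This is exactly why the class definition effectively becomes part of the serialized artifact: without it, the weights alone are not enough.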

If I understand correctly, the protocol buffers available in TensorFlow help ease this problem, but no similar mechanism is available in PyTorch.

I would like to ask whether anyone uses a workaround for this issue, or whether I am missing something obvious and it is not really an issue. Also, what would be a correct way to tackle it, in case no solution is widely used yet?
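One workaround I can sketch (all names here, `MLP`, `save_checkpoint`, `load_checkpoint`, are my own, not an established API) is to keep a single parametrized model class and bundle the constructor arguments into the checkpoint next to the `state_dict`. Then each "variation" is just a different config, and any saved model can be rebuilt from one source file:

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """One parametrized class that covers many architecture variants."""
    def __init__(self, in_dim, hidden, out_dim, activation="relu"):
        super().__init__()
        act = {"relu": nn.ReLU, "tanh": nn.Tanh}[activation]
        layers, prev = [], in_dim
        for h in hidden:
            layers += [nn.Linear(prev, h), act()]
            prev = h
        layers.append(nn.Linear(prev, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def save_checkpoint(model, config, path):
    # Store the architecture description together with the weights,
    # so the checkpoint alone is enough to rebuild the model.
    torch.save({"config": config, "state_dict": model.state_dict()}, path)

def load_checkpoint(path):
    ckpt = torch.load(path)
    model = MLP(**ckpt["config"])
    model.load_state_dict(ckpt["state_dict"])
    return model

config = {"in_dim": 8, "hidden": [16, 16], "out_dim": 2, "activation": "tanh"}
save_checkpoint(MLP(**config), config, "variant.pt")
restored = load_checkpoint("variant.pt")
```

This only works as long as the variations can be expressed as constructor arguments of one class; structurally different architectures would still need their own (versioned) source, so it mitigates rather than fully solves the problem.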

Thank you very much,