If I understand correctly, a model instance/object has to be initialized before we can load the model weight values from a file into it. Let me know if I am mistaken.
If I used a UNet, for example, where the only initialization parameters (used to create a UNet instance) are the number of feature planes/channels at each “depth” level, would there be a way to pull that information from a saved file before loading, so I could “dynamically” initialize a UNet object whose structure is valid for the data in the saved file?
One workaround would be to create a custom file format containing both the model weights and the initialization parameter values, but I’d rather extract that info from PyTorch’s file format directly, if possible.
That’s the recommended way. You would initialize the same model and just load the state_dict into it.
The usual approach is to create a dict and store everything inside it. E.g. you could store the optimizer.state_dict() as well as your depth level. Once you’ve created this dict, you could store and load it directly using torch.save and torch.load.
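A minimal sketch of that checkpoint pattern. TinyNet, the file name, and the channels list are placeholders standing in for the actual UNet and its depth-level hyperparameters:

```python
import torch
import torch.nn as nn

# Stand-in for the UNet: a model whose structure depends on an
# init parameter (the per-level channel counts).
class TinyNet(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
            for c_in, c_out in zip(channels[:-1], channels[1:])
        )

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

channels = [3, 16, 32]  # the "depth level" hyperparameters
model = TinyNet(channels)

# Bundle the init parameters and the weights in a single dict.
checkpoint = {
    "channels": channels,
    "model_state_dict": model.state_dict(),
}
torch.save(checkpoint, "checkpoint.pt")

# Later: read the hyperparameters first, then build a matching model
# and load the weights into it.
ckpt = torch.load("checkpoint.pt")
restored = TinyNet(ckpt["channels"])
restored.load_state_dict(ckpt["model_state_dict"])
```

The same dict could also carry `optimizer.state_dict()`, the current epoch, etc., so one file is enough to resume training.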
Thanks for the reply.
Does your answer apply to C++ (libtorch)? I cannot seem to find a way to do what you suggested in C++.
You could use some utility functions from this topic or load your jitted model directly (which seems to be the recommended way).
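A short sketch of the jitted-model route, assuming the model can be scripted (TinyNet and the file name are hypothetical). The resulting file bundles the architecture and the weights, so the C++ side can open it with `torch::jit::load` without rebuilding the model by hand:

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for the UNet.
class TinyNet(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyNet(3, 8)
scripted = torch.jit.script(model)  # jit the model
scripted.save("jitted_model.pt")    # self-contained: code + weights

# In C++: torch::jit::load("jitted_model.pt") reads this file directly.
# Round-trip check in Python:
loaded = torch.jit.load("jitted_model.pt")
```

Since the jitted file already knows its own structure, the “extract the init parameters first” problem goes away for inference; for training in C++, the utility functions from the linked topic may still be needed.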
“load your jitted model directly (which seems to be the recommended way)”
I am using C++ for both training and evaluation.
I’ll take a look at the discussion you linked. Thanks.
P.S. PyTorch is a very powerful yet very easy-to-use deep learning library. The PyTorch team has done a great job so far. Keep up the good work!