What is the best way to save a model, including its parameters? A few different ways are discussed in Saving and Loading Models — PyTorch Tutorials 2.6.0+cu124 documentation, but they all have drawbacks.
- `torch.save(model.state_dict(), PATH)`: Doesn’t save the architecture, only the parameters.
- " Save/Load Entire Model": Not recommended because “pickle does not save the model class itself. Rather, it saves a path to the file containing the class, which is used during load time. Because of this, your code can break in various ways when used in other projects or after refactors.”
- " Export/Load Model in TorchScript Format": TorchScript is deprecated.
- ONNX: Clumsy an doesn’t work for all models.
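To make the first option's trade-off concrete, here is a minimal sketch of the `state_dict` workflow. `TinyNet` is a hypothetical model used only for illustration; the point is that the class definition must be importable again at load time, because the file holds only the tensors.

```python
import torch
import torch.nn as nn

# Hypothetical model class; with the state_dict approach, this source
# code must be available wherever the model is later loaded.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet()
torch.save(model.state_dict(), "tinynet.pt")  # parameters only, no architecture

# Loading: the architecture must be reconstructed first, then the
# weights are copied into it.
restored = TinyNet()
restored.load_state_dict(torch.load("tinynet.pt", weights_only=True))
restored.eval()
```

`weights_only=True` restricts the unpickler to tensor data, which is safer than a full unpickle; it is the default in recent PyTorch versions.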
The recommended approach is the first one: saving the `state_dict` of the model, as it avoids the common pitfalls that cause issues during model loading.
Sure, but then you need to have the source code of the model available in a separate file. I want to be able to call my script like `python runmodel.py model-file` and run whatever kind of model model-file contains.
Yes, but that is a softer requirement than what storing the “entire” model via pickle demands: keeping the source files in the same structure, with no breaking changes, for as long as you want to load the file.
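For comparison, a sketch of the whole-model route and the kind of generic loader script the question asks for (the script name `runmodel.py` comes from the question; `load_any_model` is a hypothetical helper). The catch is exactly the one quoted above: `torch.save(model, path)` pickles a *reference* to the model's class, so loading still silently depends on that class being importable under the same module path.

```python
import sys
import torch

# Generic loader sketch for `python runmodel.py model-file`.
# torch.save(model, path) pickles a reference to the model's class
# (module path + class name), so this only works if that class is
# importable here under the same name.
def load_any_model(path):
    # weights_only=False performs a full unpickle, i.e. it trusts the file.
    model = torch.load(path, weights_only=False)
    model.eval()
    return model

if __name__ == "__main__" and len(sys.argv) > 1:
    model = load_any_model(sys.argv[1])
    print(model)
```

Saving is just `torch.save(model, "model-file")` with the whole module object. Note that this executes arbitrary pickle code on load, so it should only be used with files you trust.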
Not sure I understand? Because there are serialization formats that can store the weights and the model structure without causing a fuss.
Do you think that’s the best option?
ONNX. But it’s painful and annoying to use.