I’m interested in checking that I’m taking the right approach to storing and loading models with `torch.save`/`torch.load` using the newish default `weights_only=True`. I’m distributing code which fits, saves, and loads a few models, and it makes sense to me that I would need to register my own `nn.Module` subclasses with `torch.serialization.add_safe_globals`.
I’m a bit more confused about torch’s own classes. For instance, I have to add `nn.Linear`, `nn.ReLU`, etc. to the safe globals list in order for saving and loading to work. Since these are fundamental classes in torch, this makes me wonder if I’m cutting corners or doing something that wasn’t intended. Am I introducing a security risk for my users by doing this, maybe in a scenario involving some monkey patching of these classes? Should I just use `weights_only=False`? Should I be doing everything with state dicts instead? If this is the intended usage of this feature, I’m curious to understand why, if anyone can offer guidance.
Thanks!!