Hi,
I have an old model saved from PyTorch 1.5.1 using torch.save(model, "MyModel.pt").
When I try to use it on PyTorch 1.7.1, I get the following error:
torch.nn.modules.module.ModuleAttributeError: 'FrozenBatchNorm2d' object has no attribute 'eps'
Unfortunately, I don’t know the architecture used to create the model, so I cannot recreate it in 1.7.1 and copy the weights over.
If you’ve stored the “complete” model, you would still need the model class definition and all source files, wouldn’t you, or did you use a pretrained model somehow?
Where did you tune these parameters? Directly in the source file (in torchvision) or in another script?
I don’t think torch.save(model, path) stores the source files alongside the model, so torch.load would still try to import them from your environment.
Note that I’m speculating a lot here, as I’m not using this workflow, since it can break in many ways.
The parameters were mainly for the RPN.
I can load the old model in 1.7.1, but when I use it for inference I get the error I cited above. If the sources were stored within the .pt file, I would likely not see this error.
I don’t think the sources are stored (which is the reason the loading could break) and, if I’m not mistaken, it’s a known pickle limitation.
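To illustrate that limitation with a minimal stdlib sketch (no torch involved, and `Foo` is just a toy stand-in for a model class): pickle records a class by its module path and name, not by its source code, which is why loading can break when the class definition changes between versions:

```python
import pickle

class Foo:
    """Toy class standing in for a model; its source code is NOT pickled."""
    def __init__(self):
        self.weight = 1.0

data = pickle.dumps(Foo())

# Only a reference (module path + class name) is stored, no source or bytecode:
assert b"Foo" in data

# Unpickling looks the class up again in the *current* environment, so
# attributes added or removed in a newer class definition are not reconciled.
obj = pickle.loads(data)
assert obj.weight == 1.0
```

If `Foo` were missing (or had changed) at load time, unpickling would fail with a lookup error, which is analogous to the missing-attribute error seen here.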
Since you can already load the model, you could try to set the missing eps to 0, which seems to be the default based on this PR.
I don’t think you have to replace the FrozenBatchNorm2d layers and could try to add the missing eps attribute directly to the already used layers.
If I understand your issue correctly, the error is only raised in the forward pass when the eps attribute is used, not while loading the state_dict. If so, then adding the eps before executing the forward pass might work.
import torch
from torchvision.ops import misc

model = torch.load("...")
for name, layer in model.named_modules():
    if isinstance(layer, misc.FrozenBatchNorm2d):
        layer.eps = 0.
torch.save(model, "...")
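A slightly more defensive variant of the same patching pattern, sketched here with a hypothetical stand-in class so it runs without torchvision, only sets eps when it is actually missing, so re-running the fix on an already-patched model changes nothing:

```python
class FrozenBatchNorm2d:
    """Hypothetical stand-in for torchvision.ops.misc.FrozenBatchNorm2d."""
    pass

layer = FrozenBatchNorm2d()  # imagine this module came out of torch.load

# Only patch layers that actually lack the attribute, so the fix is
# safe to apply more than once (idempotent).
if not hasattr(layer, "eps"):
    layer.eps = 0.0  # default suggested above

print(layer.eps)  # → 0.0
```

In the real loop you would apply the same `hasattr` guard to each `FrozenBatchNorm2d` found by `model.named_modules()`.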