I have an old model saved from PyTorch 1.5.1 using torch.save(model, path).
When I try to use it on PyTorch 1.7.1, I get the following error:
torch.nn.modules.module.ModuleAttributeError: 'FrozenBatchNorm2d' object has no attribute 'eps'
Unfortunately, I don’t know the architecture used to create the model, so I cannot recreate the same in 1.7.1 and copy the weights.
What are my options?
Thank you in advance.
If you’ve stored the “complete” model, you would still need the model class definition and all source files, wouldn’t you, or did you use a pretrained model somehow?
It’s a Mask R-CNN model from torchvision, but I know that I tuned many parameters, and I don’t remember which ones.
Where did you tune these parameters? Directly in the source file (in torchvision) or in another script?
I don’t think that torch.save(model, path) will store all source files with it; torch.load would thus try to find these definitions in your current environment.
Note that I’m speculating a lot here, as I’m not using this workflow, since it can break in many ways.
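For what it’s worth, the less fragile workflow saves only the state_dict (plain tensors, no pickled module class) and rebuilds the architecture in code before loading. A minimal sketch, using an nn.Linear as a stand-in for the real model and an in-memory buffer in place of a file path:

```python
import io

import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for the real model class

# save only the parameters and buffers, not the pickled module
buffer = io.BytesIO()  # in practice a path such as "model_state.pt"
torch.save(model.state_dict(), buffer)

# later (possibly on a newer PyTorch): recreate the architecture, then load
buffer.seek(0)
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load(buffer))
```

The catch is that this still requires the model definition at load time, which is exactly what is missing here, so it only helps for future saves.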
The parameters were mainly for the RPN.
I can load the old model in 1.7.1, but when I use it for inference I get the error I cited above. If the sources were stored within the .pt file, I would likely not have this error.
I don’t think the sources are stored (which is the reason the loading could break) and, if I’m not mistaken, it’s a known pickle limitation.
Since you already can load the model, you could try to set the missing eps attribute to 0, which seems to be the default based on this PR.
So if I understand what you suggest, what I have to do is:
- Load the old model.
- Find the FrozenBatchNorm2d layers.
- Create new FrozenBatchNorm2d layers with PyTorch 1.7.1, using the old eps parameter.
- Copy the old layers’ weights/biases into the new layers.
- Replace the old layers with the new ones.
Am I correct?
If yes, I have two questions:
- How do I replace one layer with another one?
- How do I copy the weights and biases from one layer into the new one?
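The replace-and-copy steps above could look roughly like this sketch, using a toy nn.Sequential in place of the real Mask R-CNN (the layer index and names here are hypothetical):

```python
import torch
import torch.nn as nn

# toy stand-in; in practice `model` would be the loaded Mask R-CNN
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))

old = model[1]                          # the layer to replace
new = nn.BatchNorm2d(8, eps=1e-5)       # freshly constructed replacement
new.load_state_dict(old.state_dict())   # copies weights, biases, and buffers
model[1] = new                          # swap it on the parent container

# for layers addressed by attribute name, assign on the parent module instead:
# setattr(parent_module, "bn1", new)
```

Assigning the new module back onto its parent is enough for replacement, since PyTorch registers submodules through attribute assignment.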
I don’t think you have to replace the FrozenBatchNorm2d layers and could try to add the missing eps attribute directly to the already used layers.
If I understand your issue correctly, the error is only raised in the forward pass when the eps attribute is used, not while loading the state_dict. If so, then adding the eps before executing the forward pass might work.
But if the model is already created with a missing parameter/argument, how can I add it?
You can add attributes by directly assigning them:
model.frozen_bn1.eps = 0.
It worked, thanks a lot.
import torch
from torchvision.ops import misc

model = torch.load("...")
for name, layer in model.named_modules():
    if isinstance(layer, misc.FrozenBatchNorm2d):
        layer.eps = 0.