Missing key(s) in state_dict: "pool.p"

I have a trained model that I saved using

torch.save(model.state_dict(), PATH)

Now I am testing the model after setting it up as follows:

import torch
import torch.nn.functional as F

class Model(torch.nn.Module):
    def __init__(self, model, pool):
        super(Model, self).__init__()
        self.model = model
        self.pool = pool

    def forward(self, sample):
        output = self.model(sample)
        #output = self.pool(output) # GeM
        output = torch.sum(output.view(output.size(0), output.size(1), -1), dim=2) # SPoC

        output = F.normalize(output, p=2, dim=1) # L2 normalization
        return output

model = Model(model=resnet, pool=gmp)
model.load_state_dict(torch.load(PATH))
model.eval()

As you can see, I am not actually using the pool in forward() but something else instead (the pool was used in a different experiment with the same model).

Now I am getting this error:

File "test.py", line 123, in <module>
    model.load_state_dict(torch.load(PATH))
File "/home/.../python3.8/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Model:
	Missing key(s) in state_dict: "pool.p". 

Here PATH is the path to the saved model ‘model.pt’.
Can anyone explain what the error is and how to solve it?
I checked the forums and found a case where users were running into this with nn.DataParallel, but I am not using that.
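For reference, the commented-out line in forward() is labeled GeM, and a standard GeM pooling layer (as used in the image-retrieval literature) carries a learnable exponent p, which is exactly the pool.p key the error complains about. A minimal sketch, assuming gmp follows the usual formulation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized-mean pooling with a learnable exponent p (sketch)."""
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        # Registered as a parameter, so it appears in the state_dict as "p"
        # (or "pool.p" once the module is assigned to self.pool).
        self.p = nn.Parameter(torch.ones(1) * p)
        self.eps = eps

    def forward(self, x):
        return F.avg_pool2d(
            x.clamp(min=self.eps).pow(self.p),
            (x.size(-2), x.size(-1)),
        ).pow(1.0 / self.p)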

The only thing I can imagine is that you passed something other than a tensor or an nn.Module when you instantiated the model the first time (before saving), but you did pass one when you tried to load.

Can you make a reproducible script?

Well, I could not understand what you explained :frowning:
I can make a script, but the code is too long. What I can tell you is that this same code ran when I tested the experiments that actually used pool = gmp.

Classes have attributes (anything you access via self.whatever).
PyTorch is coded such that when you set
self.anything = X
it inspects that X. If it is an nn.Module or an nn.Parameter (a layer or a learnable tensor), it keeps track of it.
Then, when you save the model, PyTorch iterates over all the layers you defined.
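A tiny sketch of that tracking behaviour (the Demo class and names here are made up for illustration):

import torch
import torch.nn as nn

class Demo(nn.Module):
    def __init__(self, pool):
        super().__init__()
        self.pool = pool  # tracked only if it is an nn.Module or nn.Parameter

with_param = Demo(pool=nn.PReLU())  # nn.PReLU has a learnable weight
plain = Demo(pool=lambda x: x)      # a plain callable is not tracked

print(list(with_param.state_dict().keys()))  # ['pool.weight']
print(list(plain.state_dict().keys()))       # []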

This bug can be generated as follows.
When you created an instance of the class,
model = Model(model=resnet, pool=gmp) <<<< model is an instance of the class Model
gmp (the X in my example) was something other than an nn.Module or nn.Parameter, so nothing was saved for it.
The second time, when you want to load the model, X is an nn.Module (or nn.Parameter) with parameters of its own.
Then loading “misses” those keys.
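Continuing the toy Demo example above, the mismatch can be reproduced like this:

# Save a state_dict from the instance whose pool has no parameters...
torch.save(plain.state_dict(), "toy.pt")

# ...then load it into an instance whose pool *does* have a parameter:
with_param.load_state_dict(torch.load("toy.pt"))
# RuntimeError: Error(s) in loading state_dict for Demo:
#     Missing key(s) in state_dict: "pool.weight"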

So in your case it seems pool is an nn.Module, but something defined inside it (the parameter p) is missing from the saved state_dict. Either due to what I explained, or due to some conditional statement that skipped it when the checkpoint was saved.
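If the pool really is never called in forward(), one possible workaround is to load non-strictly. A sketch: strict=False simply skips the mismatching keys and leaves pool.p at its freshly initialized value, which is harmless as long as the pool is unused.

state = torch.load(PATH)
result = model.load_state_dict(state, strict=False)
print(result.missing_keys)     # ['pool.p'] -- safe to ignore if pool is never used
print(result.unexpected_keys)  # []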