Load_state_dict

Hi,
The network parameters (the weights and the model state) need to be stored and then loaded back later. But when I try to load them, I get an error. Could I change the following code so that it works?

import copy
from torchvision import models

net = models.vgg16_bn().to(device)
Fix_net_par1 = net.named_parameters()            # yields only learnable parameters, no buffers
Fix_net_par = copy.deepcopy(dict(Fix_net_par1))
net.load_state_dict(Fix_net_par)                 # <=== the line that raises the error

RuntimeError: Error(s) in loading state_dict for VGG:
Missing key(s) in state_dict: "features.1.running_mean", "features.1.running_var", "features.4.running_mean", "features.4.running_var", "features.8.running_mean", "features.8.running_var", "features.11.running_mean", "features.11.running_var", "features.15.running_mean", "features.15.running_var", "features.18.running_mean", "features.18.running_var", "features.21.running_mean", "features.21.running_var", "features.25.running_mean", "features.25.running_var", "features.28.running_mean", "features.28.running_var", "features.31.running_mean", "features.31.running_var", "features.35.running_mean", "features.35.running_var", "features.38.running_mean", "features.38.running_var", "features.41.running_mean", "features.41.running_var".


Hello,

I have never stored and reloaded parameters at runtime.

Anyway, why don't you just save your model parameters to disk and then reload them, as described in the PyTorch tutorial saving_loading_models_tutorial? (A minimal sketch is below.)
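
For reference, this is roughly what that disk-based approach looks like; the file name vgg16_bn_state.pt is just a placeholder:

import torch
from torchvision import models

net = models.vgg16_bn()

# Save only the state_dict (the approach recommended in the tutorial).
torch.save(net.state_dict(), "vgg16_bn_state.pt")

# Later: rebuild the model and reload the weights from disk.
net = models.vgg16_bn()
net.load_state_dict(torch.load("vgg16_bn_state.pt"))
net.eval()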

Alessandro.

Because writing to and reading from disk is really time-consuming.

And did you try this?

model = TheModelClass(*args, **kwargs)
model.train()
actual_dict = model.state_dict()          # contains parameters *and* buffers (e.g. running stats)
model = TheModelClass(*args, **kwargs)    # fresh instance
model.load_state_dict(actual_dict)

Basically, this is the canonical way to do it without writing to disk, keeping the data in memory. Note that state_dict(), unlike named_parameters(), also includes the BatchNorm buffers (running_mean / running_var), which is exactly what the missing keys in your error message refer to.
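
Applied to the vgg16_bn example from the first post, a minimal sketch could look like this (reusing the Fix_net_par name; the deepcopy is there because state_dict() returns references to the live tensors, so without it later training would overwrite the snapshot):

import copy
import torch
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = models.vgg16_bn().to(device)

# state_dict() covers parameters and buffers; deepcopy detaches the snapshot
# from the live tensors.
Fix_net_par = copy.deepcopy(net.state_dict())

# ... train / modify net ...

net.load_state_dict(Fix_net_par)   # restores weights and running statistics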