Hi
I’m having problems loading a GPU-trained model on CPU. The best model was saved with:
data = {'opt': self.opt.state_dict(), 'd': self.state_dict()}
t.save(data, path)
I tried loading the best model saved (on CPU) using the code below:
checkpoint = torch.load('bestcheckpoint', map_location='cpu')
model.load_state_dict(checkpoint['d'])
Unfortunately I keep getting an error saying:
size mismatch for content_convs.3.4.bias: copying a param of torch.Size([200]) from checkpoint, where the shape is torch.Size([250]) in current model.
size mismatch for content_convs.3.4.weight: copying a param of torch.Size([200]) from checkpoint, where the shape is torch.Size([250]) in current model.
size mismatch for content_convs.3.4.running_mean: copying a param of torch.Size([200]) from checkpoint, where the shape is torch.Size([250]) in current model.
size mismatch for fc.0.weight: copying a param of torch.Size([2000, 1200]) from checkpoint, where the shape is torch.Size([2000, 2000]) in current model.
However, the same checkpoint loads perfectly on GPU. Does anyone know why the dimensions differ when it’s loaded on CPU?
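For reference, here is a small shape-comparison check, in case the problem is that the model being constructed at load time has different layer sizes than the one that was saved. The model class below is a hypothetical stand-in (my real architecture isn’t shown here); the point is just that the same class built with a different width argument produces a state dict with different shapes, which is what `load_state_dict` complains about:

```python
import torch.nn as nn

# Hypothetical stand-in for the real model: same class, different width.
def make_model(width):
    return nn.Sequential(nn.Linear(10, width), nn.BatchNorm1d(width))

saved = make_model(200)      # stands in for the model trained and saved on GPU
current = make_model(250)    # stands in for the model built at load time

checkpoint = {'d': saved.state_dict()}   # mirrors the save format above

# List every parameter/buffer whose shape differs between checkpoint and model
mismatches = [
    name
    for name, param in current.state_dict().items()
    if name in checkpoint['d'] and checkpoint['d'][name].shape != param.shape
]
print(mismatches)
```

Running this prints the mismatching entries (weights, biases, and the batch-norm running statistics), which matches the pattern of errors above.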
Thanks in advance