Model load error: storage has wrong size

I trained a model in a multi-node, multi-GPU environment.
When I try to load the model for validation, I encounter the following error.
Can someone help?
Thanks.

checkpoint = torch.load(args.resume, map_location=lambda storage, loc: storage)
File "/opt/conda/lib/python2.7/site-packages/torch/serialization.py", line 303, in load
return _load(f, map_location, pickle_module)
File "/opt/conda/lib/python2.7/site-packages/torch/serialization.py", line 476, in _load
deserialized_objects[key]._set_from_file(f, offset, f_is_real_file)
RuntimeError: storage has wrong size: expected 128 got 64
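One possible cause (not confirmed in this thread) is that the checkpoint file itself is corrupted: in multi-node, multi-GPU training, several processes writing to the same checkpoint path at once can leave a truncated or interleaved file, which then fails to deserialize with exactly this error. Below is a minimal sketch of guarding the save so that only rank 0 writes, assuming torch.distributed is already initialized and that model, optimizer, epoch, and path come from a typical DDP training loop (they are not part of the original post):

import torch
import torch.distributed as dist

def save_checkpoint(model, optimizer, epoch, path):
    # Write the checkpoint from a single process only; concurrent writes
    # from several ranks to the same file can corrupt it and later raise
    # "storage has wrong size" in torch.load.
    if not dist.is_initialized() or dist.get_rank() == 0:
        torch.save({
            'epoch': epoch,
            'state_dict': model.state_dict(),
            'optimizer': optimizer.state_dict(),
        }, path)
    if dist.is_initialized():
        dist.barrier()  # keep other ranks from racing ahead before the file is fully written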

I encountered this problem too. Have you fixed this bug?

@ishanic and @alphadl: Have either of you resolved this issue? I am having the same problem when I train my model on a GPU but then try to load it on my CPU for evaluation.

The code I’m running is:

torch.load('model.pkl', map_location=lambda storage, loc: storage)

The error I got is:

*** RuntimeError: storage has wrong size: expected 857934349 got 128
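For loading on CPU, passing map_location='cpu' has the same effect as the lambda above, so the error here is more likely an incomplete checkpoint file (for example, truncated while copying it from the training machine) than a device mismatch. A small sketch of a sanity check plus the load, assuming 'model.pkl' is the same path used above and was written with torch.save:

import os
import torch

# A truncated or partially copied file is a common cause of
# "storage has wrong size"; compare this size with the file on the
# machine that wrote the checkpoint.
print(os.path.getsize('model.pkl'))

# map_location='cpu' maps GPU-saved tensors onto the CPU,
# equivalent to map_location=lambda storage, loc: storage.
checkpoint = torch.load('model.pkl', map_location='cpu')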

I met the same error when I pretrained a model on multiple GPUs, and I still haven't found a solution. Please share how you dealt with it (if you have), thanks.