load_state_dict on CPU

Hi,

I have a saved RNN model trained on a GPU and saved using the method described here

Now I need to load the model on a CPU. I went through other topics in the forum that address loading a GPU model onto a CPU, but all of them assume that the entire model was saved, not just the state_dict(). When I try the method suggested there, which is:

torch.load('mysavedmodel', map_location=lambda storage, location: 'cpu')

I get the following error:

    File "predictor.py", line 107, in <module>
        predictionmodel.load_state_dict(torch.load("mymodel.txt", map_location=lambda storage, loc:'cpu'))
      File "/home/ubuntu/.local/lib/python2.7/site-packages/torch/serialization.py", line 231, in load
        return _load(f, map_location, pickle_module)
      File "/home/ubuntu/.local/lib/python2.7/site-packages/torch/serialization.py", line 379, in _load
        result = unpickler.load()
      File "/home/ubuntu/.local/lib/python2.7/site-packages/torch/_utils.py", line 71, in _rebuild_tensor
        module = importlib.import_module(storage.__module__)
    AttributeError: ("'unicode' object has no attribute '__module__'", <function _rebuild_tensor at 0x7f2e52a73cf8>, (u'cpu', 0, (512L, 63L), (63L, 1L)))

Also, what is the .pt extension people seem to mention in those threads?

Hey,
I know this is quite old, but here is what worked for me when I had the same issue:

torch.load('file.pt', map_location=lambda storage, location: storage)

PS: The .pt extension is just a convention; you can use whatever extension you want.
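To spell out why the original call fails: the map_location callable must return a storage object, not the string 'cpu'. Returning a string is what produces the AttributeError in the traceback. A minimal self-contained sketch (using a stand-in nn.RNN and a hypothetical 'mymodel.pt' file; substitute your own model class and checkpoint path):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; swap in your own RNN class with
# matching dimensions so the state_dict keys line up.
model = nn.RNN(input_size=63, hidden_size=512)

# Simulate a checkpoint saved as a state_dict (as on the GPU machine).
torch.save(model.state_dict(), 'mymodel.pt')

# The lambda must return the storage object itself, not the string 'cpu';
# returning a string is what triggers the AttributeError above.
state_dict = torch.load('mymodel.pt',
                        map_location=lambda storage, location: storage)
model.load_state_dict(state_dict)
```

On more recent PyTorch versions you can also pass map_location='cpu' or map_location=torch.device('cpu') directly instead of the lambda.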