Cannot load a model trained on 2 GPUs onto the CPU for inference


I get the following error while loading a model trained on 2 GPUs onto the CPU:

RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu

I tried loading with:

torch.load('', 'cpu')
torch.load('', map_location=lambda storage, location: 'cpu')

which is what my research suggested, but I always get the same error.

It seems the gpu-1 part is loaded onto the CPU but the gpu-0 part is not, hence the error message.

I would welcome some help.


Did you save the model itself or just its state dict? You can check whether all the parameters are on the GPU by iterating over model.parameters() and inspecting each parameter's .device attribute.
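A minimal sketch of that check, using a toy Linear layer as a hypothetical stand-in for the loaded network:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the loaded model.
model = nn.Linear(4, 2)

# Gather the device of every parameter and buffer.
devices = {p.device for p in model.parameters()} | {b.device for b in model.buffers()}
print(devices)

# For CPU inference, every entry should be a CPU device.
all_on_cpu = all(d.type == "cpu" for d in devices)
```

If all_on_cpu is False, at least one tensor is still resident on a GPU.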


I think the best approach would be to load it on the GPU, move it to the CPU, and save it again.
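A sketch of that round trip, assuming it runs on the machine that has the GPUs; the Linear layer and file paths are hypothetical stand-ins for the real checkpoint:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Stand-in for: model = torch.load('model_gpu.pt') on the training machine.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(4, 2).to(device)

# Move every parameter and buffer to the CPU, then re-save.
model = model.to("cpu")
path = os.path.join(tempfile.gettempdir(), "model_cpu.pt")
torch.save(model, path)  # the re-saved model is now CPU-resident
```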

Thanks for the answer, but:



I tried to load, then move and save again:

torch.save(torch.load('').to('cpu'), 'models_train/')

That didn't work either; I get the same error message.

Don't save the model itself; save the model's state dict. Saving the whole model object can cause various problems.
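A sketch of the state-dict workflow, again with a toy Linear layer and a temporary path standing in for the real network and checkpoint. With nn.DataParallel, the usual pattern is to save model.module.state_dict() so the keys carry no wrapper prefix; the prefix-stripping step below handles checkpoints saved from the wrapper itself:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Hypothetical stand-in for the 2-GPU-trained network.
model = nn.Linear(4, 2)
path = os.path.join(tempfile.gettempdir(), "checkpoint.pt")

# With nn.DataParallel, prefer: torch.save(model.module.state_dict(), path)
torch.save(model.state_dict(), path)  # plain model here, no wrapper

# On the CPU-only machine, map every storage to the CPU while loading.
state = torch.load(path, map_location="cpu")

# If the dict was saved from the DataParallel wrapper itself, each key has a
# 'module.' prefix that must be stripped before load_state_dict:
state = {k.removeprefix("module."): v for k, v in state.items()}

fresh = nn.Linear(4, 2)
fresh.load_state_dict(state)
fresh.eval()  # ready for CPU inference
```

Because only tensors are serialized, this avoids the DataParallel wrapper being pickled into the checkpoint, which is what forces the cuda:0 requirement at load time.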