Cannot load a model trained on 2 GPUs onto the CPU for inference

Hello,

I get the following error when loading a model trained on 2 GPUs onto the CPU:

RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu

I tried loading with:

torch.load('my_model.pt', map_location='cpu')
torch.load('my_model.pt', map_location=lambda storage, location: storage)

That is what my research suggested, but I always get the same error.

It seems that the part of the model that was on gpu-1 is loaded onto the CPU, but the gpu-0 part is not, hence the error message.

I would welcome some help.

Regards
Daniel

Did you save the model itself or its state_dict? You can check whether all the parameters are on the GPU with:

next(model.parameters()).is_cuda

I think the best approach would be to load it on the GPU, move it to the CPU, and save it again.
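A minimal sketch of that load → move → save round trip. The tiny `nn.Linear` and the temp-file paths are placeholders standing in for the real `gpu_model.pt` checkpoint:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Hypothetical stand-in for the checkpoint from the thread; the real file
# ('gpu_model.pt') would have been saved from a GPU-trained model.
ckpt = os.path.join(tempfile.gettempdir(), 'gpu_model.pt')
torch.save(nn.Linear(4, 2), ckpt)

# Load with every storage mapped to CPU, move the module, and re-save.
# weights_only=False is required on newer PyTorch versions, where the
# default changed and full-model pickles no longer load otherwise.
loaded = torch.load(ckpt, map_location='cpu', weights_only=False)
loaded = loaded.to('cpu')
torch.save(loaded, os.path.join(tempfile.gettempdir(), 'cpu_model.pt'))
```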

Thanks for the answer, but:

next(model.parameters()).is_cuda

--> True

I tried to load, then move and save again:

model = torch.load('gpu_model.pt')
model = model.to('cpu')
torch.save(model, 'models_train/CPU_model_cpu.pt')

It didn’t work either; I get the same error message.

Don’t save the model itself; save the model’s state_dict instead. Saving the whole model can cause various problems.
https://pytorch.org/tutorials/beginner/saving_loading_models.html
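For a model trained with `nn.DataParallel` (the usual 2-GPU setup), the saved state_dict keys carry a `module.` prefix that has to be stripped before loading into a plain model on CPU. A minimal sketch assuming that wrapper; the `nn.Linear` and the temp-file path are placeholders:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Placeholder model; nn.DataParallel mimics the 2-GPU training wrapper
# (it prefixes state_dict keys with 'module.' even without GPUs present).
dp_model = nn.DataParallel(nn.Linear(4, 2))
ckpt = os.path.join(tempfile.gettempdir(), 'my_model_state.pt')
torch.save(dp_model.state_dict(), ckpt)

# Load the state dict on CPU and strip the 'module.' prefix
state = torch.load(ckpt, map_location='cpu')
state = {k.replace('module.', '', 1): v for k, v in state.items()}

cpu_model = nn.Linear(4, 2)       # rebuild the same architecture
cpu_model.load_state_dict(state)  # raises if any key mismatches
cpu_model.eval()                  # switch to inference mode
```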