Loading model parameters when the number of GPUs changes

At the moment, it seems that model parameters trained on multiple GPUs cannot be loaded into a model on a single GPU, nor the reverse.

For example, I trained my model on two GPUs, and when I test it I would like to use only one GPU, but loading the parameters fails.

Does anyone have a solution?

Have you looked into the map_location arg here? https://github.com/pytorch/pytorch/blob/master/torch/serialization.py#L289
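For reference, `map_location` can be given as a callable taking `(storage, location)` and returning a storage, which lets you control where each tensor is deserialized. A minimal sketch (the `torch.load` call is shown only as hypothetical usage, assuming a checkpoint file named `checkpoint.pth`):

```python
# map_location may be a callable taking (storage, location) and returning a
# storage. This identity mapping keeps every tensor on the CPU during
# deserialization, regardless of which GPU it was originally saved from.
def map_to_cpu(storage, location):
    return storage

# Hypothetical usage (assumes PyTorch and an existing checkpoint file):
# import torch
# state_dict = torch.load("checkpoint.pth", map_location=map_to_cpu)
# model.load_state_dict(state_dict)
```

After loading onto the CPU this way, you can move the model to whatever device you actually want with `model.cuda()` or `model.to(device)`.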

It is great! I did not know that before. Thanks.

So in my case, how should I write map_location?

I have tried map_location=lambda storage, loc: storage (which maps everything to CPU storage), but loading still fails.