[Resolved] CUDA runtime error

Hi,
I trained my model on 2 different GPUs (using Tensor.cuda()) and saved the parameters. Then I wanted to load those parameters into a CPU model, but I got this error: cuda runtime error (10) : invalid device ordinal at torch/csrc/cuda/Module.cpp:84
How can I solve this?

Thanks!

Found the answer in this topic.
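
For reference, a minimal sketch of the usual fix: pass map_location to torch.load so the CUDA storages in the checkpoint are remapped onto the CPU. The model definition and checkpoint path below ("model_params.pth") are hypothetical placeholders, assuming the parameters were saved with torch.save(model.state_dict(), ...):

```python
import torch
import torch.nn as nn

# Hypothetical model definition; substitute your own architecture.
model = nn.Linear(10, 2)

# map_location remaps CUDA storages in the checkpoint onto the CPU,
# avoiding the "invalid device ordinal" error when the saved device
# IDs do not exist on the loading machine.
state_dict = torch.load("model_params.pth", map_location=torch.device("cpu"))
# Equivalent form: torch.load("model_params.pth", map_location=lambda storage, loc: storage)

model.load_state_dict(state_dict)
model.eval()
```

One extra caveat: if the model was wrapped in nn.DataParallel for the two-GPU training, the keys in the saved state_dict may carry a "module." prefix, which would need to be stripped before load_state_dict succeeds on a plain (unwrapped) model.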