I run my code on several nodes of a cluster. On GPU-enabled nodes I train a neural network and save its parameters via torch::save. Now I want to load them on a CPU-only node, where I have no GPU support, but I get errors complaining that GPU support is missing (which is correct). I think I need to tell torch::load to map the loaded tensors to the CPU, but I can't find anywhere how to do that.
For Python, the approach (passing map_location to torch.load) is clear from a few posts. But how do I do this with the C++ frontend?
I also tried moving the model to the CPU on the GPU node before saving, via model->to(device) with device set to a CPU device, but that didn't help either. Any ideas on how this can be done?
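For reference, here is a minimal sketch of what I'm doing (the network and file name are placeholders, not my real code). The save side moves everything to the CPU first; on the load side I would expect some CPU-mapping option, analogous to Python's map_location, but I don't know what it is:

```cpp
#include <torch/torch.h>

int main() {
  // Tiny placeholder network; the real model is larger.
  torch::nn::Sequential model(
      torch::nn::Linear(4, 8),
      torch::nn::ReLU(),
      torch::nn::Linear(8, 2));

  // On the GPU node: move all parameters to the CPU before saving,
  // hoping the archive then contains only CPU tensors.
  model->to(torch::kCPU);
  torch::save(model, "params.pt");

  // On the CPU-only node: plain load, which is where the
  // c10::Error below is thrown. I assume there is an overload
  // such as torch::load(model, "params.pt", torch::kCPU) that
  // maps storages to the CPU, but I can't find it documented.
  torch::load(model, "params.pt");
  return 0;
}
```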
Error I get while loading:
terminate called after throwing an instance of 'c10::Error'
what(): Cannot initialize CUDA without ATen_cuda library. PyTorch splits its backend into