Build on CPU then run on another computer with GPU

Hi, I am using libtorch. I only have a laptop with no GPU, but I have access to a server with powerful GPUs. I would therefore like to compile (with CMake) against the GPU version of libtorch locally on my laptop and then move the compiled binary to the remote server for execution. Note that I cannot compile on the remote server, since it does not have CMake installed and I am not root there.

Hence, I wonder whether it is possible to compile against the GPU version of libtorch on a computer without CUDA and without a GPU, and then move the compiled binary to another machine with GPUs for execution.
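For reference, the kind of program I am trying to build is roughly the following (just a minimal sketch, not my actual code); it links against the CUDA build of libtorch and only decides at run time whether a GPU is visible:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // Use the GPU if the CUDA runtime and a device are visible at run time,
  // otherwise fall back to the CPU.
  torch::Device device = torch::cuda::is_available()
      ? torch::Device(torch::kCUDA)
      : torch::Device(torch::kCPU);
  std::cout << "Running on: " << device << std::endl;

  // A small workload to make sure the chosen device actually works.
  torch::Tensor a = torch::randn({1024, 1024}, device);
  torch::Tensor b = torch::randn({1024, 1024}, device);
  torch::Tensor c = torch::mm(a, b);
  std::cout << "Result sum: " << c.sum().item<float>() << std::endl;
  return 0;
}
```

My understanding is that building and linking this on the laptop would only need the libtorch headers and libraries (plus the CUDA toolkit), not a physical GPU, and the GPU would only matter once the binary actually runs on the server. Please correct me if that is wrong.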

  1. It may not work, because the CUDA toolkit is needed when compiling against the GPU build; when torch is loaded, the related .so files will be loaded as well.
  2. The missing CMake is not a big problem. You can still pip install cmake (for example, pip install --user cmake) even though you are not in the sudoers list; I am sure that works well.

You can install CUDA on your laptop (even without a GPU) and try to cross-compile PyTorch for your GPU architectures.
However, I’m not sure what the best use case would be (building wheels?), as I’ve only done it in docker containers, which were executed on the servers with GPUs.
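As a quick sanity check after copying a binary over, something along these lines (just a sketch) should tell you whether the CUDA-enabled build actually sees the devices on the server:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // Verify that a binary built on a CPU-only machine finds the GPUs
  // once it is executed on the server.
  std::cout << std::boolalpha;
  std::cout << "CUDA available:  " << torch::cuda::is_available() << "\n";
  std::cout << "Device count:    " << torch::cuda::device_count() << "\n";
  std::cout << "cuDNN available: " << torch::cuda::cudnn_is_available() << std::endl;
  return 0;
}
```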

Do you mean it would be better if I could compile on the remote server with GPUs instead of on my laptop?

If that’s possible, sure, as you wouldn’t have to cross-compile on a potentially slower laptop.
However, you’ve mentioned you cannot compile on the server.