How to compile PyTorch on linux VM but use it in container

Hi, I was trying to use PyTorch in a container with two Tesla K80 GPU cards configured. However, I got an error message like: "RuntimeError: CUDA error: no kernel image is available for execution on the device".

While searching I found this issue: Cuda error: no kernel image is available for execution on the device · Issue #31285 · pytorch/pytorch · GitHub
It tells me the K80 GPU is too old, so I have to compile PyTorch from source to support it.

Now I am following the procedure from here:

But it seems I have to compile PyTorch on a machine that has a GPU card installed, and unfortunately that is not my case.

I am using PyTorch in a Kubernetes container, which runs Ubuntu 22.04 and has the GPU cards attached. However, I have no root account inside the container, so I am not able to compile the PyTorch source code there (I cannot run sudo to install all of the required dependencies).

I have another RHEL VM where I can run sudo to install all of the necessary dependencies, but that machine has no GPU card installed.

So my question is:

  • May I compile PyTorch from source on the RHEL VM, which has no GPU installed?
  • How can I then use the compiled PyTorch in the Ubuntu container?
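For the first question: compiling CUDA kernels only needs the CUDA toolkit (nvcc), not a physical GPU, so you can cross-build on the GPU-less VM by telling the build which architecture to target. A minimal sketch, assuming the CUDA toolkit and PyTorch build dependencies are already installed on the RHEL VM (the build commands are shown commented out, since they take hours and need the full source checkout):

```shell
# Target the K80's compute capability (3.7) explicitly, since the build
# cannot auto-detect a GPU on this machine.
export TORCH_CUDA_ARCH_LIST="3.7"
export USE_CUDA=1
echo "building for arch: $TORCH_CUDA_ARCH_LIST"

# Inside the pytorch source checkout:
# python setup.py bdist_wheel        # produces dist/torch-*.whl
#
# Copy the wheel into the container and install it without root:
# pip install --user torch-*.whl
```

Note that the wheel must be built against the same Python version and a glibc no newer than the one in the Ubuntu 22.04 container, or the install/import will fail.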

Thanks a lot!

Your K80 should be supported by the PyTorch binaries shipped with CUDA 11.x, since these binaries support all architectures between 3.7 (which is your K80) and 9.0 (if you use CUDA 11.8).
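To verify this inside the container, you can compare the architectures compiled into the installed binaries (torch.cuda.get_arch_list()) against the device's compute capability (torch.cuda.get_device_capability()). A small sketch of that comparison, with the helper function and the example arch list being illustrative rather than an official API:

```python
def capability_supported(capability, arch_list):
    """Return True if a (major, minor) compute capability is covered by an
    arch list of strings such as 'sm_37' or 'compute_90'."""
    target = f"{capability[0]}{capability[1]}"
    return any(arch.split("_")[-1] == target for arch in arch_list)

# Example arch list matching the 3.7-to-9.0 range described above; at
# runtime you would use torch.cuda.get_arch_list() instead.
cuda118_archs = ["sm_37", "sm_50", "sm_60", "sm_70",
                 "sm_75", "sm_80", "sm_86", "sm_90"]

print(capability_supported((3, 7), cuda118_archs))  # K80 -> True
print(capability_supported((3, 5), cuda118_archs))  # older Kepler -> False
```

If the check fails for your installed wheel, that is exactly the situation that produces the "no kernel image is available" error, and building from source with the right arch list is the fallback.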

Thanks a lot @ptrblck

We will have a try.