CUDA extension install without rebuilding

Hello,

How do I move a CUDA extension after it has been built?
Right now I rebuild the extension every time I rerun the code, but since the code inside the extension is unchanged, this shouldn’t be necessary.

Is there an easy way to do this?

You mean when using the C++ extensions?
If so, the build process is cached and only the first run should compile. All subsequent runs will detect that nothing has changed and just reuse the build. You can find these cached modules in /tmp on Unix machines.
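For reference, here is a minimal sketch of JIT-loading a C++/CUDA extension with `torch.utils.cpp_extension.load`; the source file names (`my_op.cpp`, `my_op_kernel.cu`) are placeholders. Repeated calls reuse the cached build as long as the sources are unchanged, and the cache location can be redirected with the `TORCH_EXTENSIONS_DIR` environment variable:

```python
from torch.utils.cpp_extension import load

# JIT-compile the extension, or reuse a cached build if nothing changed.
# "my_op.cpp" / "my_op_kernel.cu" are placeholder source files.
my_op = load(
    name="my_op",
    sources=["my_op.cpp", "my_op_kernel.cu"],
    verbose=True,  # logs whether a cached build was reused
)

# The cache directory can be overridden, e.g.:
#   TORCH_EXTENSIONS_DIR=/some/persistent/dir python train.py
```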

Thanks for the response.

It is because I’m compiling it in my Docker container, so I don’t have any caching functionality.
With TensorFlow I could build it in a build image, but I seem to be unable to do the same with PyTorch.
So I build it in my container each time. What I was wondering was whether I could save the output of the compilation to a shared folder and then copy it in during image build in subsequent builds.

Do you know whether this is possible?

You can specify the build directory if you want and share that directory between your Docker images (see the docs).
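As a sketch (the path `/workspace/torch_ext_cache` is just a placeholder for whatever directory you mount or copy between image builds), the `build_directory` argument of `torch.utils.cpp_extension.load` points the compilation output at a folder you control:

```python
import os
from torch.utils.cpp_extension import load

# Placeholder path: a directory mounted into the container or copied
# from a previous image layer, so the compiled objects survive rebuilds.
build_dir = "/workspace/torch_ext_cache"
os.makedirs(build_dir, exist_ok=True)

my_op = load(
    name="my_op",
    sources=["my_op.cpp", "my_op_kernel.cu"],
    build_directory=build_dir,  # reuse this folder across runs/images
    verbose=True,
)
```

Alternatively, setting the `TORCH_EXTENSIONS_DIR` environment variable to a mounted path achieves a similar effect without changing the Python code.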

Thanks, I wasn’t aware of that possibility.