Hi there,
I am working on a project that relies heavily on PyTorch, which I now want to deploy as a conda package.
I have some custom C++ and CUDA code (equivalent but faster) that I also want to include in my conda build.
Everything works so far, but I haven't been able to implement the cuda / cpuonly logic that pytorch uses. My idea would be to just reuse the pytorch logic:
```
conda install [myownbuild] cudatoolkit=10.1 -c [mychannel]
conda install [myownbuild] cpuonly -c [mychannel]
```
so that when pytorch is installed with the respective cudatoolkit, the CUDA version of my own build is used, and when the cpuonly flag is given, the CPU-only version of my build is used as well.
But how can I handle this in the conda meta.yaml?
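For context, my current understanding (from poking around the pytorch channel, so take it with a grain of salt) is that `cpuonly` is just a tiny metapackage acting as a mutex: the CPU builds of pytorch depend on it, while the CUDA builds depend on a specific `cudatoolkit` instead, so the solver picks the matching build. I imagine the cpuonly recipe looks roughly like this (my own untested guess, not the actual pytorch recipe):

```yaml
# my guess at a minimal cpuonly mutex recipe (untested)
package:
  name: cpuonly
  version: "1.0"

build:
  noarch: generic
  # track_features makes builds that depend on cpuonly
  # less preferred unless cpuonly is explicitly requested
  track_features:
    - cpuonly
```

If that is indeed how it works, I would only need my own CPU builds to depend on `cpuonly` and my CUDA builds to depend on `cudatoolkit`, but I haven't verified this.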
Currently, I have a simple Linux switch that looks like this:
```yaml
...
host:
  - python {{ python }}
  - numpy {{ numpy }}
  - pybind11 >=2.4
  - cudatoolkit-dev  # [linux]
...
```
which effectively assumes that every Linux machine has a CUDA card. Is it possible to use the cpuonly / cudatoolkit flag somewhere in there? Or could someone tell me how this is done in pytorch?
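What I'm imagining is splitting my package into CPU and CUDA variants via `conda_build_config.yaml` and selectors, roughly like this (the `cuda_variant` key is a name I made up, not something pytorch actually uses):

```yaml
# conda_build_config.yaml -- sketch, untested
cuda_variant:
  - cpu
  - cuda101
```

and then in `meta.yaml`:

```yaml
build:
  string: {{ cuda_variant }}_{{ PKG_BUILDNUM }}

requirements:
  host:
    - python {{ python }}
    - numpy {{ numpy }}
    - pybind11 >=2.4
    - cudatoolkit-dev  # [cuda_variant != "cpu"]
  run:
    - cudatoolkit >=10.1,<10.2  # [cuda_variant == "cuda101"]
    - cpuonly                   # [cuda_variant == "cpu"]
```

But I don't know whether selectors can see variant keys like that, or whether pytorch solves this differently.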
Cheers,
Lucas