Hi all,
We’re working on an ML app that uses PyTorch, but we’re having trouble specifying the GPU version of PyTorch as a dependency for the build. Our project uses `pyproject.toml` to specify all dependencies and setuptools for the build. Our goal is to allow both CPU and GPU (if available) runs of PyTorch after a user `pip install`’s our app, without any further configuration needed.
We want to specify `torch==2.0.1+cu118` for Windows and Ubuntu users, so that if they have a GPU we will be able to use GPU PyTorch (`+cu118` just defaults to CPU if no GPU is available). We also want to specify the CPU-only `torch==2.0.1` for macOS users, since there is no CUDA build for macOS.
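For concreteness, here is a sketch of the platform-conditional spec we’d like to express, using PEP 508 environment markers (the project name is a placeholder; the `+cu118` local version is the part pip can’t resolve from PyPI alone):

```toml
[project]
name = "our-app"   # placeholder name for illustration
version = "0.1.0"
dependencies = [
    # GPU-capable wheel for Windows/Linux; falls back to CPU at runtime
    "torch==2.0.1+cu118; sys_platform != 'darwin'",
    # CPU-only wheel for macOS, which has no CUDA builds
    "torch==2.0.1; sys_platform == 'darwin'",
]
```

The markers themselves are valid, but pip has no way to learn from this file that the `+cu118` wheel lives on PyTorch’s own index rather than PyPI.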
I don’t believe there is a good way to do this solely using `pyproject.toml` and setuptools, since we have to specify the `--index-url` for the `+cu118` build of PyTorch, and the standard dependency-specifier format (PEP 508, as used by PEP 621 metadata) has no way to attach an index URL to a dependency.
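By contrast, a `requirements.txt` *can* carry the index URL next to the pins, which is roughly the behavior we want but can’t express in `pyproject.toml` (the URL below is PyTorch’s cu118 wheel index):

```
# requirements.txt — expressible here, but not in PEP 621 metadata
--extra-index-url https://download.pytorch.org/whl/cu118
torch==2.0.1+cu118; sys_platform != "darwin"
torch==2.0.1; sys_platform == "darwin"
```

This works for direct installs, but it doesn’t help downstream users who just `pip install` our package.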
Ideally we don’t have to bring in other dependency managers like Poetry or PDM, but if there is a good solution with those I will consider it. The perfect solution would work with `pyproject.toml` and setuptools.
Thanks for the help