Distributing binary Python wheels that depend on PyTorch with custom CUDA kernels

The GitHub project e3nn is trying to get on PyPI. We depend on PyTorch and ship custom CUDA kernels. Our goal is to distribute precompiled binaries much like PyTorch itself does, i.e. `pip install e3nn==0.0.1+cu101` for version 0.0.1 built against CUDA 10.1. I am looking for guidance on how to get this done.
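For context, our extension build goes through the standard `torch.utils.cpp_extension` route. A minimal sketch of the `setup.py` we have in mind (the module and source file names below are placeholders, not our actual layout):

```python
# setup.py -- minimal sketch; module and source names are placeholders.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="e3nn",
    version="0.0.1+cu101",  # the local version label would encode the CUDA build
    ext_modules=[
        CUDAExtension(
            name="e3nn._cuda_kernels",                      # placeholder module
            sources=["src/kernels.cpp", "src/kernels.cu"],  # placeholder sources
        ),
    ],
    cmdclass={"build_ext": BuildExtension},
)
```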

  1. We will need to compile to the manylinux standard. I have seen, in the builder repo, how manylinux PyTorch itself is compiled and uploaded to PyPI. Is it possible to get access to the Docker image used there, so we can compile our code against the same environment? Where can I find it?
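     For what it's worth, the generic (CPU-only) manylinux images are public on quay.io; whether the CUDA-enabled variants from the builder repo (something like `pytorch/manylinux-cuda101` on Docker Hub, if I have the name right) are usable by third parties is exactly what I'd like to confirm. A rough sketch of the build recipe we have in mind, with the image name and Python version as assumptions:

     ```shell
     # Rough sketch -- image name and interpreter path are assumptions.
     # Generic manylinux2014 image published by PyPA (CPU-only):
     docker run --rm -v "$(pwd)":/io quay.io/pypa/manylinux2014_x86_64 /bin/bash -c '
       /opt/python/cp37-cp37m/bin/pip wheel /io -w /io/wheelhouse &&
       auditwheel repair /io/wheelhouse/e3nn-*.whl -w /io/dist
     '
     # For CUDA builds we would presumably swap in the builder image instead,
     # e.g.: docker pull pytorch/manylinux-cuda101
     ```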

  2. Furthermore, how does one create the `+cu101` version variant on PyPI? This feature appears to be undocumented, and looking at the files listed for the torch package on pypi.org, there are no `+cu101` versions available there. How can we get this functionality as well?
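     As far as I can tell, the `+cu101` suffix is a PEP 440 "local version identifier", which may explain why it never shows up on pypi.org: PyTorch seems to serve those wheels from its own index (the `-f https://download.pytorch.org/whl/torch_stable.html` installs) rather than from PyPI proper. A tiny sketch of how pip-style tooling splits such a version string (`split_pep440_local` is my own helper name, not a real API):

     ```python
     def split_pep440_local(version: str):
         """Split a PEP 440 version into (public, local) parts.

         Everything after '+' is the local version identifier, so
         '0.0.1+cu101' is public version 0.0.1 with local label 'cu101'.
         """
         public, _, local = version.partition("+")
         return public, local or None

     print(split_pep440_local("0.0.1+cu101"))  # ('0.0.1', 'cu101')
     print(split_pep440_local("0.0.1"))        # ('0.0.1', None)
     ```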

CC @seemethere, who might be familiar with the binary packages. :slight_smile: