Build PyTorch from source using conda-build


We observe quite significant speedups when we compile PyTorch from source. To make life easier for the people in my group, I would like to build PyTorch on our cluster using conda-build and then distribute it internally that way.

I tried to reproduce the build using the pytorch/builder repo via Docker containers but failed. I have come across this discussion, Building from source with conda build - #3 by seliad, which basically states that the builder repo is only for internal purposes. Is that still correct? @albanD, @seliad, how did it turn out for you in the end?

A local build using python setup.py install works nicely, so I guess I am not too far away.
What would be a good way to achieve what I am aiming for?

Does it make sense to use one of the Docker images on the PyTorch Docker Hub? What should the meta.yaml look like then?
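For reference, this is roughly the kind of recipe I have in mind. The package name, version, and build script here are my own guesses, not a known-working recipe:

```yaml
# Rough sketch of a conda-build meta.yaml for an internal PyTorch build.
# All names/versions below are assumptions on my part.
package:
  name: pytorch-local
  version: "1.0"

source:
  git_url: https://github.com/pytorch/pytorch.git

build:
  script: python setup.py install --single-version-externally-managed --record=record.txt

requirements:
  host:
    - python
    - setuptools
    - numpy
    - pyyaml
  run:
    - python
    - numpy
```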

Thanks in advance!



I think what the “internal only” message means is that we change it quite regularly to adapt it to our CI/release process, without special notice. So you should not rely on it.

If all these installs will be on the same machines (with the same local shared libs, CUDA versions, GPUs, etc.), then you should be able to run python setup.py bdist_wheel to create a wheel that you can then share (and pip install in each env).
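To make that concrete, here is a minimal sketch of the bdist_wheel workflow, demonstrated on a tiny dummy package (so it runs anywhere setuptools and wheel are installed); for PyTorch you would run the same `bdist_wheel` step from your source checkout instead:

```shell
# Sketch: build a shareable wheel with `python setup.py bdist_wheel`.
# Uses a throwaway dummy package; for PyTorch, run the bdist_wheel step
# from the pytorch source checkout instead. Assumes setuptools + wheel.
set -e
tmp=$(mktemp -d)
cd "$tmp"

mkdir demo_pkg
cat > demo_pkg/__init__.py <<'EOF'
def hello():
    return "hello"
EOF

cat > setup.py <<'EOF'
from setuptools import setup, find_packages
setup(name="demo-pkg", version="0.1", packages=find_packages())
EOF

# Build the wheel; it lands in ./dist/ and can be shared and
# `pip install`-ed in each environment on identical machines.
python setup.py bdist_wheel
ls dist/
```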
