We observe quite significant speedups when we compile PyTorch from source. To make life easier for the people in my group, I would like to build PyTorch on our cluster using conda-build and then distribute the resulting package internally.
I tried to reproduce the build using the pytorch/builder repo via Docker containers but failed. I came across this discussion Building from source with conda build - #3 by seliad, which basically states that the builder repo is only for internal purposes. Is that still correct? @albanD, @seliad, how did it turn out for you in the end?
A local build using `python setup.py install` works nicely, so I guess I am not too far away.
What would be a good way to achieve what I am aiming for?
Does it make sense to base the recipe on one of the images on the PyTorch Docker Hub? What should the `meta.yaml` look like in that case?
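In case it helps the discussion, here is a rough sketch of what I imagine such a recipe might look like. This is only a guess based on the local build working with `python setup.py install`; the package name, version, pinned tag, and dependency list are all assumptions on my part, not an official PyTorch recipe:

```yaml
# Hypothetical meta.yaml sketch -- names, versions, and the
# dependency list are assumptions, not an official recipe.
package:
  name: pytorch-local
  version: "2.1.0"            # placeholder version

source:
  git_url: https://github.com/pytorch/pytorch.git
  git_rev: v2.1.0             # assumed tag matching the version above

build:
  number: 0
  # Same command as the working local build:
  script: "{{ PYTHON }} setup.py install"

requirements:
  build:
    - "{{ compiler('c') }}"
    - "{{ compiler('cxx') }}"
    - cmake
    - ninja
  host:
    - python
    - numpy
    - pyyaml
    - setuptools
  run:
    - python
    - numpy
```

If something along these lines is viable, I would build it with `conda build <recipe-dir>` and serve the output from an internal channel. Corrections to the sketch are very welcome.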
Thanks in advance!