I was wondering why conda is preferred over pip. pip used to compile from source, but now it installs binary wheels too.
Is there any major difference? For Torch we have gone through at least 5 (?) package managers…
Edit: OK, this quote “Pip is a package manager, and Virtualenv is an environment manager. Conda is both.” (ref.) may explain it.
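To make that quote concrete, here is a minimal sketch of the two workflows (the environment names are arbitrary; on older conda releases `conda activate` was `source activate`):

```shell
# conda is both: it creates the isolated environment *and* installs packages into it.
conda create -n torch-env python=3.6
conda activate torch-env
conda install pytorch torchvision -c soumith

# pip only installs packages; isolation needs a separate tool such as virtualenv.
virtualenv torch-venv
source torch-venv/bin/activate
pip install numpy
```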
Hi, I met a problem as follows:
BLAS : Program is Terminated. Because you tried to allocate too many memory regions.
After setting threads as follows,
export OPENBLAS_NUM_THREADS=1
export GOTO_NUM_THREADS=1
export OMP_NUM_THREADS=1
the problem was solved. Therefore, the problem seems to arise from CPU multi-threading.
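If you cannot set these variables in the shell (e.g. in a notebook), a sketch of the same fix from Python; note the limits must be in place before the BLAS library is loaded, i.e. before importing torch or numpy in the process:

```python
import os

# These must be set before OpenBLAS is loaded, so do it before
# importing torch or numpy anywhere in this process.
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["GOTO_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"

# import torch  # only import after the environment is configured
```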
I just used batches of 50 or 100 images, where each image is 3×224×224. The machine has 2 GPUs (Tesla M40) and 2 CPUs (Intel Xeon E5 v4). Because I cannot connect to conda, I downloaded the PyTorch and torchvision sources and compiled them. The CUDA version is 7.5 and the cuDNN version is 5.1.
How can I deal with the problem? Thank you beforehand.
I guessed the problem is caused by an incompatibility between OpenBLAS and PyTorch, because it was completely solved by reinstalling PyTorch with the command ‘conda install pytorch torchvision -c soumith’. I have heard that PyTorch compiled from the GitHub sources cannot control OpenBLAS well.
I haven’t heard about problems with OpenBLAS before, but I don’t think it’s really a PyTorch bug, since we’re only calling its functions. If it can’t manage its threads properly, there’s nothing we can do about it. I’d recommend using MKL.
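One way to check which BLAS a build is actually linked against (a sketch, shown with NumPy's introspection since PyTorch builds of that era did not expose an equivalent; newer PyTorch releases have `torch.__config__.show()`):

```python
import numpy as np

# Prints the BLAS/LAPACK libraries this NumPy build was linked
# against (OpenBLAS, MKL, reference BLAS, ...).
np.__config__.show()
```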
Yes, I agree with you; it is not a PyTorch problem. I just recorded the problem and its solution to let others know about the underlying incompatibility with OpenBLAS, and to point them to the simplest installation route.