CPU usage issue when using pytorch compiled from source

Hi,

When I use PyTorch compiled from source (pytorch-src), the CPU usage is much higher than with the PyTorch installed directly through conda (pytorch-conda).

In addition, when using pytorch-src, GPU usage is lower than with pytorch-conda.

I just followed the instructions on GitHub to install from source. I'm using Python 2.7. Is there anything wrong with my installation?

Best,
Yikang


What OS are you on? And by higher CPU usage do you mean that PyTorch is able to use more cores effectively and run faster?

I'm on Ubuntu 14.04. It uses more cores, but it isn't any faster or more efficient, I'm afraid.

When I changed back to the PyTorch installed through conda, everything seemed fine.

Maybe the PyTorch you compiled from source is not using cuDNN? And maybe the compiled version is linking against OpenBLAS instead of MKL?

Hi Massa,

Thank you for your help.

I'm not very familiar with Linux. If you don't mind, could you tell me how to check them?

Best,
Yikang

You can check whether cuDNN is being used by running torch.backends.cudnn.is_acceptable(torch.cuda.FloatTensor(1)) in your Python interpreter.
To check whether the library was linked against OpenBLAS or MKL, run ldd libTH.so, where libTH.so is the library file that was compiled. See the sketch below for both checks.
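Here is a minimal sketch of both checks from Python. It assumes a typical source-build layout where the compiled libraries end up under torch/lib next to the installed package; the exact file names can differ between versions:

    from __future__ import print_function
    import os
    import torch

    # 1) Check whether cuDNN can be used for a CUDA tensor.
    x = torch.cuda.FloatTensor(1)
    print("cudnn acceptable:", torch.backends.cudnn.is_acceptable(x))
    print("cudnn version:", torch.backends.cudnn.version())

    # 2) Locate the compiled libraries so you can run ldd on them.
    #    (Assumes they live under torch/lib, which is typical for a source build.)
    lib_dir = os.path.join(os.path.dirname(torch.__file__), "lib")
    print("library directory:", lib_dir)
    print(sorted(os.listdir(lib_dir)))

    # Then, from a shell, check which BLAS the library links against:
    #     ldd <library directory>/libTH.so | grep -i -E 'blas|mkl'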


Check this thread; the discussion provides some details on BLAS and Torch, plus some additional settings and flags you can look into. Maybe run the scripts I provided there so we can see whether it's indeed a BLAS-related issue? Posting the compilation logs would also help if you really do see performance differences between the two installs.
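For the CPU-usage side specifically, one quick experiment worth trying is capping the number of intra-op threads: if a source build linked against OpenBLAS/OpenMP spawns a thread per core without any speedup, the extra CPU usage usually disappears immediately. A minimal sketch, where the environment variables are the standard OpenMP/OpenBLAS/MKL ones rather than anything PyTorch-specific, and your_script.py stands in for whatever you actually run:

    from __future__ import print_function
    import torch

    # Limit PyTorch's intra-op thread pool to a single thread and check
    # whether CPU usage drops without hurting throughput.
    torch.set_num_threads(1)
    print("intra-op threads:", torch.get_num_threads())

    # Roughly equivalent environment variables, set before launching Python:
    #     OMP_NUM_THREADS=1 OPENBLAS_NUM_THREADS=1 MKL_NUM_THREADS=1 python your_script.py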