I am running PyTorch on two different machines (no GPU) and observe drastic speed differences between them. Both machines run Ubuntu 18.04 and Python 3.7.3 with Anaconda, and PyTorch v1.2.0 was built from source.
Here is a small sample of code to reproduce my issue:
import torch
from time import time

torch.set_num_threads(10)

N_samples, N_features = (128, 5000)
x = [torch.randn(N_samples, N_features) for _ in range(10)]
y = [torch.randn(N_samples) for _ in range(10)]

model = torch.nn.Sequential(
    torch.nn.Linear(N_features, 500),
    torch.nn.ReLU(),
    torch.nn.Linear(500, 100),
    torch.nn.ReLU(),
    torch.nn.Linear(100, 1),
    torch.nn.ReLU()
)

def run_batch(xi, yi):
    t0 = time()
    model(xi)
    print(time() - t0)

[run_batch(xi, yi) for xi, yi in zip(x, y)]
On the first machine, I get:
0.45372891426086426
0.37187933921813965
0.37082910537719727
...
On the second, I get:
0.013288736343383789
0.009611368179321289
0.009567499160766602
...
On the slower machine, Python was compiled with --enable-optimizations; I'm not sure about the faster one.
Moreover, the slower machine has more CPUs and more RAM than the faster one.
Any idea where the difference might come from?
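In case it helps with diagnosis, here is a small snippet (a sketch, assuming the standard torch.__config__ helpers that ship with v1.2) that I can run on both machines to compare their build and threading configuration, since a different BLAS backend or OpenMP setup seems like a plausible culprit:

```python
import torch

# Dump version, thread count, and build configuration so the
# two machines' setups can be compared side by side.
print(torch.__version__)                  # PyTorch version
print(torch.get_num_threads())            # intra-op thread count in use
print(torch.__config__.show())            # compile flags, BLAS backend, etc.
print(torch.__config__.parallel_info())   # OpenMP/MKL threading details
```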
Thank you for your help!