Threading issue in PyTorch

I just ran the MNIST code (from the PyTorch examples) with Python multithreading. One thread runs the MNIST training for 1 epoch, and another thread runs some dummy code. The MNIST training slows down significantly when it runs in parallel with this dummy code.

If lines 104 & 105 are removed (i.e. only the MNIST thread is running), the training is fast again.
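For context, the setup is roughly the following. This is a minimal, hypothetical sketch, not the actual script: the real code is the MNIST example with two extra lines that spawn the dummy thread, and dummy_work here just stands in for whatever the dummy code does.

import threading
import time

import torch

stop_flag = threading.Event()

def dummy_work():
    # Hypothetical stand-in for the "dummy" thread: a CPU-bound busy loop.
    x = 0
    while not stop_flag.is_set():
        x += 1

def train_one_epoch():
    # Stand-in for the MNIST training loop from the PyTorch examples repo.
    model = torch.nn.Linear(784, 10)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(1000):
        data = torch.randn(64, 784)
        target = torch.randint(0, 10, (64,))
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(data), target)
        loss.backward()
        opt.step()

t1 = threading.Thread(target=train_one_epoch)
t2 = threading.Thread(target=dummy_work)  # analogous to the two lines whose removal restores full speed

start = time.time()
t1.start()
t2.start()
t1.join()
stop_flag.set()
t2.join()
print(f"elapsed: {time.time() - start:.1f}s")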

  • This is just a small experiment I put together to reproduce an issue I was hitting in another project of mine.
  • I had similar code in an earlier project in plain Python, with no PyTorch, and it caused no issue. There I also had 3 normal threads plus 1 dummy thread, and there was no slowdown. With PyTorch, the same dummy function causes an immense slowdown.

If you are using multi-threading, set export OMP_NUM_THREADS=1 in your shell before starting Python.
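For example (assuming your training script is called mnist.py; the name is just illustrative):

export OMP_NUM_THREADS=1
python mnist.py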


Alternatively, you can put this at the very top of your script, so it runs before torch is imported and OpenMP picks it up when it initializes:

import os
os.environ["OMP_NUM_THREADS"] = "1"  # set before importing torch
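If you would rather not rely on environment variables, limiting PyTorch's intra-op thread count through its own API should have a similar effect; a minimal sketch:

import torch
torch.set_num_threads(1)  # keep intra-op parallelism from oversubscribing the CPU alongside your own threads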

Hi, the Caffe2 (PyTorch) forward pass in C++ is much slower when multi-threaded than with a single thread or a single instance. I have tested this on both CPU and GPU, with both PyTorch and Caffe2.
Could you give me some tips for solving this problem?

Thank you.