I just ran the MNIST code (from the PyTorch examples repo) with Python multithreading. In one thread the MNIST training runs for 1 epoch, and in the other a dummy function runs. A significant slowdown is observed in the MNIST training when it runs in parallel with this dummy code.
If line numbers 104 & 105 are removed (i.e. only the one MNIST thread is running), the MNIST training is fast again.
- This is just a trial experiment I conducted to highlight an issue I was facing in another project of mine.
- I had similar code in a previous project in plain Python (no PyTorch) and it caused no issue there. That project also had 3 normal threads plus 1 dummy thread, with no slowdown. With PyTorch, the same dummy function causes an immense slowdown.
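For anyone wanting to reproduce the pattern without the full MNIST script: below is a minimal, PyTorch-free sketch of the same setup under some assumptions of mine, namely that the dummy thread is a pure-Python busy loop and that the "training" thread also spends time executing Python bytecode. In standard CPython, a busy loop holds the GIL almost continuously, so any Python-level work in the other thread gets starved. This is one plausible mechanism for the slowdown, not a confirmed diagnosis of the original code.

```python
import threading
import time

stop = threading.Event()

def dummy():
    # Pure-Python busy loop: competes for the GIL on every bytecode,
    # starving other Python-level threads in standard CPython.
    x = 0
    while not stop.is_set():
        x += 1

def train_like_work(n_iters=50):
    # Stand-in for the Python-side overhead of a training loop
    # (batch iteration, loss bookkeeping, etc.). Hypothetical workload,
    # not the actual MNIST example code.
    total = 0.0
    for _ in range(n_iters):
        total += sum(i * i for i in range(100_000))
    return total

# Baseline: training-like work with no competing thread.
t0 = time.perf_counter()
train_like_work()
alone = time.perf_counter() - t0

# Same work while a dummy thread contends for the GIL.
d = threading.Thread(target=dummy, daemon=True)
d.start()
t0 = time.perf_counter()
train_like_work()
contended = time.perf_counter() - t0
stop.set()
d.join()

print(f"alone: {alone:.3f}s, with dummy thread: {contended:.3f}s")
```

If this sketch shows a similar slowdown on your machine, the difference from the earlier non-PyTorch project may simply be how much of each loop runs as Python bytecode versus inside C extensions that release the GIL; heavy tensor kernels do release it, but the Python glue around them does not.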