Is PyTorch supposed to be thread-safe?

Is invoking PyTorch functions thread-safe wrt Python threads?

I am trying to run model inference in a multi-threaded server and I am getting segmentation faults from the MKLDNN convolution kernel. I expected PyTorch calls to drop the GIL and be thread-safe, similar to numpy, opencv, and other such libraries, but I was not able to find this clearly documented anywhere.


PyTorch's underlying C++ library is expected to be thread-safe (although a Tensor object is not thread-safe for multiple writers; you need to synchronize that yourself). So if you are getting multithreading-related segfaults and you have a repro, please file an issue with us.
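To illustrate the "multiple writers" caveat above, here is a minimal stdlib-only sketch of the kind of synchronization the reply is asking callers to do themselves. A plain Python list stands in for a shared Tensor so the sketch runs without PyTorch installed; the lock discipline is the same.

```python
import threading

# Stand-in for a shared Tensor: a mutable buffer written by many threads.
# (A plain list is used so the sketch runs without PyTorch installed.)
shared_buffer = [0] * 4
buffer_lock = threading.Lock()  # guards every write to shared_buffer

def accumulate(value):
    # Multiple writers must serialize their updates themselves;
    # the library does not do this for you.
    with buffer_lock:
        for i in range(len(shared_buffer)):
            shared_buffer[i] += value

threads = [threading.Thread(target=accumulate, args=(1,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_buffer)  # each slot incremented once per thread -> [8, 8, 8, 8]
```

Concurrent read-only access (e.g. several threads running inference through the same frozen weights) does not need this lock; it is only concurrent mutation of the same tensor that must be synchronized.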

Here is the issue with the script to reproduce it:

Why do you think so? Can you point to a source?
Is there any official documentation that states which parts are thread-safe and which are not?

I ran into a problem where, if each DataLoader has several workers and several threads each use their own DataLoader but all on the same Dataset, it sometimes ends in disaster.
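For reference, here is a stdlib-only sketch of the pattern described above and the usual workaround. The `StatefulDataset` class is a hypothetical stand-in (not a PyTorch API) for a Dataset with mutable internal state such as a cache or file handle, which is what makes sharing one instance across threads risky; the fix shown is simply giving each thread its own instance.

```python
import threading

class StatefulDataset:
    # Hypothetical stand-in for a Dataset with mutable internal state
    # (e.g. a cache or an open file handle).
    def __init__(self, size):
        self.size = size
        self.cache = {}  # mutated on every access -- unsafe to share unguarded

    def __getitem__(self, idx):
        if idx not in self.cache:
            self.cache[idx] = idx * 2
        return self.cache[idx]

def worker(dataset, out):
    # Loader loop: read every item from this thread's own dataset instance.
    out.extend(dataset[i] for i in range(dataset.size))

# Workaround for the pattern in the post: instead of several threads all
# reading the SAME dataset object, give each thread its own instance so
# no mutable state is shared between loader loops.
results_a, results_b = [], []
t1 = threading.Thread(target=worker, args=(StatefulDataset(100), results_a))
t2 = threading.Thread(target=worker, args=(StatefulDataset(100), results_b))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(results_a), len(results_b))  # 100 100
```

If the dataset must be shared (e.g. it wraps one large in-memory array), guarding its mutable state with a lock, as in the earlier sketch, is the alternative.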

Should I file a ticket?