How many threads will PyTorch start for a convolution layer?

Hi, I’d like to know how many threads PyTorch will start for a convolution layer (or a single convolution filter).
thanks!

Are you looking for the parallelism functions? They have C++ equivalents in ATen (ATen/Parallel.h).
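For the CPU side, the thread count can be queried and set from Python; a minimal sketch (assuming a standard PyTorch install):

```python
import torch

# Query how many threads ATen uses for intra-op CPU parallelism
print(torch.get_num_threads())

# Cap intra-op parallelism at 4 threads for subsequent CPU ops
torch.set_num_threads(4)
assert torch.get_num_threads() == 4
```

The C++ equivalents (`at::get_num_threads`, `at::set_num_threads`) live in ATen/Parallel.h.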

Best regards

Thomas

Thanks for your reply. The get_num_threads() function returns the number of threads used for parallelizing CPU operations. However, I want to know how many threads are used for a convolution layer on the GPU. Is there any method?
Best regards

No, on the GPU it would depend on many factors: the implementation (in particular the backend), the hardware specifics, etc. Also, it’s not quite clear what you want: the number of threads in a block, or the number of threads per block times the number of blocks launched in parallel?
Threads per block and number of blocks are roughly deterministic (barring CuDNN benchmark=True and similar things), in the sense that you get the same launch configuration if you run the same operation several times with tensors of the same shapes and strides on the same device. So you could fire up the profiling tool of your choice (e.g. nvprof) and let it dump the kernel invocations along with their thread counts.
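A minimal workload you could profile this way might look like the following (the script name in the comment is hypothetical; substitute your own):

```python
import torch
import torch.nn as nn

# Minimal convolution workload to profile. Run it under a GPU profiler to
# see the actual kernel launch configurations (grid and block dimensions),
# e.g.:
#   nvprof --print-gpu-trace python conv_profile.py
device = "cuda" if torch.cuda.is_available() else "cpu"

# A small Conv2d: 3 input channels, 16 output channels, 3x3 kernel,
# padding=1 so the spatial size is preserved
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1).to(device)
x = torch.randn(1, 3, 224, 224, device=device)
y = conv(x)
print(tuple(y.shape))  # (1, 16, 224, 224)
```

The per-kernel grid and block dimensions reported by the profiler are what actually determine the GPU thread count, and they can differ between backends (e.g. native vs. cuDNN kernels) even for the same layer.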

Best regards

Thomas