[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)

Is this warning something I should worry about or try to resolve?
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)

I am not sure which line of my code is causing it (a minimal sketch of the kind of setup that seems to trigger it follows the log).

Full log:

(fashcomp) [jalal@goku fashion-compatibility]$ python main.py --name test_baseline --learned --l2_embed --datadir ../../../data/fashion/ --epochs 1
/scratch3/venv/fashcomp/lib/python3.8/site-packages/torchvision/transforms/transforms.py:310: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  warnings.warn("The use of the transforms.Scale transform is deprecated, " +
  + Number of params: 3191808
<class 'torch.utils.data.dataloader.DataLoader'>
/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Train Epoch: 1 [0/686851]	Loss: 0.2760 (0.2760) 	Acc: 61.72% (61.72%) 	Emb_Norm: 0.76 (0.76)
Train Epoch: 1 [64000/686851]	Loss: 0.2212 (0.2369) 	Acc: 70.31% (64.27%) 	Emb_Norm: 0.70 (0.69)
Train Epoch: 1 [128000/686851]	Loss: 0.2223 (0.2285) 	Acc: 64.84% (65.69%) 	Emb_Norm: 0.73 (0.70)
Train Epoch: 1 [192000/686851]	Loss: 0.2250 (0.2239) 	Acc: 65.23% (66.52%) 	Emb_Norm: 0.75 (0.71)
Train Epoch: 1 [256000/686851]	Loss: 0.2320 (0.2209) 	Acc: 64.45% (67.01%) 	Emb_Norm: 0.76 (0.72)
Train Epoch: 1 [320000/686851]	Loss: 0.2070 (0.2191) 	Acc: 69.53% (67.30%) 	Emb_Norm: 0.77 (0.73)
Train Epoch: 1 [384000/686851]	Loss: 0.1940 (0.2175) 	Acc: 70.70% (67.58%) 	Emb_Norm: 0.77 (0.74)
Train Epoch: 1 [448000/686851]	Loss: 0.2039 (0.2160) 	Acc: 71.48% (67.83%) 	Emb_Norm: 0.75 (0.74)
Train Epoch: 1 [512000/686851]	Loss: 0.2188 (0.2148) 	Acc: 66.80% (68.05%) 	Emb_Norm: 0.76 (0.74)
Train Epoch: 1 [576000/686851]	Loss: 0.1898 (0.2135) 	Acc: 70.70% (68.27%) 	Emb_Norm: 0.75 (0.74)
Train Epoch: 1 [640000/686851]	Loss: 0.2126 (0.2126) 	Acc: 66.80% (68.43%) 	Emb_Norm: 0.74 (0.75)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)

valid set: Compat AUC: 0.87 FITB: 56.8

[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)

test set: Compat AUC: 0.87 FITB: 56.7
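
For reference, here is a minimal sketch of the pattern that, as far as I understand, produces this warning: on Linux, DataLoader workers are created with fork by default, and each forked child inherits the parent's already-initialized intra-op thread pool, which pthreadpool then reports as leaked. The dataset and sizes below are hypothetical stand-ins, not the actual loader from main.py.

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Hypothetical stand-in for the real dataset used by main.py.
    dataset = TensorDataset(torch.randn(256, 3, 112, 112))

    # Any CPU tensor op in the parent process initializes the intra-op thread pool.
    _ = torch.randn(4, 4) @ torch.randn(4, 4)

    # With num_workers > 0 (fork is the default worker start method on Linux),
    # each forked worker inherits that thread pool, and on affected builds
    # (e.g. PyTorch 1.9.0) the pthreadpool warning appears to be printed once
    # per forked worker.
    loader = DataLoader(dataset, batch_size=32, num_workers=4)
    for (batch,) in loader:
        pass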

Previously, when I was using PyTorch 1.6 + CUDA 10.2 (on an RTX 2070), I did not see this behavior. Now that I am using PyTorch 1.9 + CUDA 11.1 (on an RTX 3090), I do.

This thread suggests that it is expected behavior, and it looks like the latest nightly build solves the problem (reference).
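
In the meantime, if the repeated messages are getting in the way, one general way to sidestep them (an assumption on my part, not something from the linked thread) is to start the DataLoader workers with the spawn context instead of the default fork, so no child ever inherits the parent's thread pool:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Hypothetical dataset; the relevant part is the multiprocessing_context argument.
    dataset = TensorDataset(torch.randn(256, 3, 112, 112))

    if __name__ == "__main__":
        # Spawned workers start as fresh interpreters rather than forking the
        # parent, so the "Leaking Caffe2 thread-pool after fork" message should
        # not appear. Trade-off: spawn re-imports the main module and makes
        # worker startup noticeably slower than fork.
        loader = DataLoader(dataset, batch_size=32, num_workers=4,
                            multiprocessing_context="spawn")
        for (batch,) in loader:
            pass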


The issue is already fixed as described here. You could either update to the nightly binary, wait for the 1.9.1 release (which is already at RC4), or build from source.
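
If you are not sure which build you ended up with after upgrading, a quick check (nothing beyond torch.__version__ and torch.version.cuda is assumed here):

    import torch

    # 1.9.0 builds still emit the warning; 1.9.1 and recent nightlies
    # (version strings like "1.10.0.dev...") should include the fix.
    print(torch.__version__, torch.version.cuda)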
