PyTorch build from source stuck: repeated "MKL_THREADING = OMP" lines, then configure quits with no result

OS: Ubuntu 20.04
CUDA: 10.2
GPU: Tesla K10
Driver: NVIDIA 470.82.01
GCC: 8
Anaconda: 2021.11
CMake: 3.19.6

Before the build, I installed:
conda install -c numba numba
conda install astunparse numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing_extensions future six requests dataclasses

conda install -c pytorch magma-cuda102
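One thing I notice in the configure log below is a mixed toolchain: the CXX compiler is detected as Clang 10.0.0 while the C compiler is GNU 8.4.0. Before re-running I plan to pin both to one pair; this is just a sketch from my setup, assuming gcc-8/g++-8 are installed via apt:

```shell
# Pin C and C++ to the same toolchain before re-running the build.
# The log below shows CXX detected as Clang 10 but C as GCC 8.4.
# gcc-8 / g++-8 are assumptions from my setup (installed via apt).
export CC=gcc-8
export CXX=g++-8
echo "CC=$CC CXX=$CXX"
```

CMake caches the compiler choice, so this only takes effect after removing the `build` directory (or `python setup.py clean`).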

The configure stage hangs, printing the line below over and over, then quits with no result and no build:

Building wheel torch-1.11.0a0+git08d8f81
– Building version 1.11.0a0+git08d8f81
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/home/dig/pytorch/torch -DCMAKE_PREFIX_PATH=/home/dig/anaconda3/lib/python3.9/site-packages -DNUMPY_INCLUDE_DIR=/home/dig/anaconda3/lib/python3.9/site-packages/numpy/core/include -DPYTHON_EXECUTABLE=/home/dig/anaconda3/bin/python -DPYTHON_INCLUDE_DIR=/home/dig/anaconda3/include/python3.9 -DPYTHON_LIBRARY=/home/dig/anaconda3/lib/libpython3.9.a -DTORCH_BUILD_VERSION=1.11.0a0+git08d8f81 -DUSE_CUDA=1 -DUSE_CUDNN=1 -DUSE_MKLDNN=1 -DUSE_NUMPY=True /home/dig/pytorch
– The CXX compiler identification is Clang 10.0.0
– The C compiler identification is GNU 8.4.0
– Detecting CXX compiler ABI info
– Detecting CXX compiler ABI info - done
– Check for working CXX compiler: /usr/bin/c++ - skipped
– Detecting CXX compile features
– Detecting CXX compile features - done
– Detecting C compiler ABI info
– Detecting C compiler ABI info - done
– Check for working C compiler: /usr/bin/cc - skipped
– Detecting C compile features
– Detecting C compile features - done
– Not forcing any particular BLAS to be found
– Could not find ccache. Consider installing ccache to speed up compilation.
– Performing Test COMPILER_WORKS
– Performing Test COMPILER_WORKS - Success
– Performing Test SUPPORT_GLIBCXX_USE_C99
– Performing Test SUPPORT_GLIBCXX_USE_C99 - Success
– Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED
– Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED - Success
– std::exception_ptr is supported.
– Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
– Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Failed
– Turning off deprecation warning due to glog.
– Performing Test C_HAS_AVX_1
– Performing Test C_HAS_AVX_1 - Failed
– Performing Test C_HAS_AVX_2
– Performing Test C_HAS_AVX_2 - Success
– Performing Test C_HAS_AVX2_1
– Performing Test C_HAS_AVX2_1 - Failed
– Performing Test C_HAS_AVX2_2
– Performing Test C_HAS_AVX2_2 - Success
– Performing Test C_HAS_AVX512_1
– Performing Test C_HAS_AVX512_1 - Failed
– Performing Test C_HAS_AVX512_2
– Performing Test C_HAS_AVX512_2 - Failed
– Performing Test C_HAS_AVX512_3
– Performing Test C_HAS_AVX512_3 - Failed
– Performing Test CXX_HAS_AVX_1
– Performing Test CXX_HAS_AVX_1 - Failed
– Performing Test CXX_HAS_AVX_2
– Performing Test CXX_HAS_AVX_2 - Success
– Performing Test CXX_HAS_AVX2_1
– Performing Test CXX_HAS_AVX2_1 - Failed
– Performing Test CXX_HAS_AVX2_2
– Performing Test CXX_HAS_AVX2_2 - Success
– Performing Test CXX_HAS_AVX512_1
– Performing Test CXX_HAS_AVX512_1 - Failed
– Performing Test CXX_HAS_AVX512_2
– Performing Test CXX_HAS_AVX512_2 - Failed
– Performing Test CXX_HAS_AVX512_3
– Performing Test CXX_HAS_AVX512_3 - Failed
– Current compiler supports avx2 extension. Will build perfkernels.
– Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS
– Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Success
– Current compiler supports avx512f extension. Will build fbgemm.
– Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
– Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Success
– Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
– Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Success
– Performing Test COMPILER_SUPPORTS_RDYNAMIC
– Performing Test COMPILER_SUPPORTS_RDYNAMIC - Success
– Found CUDA: /usr/local/cuda-10.2 (found version “10.2”)
– The CUDA compiler identification is NVIDIA 10.2.89
– Detecting CUDA compiler ABI info
– Detecting CUDA compiler ABI info - done
– Check for working CUDA compiler: /usr/local/cuda-10.2/bin/nvcc - skipped
– Detecting CUDA compile features
– Detecting CUDA compile features - done
– Caffe2: CUDA detected: 10.2
– Caffe2: CUDA nvcc is: /usr/local/cuda-10.2/bin/nvcc
– Caffe2: CUDA toolkit directory: /usr/local/cuda-10.2
– Caffe2: Header version is: 10.2
– Found CUDNN: /usr/lib/cuda/lib64/libcudnn.so
– Found cuDNN: v8.3.0 (include: /usr/local/cuda-10.2/include, library: /usr/lib/cuda/lib64/libcudnn.so)
– /usr/local/cuda-10.2/lib64/libnvrtc.so shorthash is 08c4863f
– Autodetected CUDA architecture(s): 3.0 3.0 3.0
– Added CUDA NVCC flags for: -gencode;arch=compute_30,code=sm_30
– Building using own protobuf under third_party per request.
– Use custom protobuf build.

– 3.13.0.0
– Looking for pthread.h
– Looking for pthread.h - found
– Performing Test CMAKE_HAVE_LIBC_PTHREAD
– Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
– Check if compiler accepts -pthread
– Check if compiler accepts -pthread - yes
– Found Threads: TRUE
– Performing Test protobuf_HAVE_BUILTIN_ATOMICS
– Performing Test protobuf_HAVE_BUILTIN_ATOMICS - Success
– Caffe2 protobuf include directory: $<BUILD_INTERFACE:/home/dig/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
– Trying to find preferred BLAS backend of choice: MKL
– MKL_THREADING = OMP
– Looking for sys/types.h
– Looking for sys/types.h - found
– Looking for stdint.h
– Looking for stdint.h - found
– Looking for stddef.h
– Looking for stddef.h - found
– Check size of void*
– Check size of void* - done
– MKL_THREADING = OMP
– MKL_THREADING = OMP
– MKL_THREADING = OMP
[… the line "– MKL_THREADING = OMP" repeats roughly 190 more times, then the process exits with no further output …]

I tried changing the Anaconda version from 2021.4 up to 2021.11 - no difference.

I also tried building PyTorch 1.10 - same problem.
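To isolate whether the hang really is in the MKL detection step, I am considering a rebuild with the MKL-dependent paths disabled via the environment variables that PyTorch's setup.py reads (`BLAS` and `USE_MKLDNN`). This is a diagnostic sketch, not a fix:

```shell
# Diagnostic sketch: disable the MKL-dependent code paths so CMake skips
# the MKL probing that prints "MKL_THREADING = OMP".
# BLAS and USE_MKLDNN are environment variables read by PyTorch's setup.py.
export BLAS=Eigen      # fall back to the bundled Eigen BLAS instead of MKL
export USE_MKLDNN=0    # oneDNN (MKL-DNN) also pulls in the MKL checks
echo "BLAS=$BLAS USE_MKLDNN=$USE_MKLDNN"
# then, from the pytorch checkout:
#   python setup.py clean
#   python setup.py develop
```

If configure then completes, the problem is narrowed down to MKL detection rather than CUDA or the compilers.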

Has anyone found a solution for this error?