Hello,
I am trying to install PyTorch with CUDA by following the build-from-source method.
I have CUDA 11.2, NVTX 11.2, cuDNN 8.1.1.33, the NVIDIA CUDA Visual Studio Integration 11.2, and Visual Studio Community Edition 2019 16.6.30204.135. My GPU (RTX 2070) has compute capability 7.5.
I am building PyTorch from a conda environment and I installed all the prerequisites mentioned in the guide.
Before running setup.py install --cmake, I tried setting the following environment variables:
MAGMA_HOME=F:\pytorch-source\pytorch\.jenkins\pytorch\win-test-helpers\installation-helpers\magma
LIB=F:\pytorch-source\pytorch\.jenkins\pytorch\win-test-helpers\installation-helpers\mkl\lib
CMAKE_GENERATOR=Ninja
TORCH_CUDA_ARCH_LIST=7.5
CMAKE_INCLUDE_PATH=F:\pytorch-source\pytorch\.jenkins\pytorch\win-test-helpers\installation-helpers\mkl\include
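In case the exact invocation matters, this is roughly equivalent to what I did, shown as a small Python sketch rather than the actual conda-prompt set commands (the subprocess call just launches the same setup.py install --cmake mentioned above):

import os
import subprocess

# Equivalent of what I set in the conda prompt before starting the build
# (paths reflect my local checkout; the helper folders live under .jenkins).
os.environ["MAGMA_HOME"] = r"F:\pytorch-source\pytorch\.jenkins\pytorch\win-test-helpers\installation-helpers\magma"
os.environ["LIB"] = r"F:\pytorch-source\pytorch\.jenkins\pytorch\win-test-helpers\installation-helpers\mkl\lib"
os.environ["CMAKE_GENERATOR"] = "Ninja"
os.environ["TORCH_CUDA_ARCH_LIST"] = "7.5"
os.environ["CMAKE_INCLUDE_PATH"] = r"F:\pytorch-source\pytorch\.jenkins\pytorch\win-test-helpers\installation-helpers\mkl\include"

# The child process inherits the variables set above.
subprocess.run(["python", "setup.py", "install", "--cmake"], check=True)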
The build and installation finish successfully; however, when I try to actually create a tensor on the GPU, I get the following behavior:
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.current_device()
0
>>> torch.cuda.device(0)
<torch.cuda.device object at 0x000002731D947640>
>>> torch.cuda.device_count()
1
>>> torch.cuda.get_device_name(0)
'GeForce RTX 2070'
>>> torch.randn(1, device='cuda')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: CUDA error: no kernel image is available for execution on the device
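If it helps, this is the kind of check I can run from the same environment to see which GPU architectures the freshly built binary actually contains kernels for; my assumption is that the error above means sm_75 ended up missing from the compiled list:

import torch

# Version and CUDA toolkit the build reports.
print(torch.__version__)
print(torch.version.cuda)

# Compute capability of the physical GPU; should be (7, 5) for the RTX 2070.
print(torch.cuda.get_device_capability(0))

# Architectures the binary was actually compiled for, e.g. ['sm_75'].
print(torch.cuda.get_arch_list())

# Full build configuration, including the gencode flags that were used.
print(torch.__config__.show())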
Can someone please guide me further on troubleshooting this? It seems to me that I may be missing some configuration parameters when building. Also, I am struggling to find where the build logs are written, so that I can check in more detail how the build process went.
Below is the beginning of the build log, which should show the environment details and configuration options used for the build:
-- Building version 1.9.0a0+git01b1557
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_GENERATOR=Ninja -DCMAKE_INCLUDE_PATH=F:\TAID-Master\MLAV\pytorch-source\pytorch\.jenkins\pytorch\win-test-helpers\installation-helpers\mkl\include -DCMAKE_INSTALL_PREFIX=F:\TAID-Master\MLAV\pytorch-source\pytorch\torch -DCMAKE_PREFIX_PATH=C:\ProgramData\Miniconda3\envs\pytorch-source\Lib\site-packages -DCUDNN_LIBRARY=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\lib\x64 -DNUMPY_INCLUDE_DIR=C:\ProgramData\Miniconda3\envs\pytorch-source\lib\site-packages\numpy\core\include -DPYTHON_EXECUTABLE=C:\ProgramData\Miniconda3\envs\pytorch-source\python.exe -DPYTHON_INCLUDE_DIR=C:\ProgramData\Miniconda3\envs\pytorch-source\include -DPYTHON_LIBRARY=C:\ProgramData\Miniconda3\envs\pytorch-source/libs/python38.lib -DTORCH_BUILD_VERSION=1.9.0a0+git01b1557 -DUSE_NUMPY=True F:\TAID-Master\MLAV\pytorch-source\pytorch
-- The CXX compiler identification is MSVC 19.26.28806.0
-- The C compiler identification is MSVC 19.26.28806.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.26.28801/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.26.28801/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Not forcing any particular BLAS to be found
CMake Warning at CMakeLists.txt:305 (message):
  TensorPipe cannot be used on Windows. Set it to OFF
-- Performing Test COMPILER_WORKS
-- Performing Test COMPILER_WORKS - Success
-- Performing Test SUPPORT_GLIBCXX_USE_C99
-- Performing Test SUPPORT_GLIBCXX_USE_C99 - Success
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED - Success
-- std::exception_ptr is supported.
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Failed
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS - Success
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Success
-- Current compiler supports avx512f extension. Will build fbgemm.
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed
-- Performing Test COMPILER_SUPPORTS_RDYNAMIC
-- Performing Test COMPILER_SUPPORTS_RDYNAMIC - Failed
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
-- 3.11.4.0
-- Looking for pthread.h
-- Looking for pthread.h - not found
-- Found Threads: TRUE
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:F:/TAID-Master/MLAV/pytorch-source/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- Trying to find preferred BLAS backend of choice: MKL
-- MKL_THREADING = OMP
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Looking for cblas_sgemm
-- Looking for cblas_sgemm - found
-- MKL libraries: F:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/lib/mkl_intel_lp64.lib;F:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/lib/mkl_intel_thread.lib;F:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/lib/mkl_core.lib;F:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/lib/libiomp5md.lib
-- MKL include directory: F:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/include
-- MKL OpenMP type: Intel
-- MKL OpenMP library: F:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/lib/libiomp5md.lib
-- The ASM compiler identification is MSVC
-- Found assembler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.26.28801/bin/Hostx64/x64/cl.exe
CMake Deprecation Warning at third_party/googletest/CMakeLists.txt:1 (cmake_minimum_required):
  Compatibility with CMake < 2.8.12 will be removed from a future version of
  CMake.
** AsmJit Summary **
ASMJIT_DIR=F:/TAID-Master/MLAV/pytorch-source/pytorch/third_party/fbgemm/third_party/asmjit
ASMJIT_TEST=FALSE
ASMJIT_TARGET_TYPE=SHARED
ASMJIT_DEPS=
ASMJIT_LIBS=asmjit
ASMJIT_CFLAGS=
ASMJIT_PRIVATE_CFLAGS=-MP;-GF;-Zc:inline;-Zc:strictStrings;-Zc:threadSafeInit-;-W4
ASMJIT_PRIVATE_CFLAGS_DBG=-GS
ASMJIT_PRIVATE_CFLAGS_REL=-GS-;-O2;-Oi
-- Using third party subdirectory Eigen.
-- Found PythonInterp: C:/ProgramData/Miniconda3/envs/pytorch-source/python.exe (found suitable version "3.8.8", minimum required is "3.0")
-- Found PythonLibs: C:/ProgramData/Miniconda3/envs/pytorch-source/libs/python38.lib (found suitable version "3.8.8", minimum required is "3.0")
-- Could NOT find pybind11 (missing: pybind11_DIR)
-- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR)
-- Using third_party/pybind11.
-- pybind11 include dirs: F:/TAID-Master/MLAV/pytorch-source/pytorch/cmake/../third_party/pybind11/include
-- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS)
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS)
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND)
    Reason given by package: MPI component 'Fortran' was requested, but language Fortran is not enabled.
CMake Warning at cmake/Dependencies.cmake:1045 (message):
  Not compiling with MPI. Suppress this warning with -DUSE_MPI=OFF
Call Stack (most recent call first):
  CMakeLists.txt:604 (include)
-- Adding OpenMP CXX_FLAGS: -openmp:experimental -IF:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/include
-- Will link against OpenMP libraries: F:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/lib/libiomp5md.lib
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2 (found version "11.2")
-- Caffe2: CUDA detected: 11.2
-- Caffe2: CUDA nvcc is: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/bin/nvcc.exe
-- Caffe2: CUDA toolkit directory: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2
-- Caffe2: Header version is: 11.2
-- Found CUDNN: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/lib/x64/cudnn.lib
-- Found cuDNN: v8.1.1 (include: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/include, library: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/lib/x64/cudnn.lib)
-- C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/lib/x64/nvrtc.lib shorthash is aa1d5a72
CMake Warning at cmake/public/utils.cmake:365 (message):
  In the future we will require one to explicitly pass TORCH_CUDA_ARCH_LIST
  to cmake instead of implicitly setting it as an env variable. This will
  become a FATAL_ERROR in future version of pytorch.
Call Stack (most recent call first):
  cmake/public/cuda.cmake:483 (torch_cuda_get_nvcc_gencode_flag)
  cmake/Dependencies.cmake:1150 (include)
  CMakeLists.txt:604 (include)
-- Added CUDA NVCC flags for: -gencode;arch=compute_75,code=sm_75
-- Found CUB: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/include
CMake Warning (dev) at third_party/gloo/CMakeLists.txt:21 (option):
  Policy CMP0077 is not set: option() honors normal variables. Run "cmake
  --help-policy CMP0077" for policy details. Use the cmake_policy command to
  set the policy and suppress this warning.
  For compatibility with older versions of CMake, option is clearing the
  normal variable 'BUILD_BENCHMARK'.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at third_party/gloo/CMakeLists.txt:34 (option):
  Policy CMP0077 is not set: option() honors normal variables. Run "cmake
  --help-policy CMP0077" for policy details. Use the cmake_policy command to
  set the policy and suppress this warning.
-- MSVC detected
-- Set USE_REDIS OFF
-- Set USE_IBVERBS OFF
-- Set USE_NCCL OFF
-- Set USE_RCCL OFF
-- Set USE_LIBUV ON
-- Only USE_LIBUV is supported on Windows
-- Gloo build as SHARED library
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2 (found suitable version "11.2", minimum required is "7.0")
-- CUDA detected: 11.2
CMake Warning at cmake/Dependencies.cmake:1394 (message):
  Metal is only used in ios builds.
Call Stack (most recent call first):
  CMakeLists.txt:604 (include)
Generated: F:/TAID-Master/MLAV/pytorch-source/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
Generated: F:/TAID-Master/MLAV/pytorch-source/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto
Generated: F:/TAID-Master/MLAV/pytorch-source/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto
-- ******** Summary ********
-- CMake version : 3.19.6
-- CMake command : C:/ProgramData/Miniconda3/envs/pytorch-source/Library/bin/cmake.exe
-- System : Windows
-- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.26.28801/bin/Hostx64/x64/cl.exe
-- C++ compiler version : 19.26.28806.0
-- CXX flags : /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IF:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/include
-- Build type : Release
-- Compile definitions : WIN32_LEAN_AND_MEAN;TH_BLAS_MKL;_OPENMP_NOFORCE_MANIFEST;ONNX_ML=1;ONNXIFI_ENABLE_EXT=1
-- CMAKE_PREFIX_PATH : C:\ProgramData\Miniconda3\envs\pytorch-source\Lib\site-packages;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2
-- CMAKE_INSTALL_PREFIX : F:/TAID-Master/MLAV/pytorch-source/pytorch/torch
-- CMAKE_MODULE_PATH : F:/TAID-Master/MLAV/pytorch-source/pytorch/cmake/Modules;F:/TAID-Master/MLAV/pytorch-source/pytorch/cmake/public/../Modules_CUDA_fix
-- ONNX version : 1.8.0
-- ONNX NAMESPACE : onnx_torch
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
-- ONNXIFI_ENABLE_EXT : OFF
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
-- ******** Summary ********
-- CMake version : 3.19.6
-- CMake command : C:/ProgramData/Miniconda3/envs/pytorch-source/Library/bin/cmake.exe
-- System : Windows
-- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.26.28801/bin/Hostx64/x64/cl.exe
-- C++ compiler version : 19.26.28806.0
-- CXX flags : /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IF:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/include
-- Build type : Release
-- Compile definitions : WIN32_LEAN_AND_MEAN;TH_BLAS_MKL;_OPENMP_NOFORCE_MANIFEST;ONNX_ML=1;ONNXIFI_ENABLE_EXT=1
-- CMAKE_PREFIX_PATH : C:\ProgramData\Miniconda3\envs\pytorch-source\Lib\site-packages;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2
-- CMAKE_INSTALL_PREFIX : F:/TAID-Master/MLAV/pytorch-source/pytorch/torch
-- CMAKE_MODULE_PATH : F:/TAID-Master/MLAV/pytorch-source/pytorch/cmake/Modules;F:/TAID-Master/MLAV/pytorch-source/pytorch/cmake/public/../Modules_CUDA_fix
-- ONNX version : 1.4.1
-- ONNX NAMESPACE : onnx_torch
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
-- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
-- Adding -DNDEBUG to compile flags
-- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2
-- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - True
-- Compiling with MAGMA support
-- MAGMA INCLUDE DIRECTORIES: F:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/magma/include
-- MAGMA LIBRARIES: F:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/magma/lib/magma.lib
-- MAGMA V2 check: 1
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Looking for cpuid.h
-- Looking for cpuid.h - not found
-- Performing Test NO_GCC_EBX_FPIC_BUG
-- Performing Test NO_GCC_EBX_FPIC_BUG - Failed
-- Performing Test C_HAS_AVX_1
-- Performing Test C_HAS_AVX_1 - Success
-- Performing Test C_HAS_AVX2_1
-- Performing Test C_HAS_AVX2_1 - Success
-- Performing Test CXX_HAS_AVX_1
-- Performing Test CXX_HAS_AVX_1 - Success
-- Performing Test CXX_HAS_AVX2_1
-- Performing Test CXX_HAS_AVX2_1 - Success
-- AVX compiler support found
-- AVX2 compiler support found
-- Performing Test BLAS_F2C_DOUBLE_WORKS
-- Performing Test BLAS_F2C_DOUBLE_WORKS - Failed
-- Performing Test BLAS_F2C_FLOAT_WORKS
-- Performing Test BLAS_F2C_FLOAT_WORKS - Success
-- Performing Test BLAS_USE_CBLAS_DOT
-- Performing Test BLAS_USE_CBLAS_DOT - Success
-- Found a library with BLAS API (mkl). Full path: (F:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/lib/mkl_intel_lp64.lib;F:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/lib/mkl_intel_thread.lib;F:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/lib/mkl_core.lib;F:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/lib/libiomp5md.lib)
-- Found a library with LAPACK API (mkl).
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
-- MKLDNN_CPU_RUNTIME = OMP
CMake Deprecation Warning at third_party/ideep/mkl-dnn/CMakeLists.txt:17 (cmake_minimum_required):
  Compatibility with CMake < 2.8.12 will be removed from a future version of
  CMake.
  Update the VERSION argument value or use a ... suffix to tell
  CMake that the project does not need compatibility with older versions.
-- Intel MKL-DNN compat: set DNNL_ENABLE_CONCURRENT_EXEC to MKLDNN_ENABLE_CONCURRENT_EXEC with value ON
-- Intel MKL-DNN compat: set DNNL_BUILD_EXAMPLES to MKLDNN_BUILD_EXAMPLES with value FALSE
-- Intel MKL-DNN compat: set DNNL_BUILD_TESTS to MKLDNN_BUILD_TESTS with value FALSE
-- Intel MKL-DNN compat: set DNNL_LIBRARY_TYPE to MKLDNN_LIBRARY_TYPE with value STATIC
-- Intel MKL-DNN compat: set DNNL_ARCH_OPT_FLAGS to MKLDNN_ARCH_OPT_FLAGS with value ``
-- Intel MKL-DNN compat: set DNNL_CPU_RUNTIME to MKLDNN_CPU_RUNTIME with value OMP
-- Found OpenMP_CXX: -openmp:experimental -IF:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/include
-- GPU support is disabled
-- Primitive cache is enabled
-- Found MKL-DNN: TRUE
-- Performing Test C_HAS_THREAD
-- Performing Test C_HAS_THREAD - Success
-- Version: 7.0.3
-- Build type: Release
-- CXX_STANDARD: 14
-- Performing Test has_std_14_flag
-- Performing Test has_std_14_flag - Failed
-- Performing Test has_std_1y_flag
-- Performing Test has_std_1y_flag - Failed
-- Performing Test SUPPORTS_USER_DEFINED_LITERALS
-- Performing Test SUPPORTS_USER_DEFINED_LITERALS - Success
-- Performing Test FMT_HAS_VARIANT
-- Performing Test FMT_HAS_VARIANT - Success
-- Required features: cxx_variadic_templates
-- Looking for _strtod_l
-- Looking for _strtod_l - found
-- Not using libkineto in a Windows build.
-- CUDA build detected, configuring Kineto with CUPTI support.
-- Looking for backtrace
-- Looking for backtrace - not found
-- Could NOT find Backtrace (missing: Backtrace_LIBRARY Backtrace_INCLUDE_DIR)
-- don't use NUMA
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed
-- Using ATen parallel backend: OMP
AT_INSTALL_INCLUDE_DIR include/ATen/core
core header install: F:/TAID-Master/MLAV/pytorch-source/pytorch/build/aten/src/ATen/core/TensorBody.h
-- NCCL operators skipped due to no CUDA support
-- Excluding FakeLowP operators
-- Including IDEEP operators
-- Excluding image processing operators due to no opencv
-- Excluding video processing operators due to no opencv
-- MPI operators skipped due to no MPI support
-- Include Observer library
--
-- ******** Summary ********
-- General:
-- CMake version : 3.19.6
-- CMake command : C:/ProgramData/Miniconda3/envs/pytorch-source/Library/bin/cmake.exe
-- System : Windows
-- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.26.28801/bin/Hostx64/x64/cl.exe
-- C++ compiler id : MSVC
-- C++ compiler version : 19.26.28806.0
-- Using ccache if found : OFF
-- CXX flags : /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IF:/TAID-Master/MLAV/pytorch-source/pytorch/.jenkins/pytorch/win-test-helpers/installation-helpers/mkl/include -DNDEBUG -DUSE_FBGEMM -DUSE_XNNPACK
-- Build type : Release
-- Compile definitions : WIN32_LEAN_AND_MEAN;TH_BLAS_MKL;_OPENMP_NOFORCE_MANIFEST;ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;MAGMA_V2;IDEEP_USE_MKL;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
-- CMAKE_PREFIX_PATH : C:\ProgramData\Miniconda3\envs\pytorch-source\Lib\site-packages;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2
-- CMAKE_INSTALL_PREFIX : F:/TAID-Master/MLAV/pytorch-source/pytorch/torch
-- TORCH_VERSION : 1.9.0
-- CAFFE2_VERSION : 1.9.0
-- BUILD_CAFFE2 : ON
-- BUILD_CAFFE2_OPS : ON
-- BUILD_CAFFE2_MOBILE : OFF
-- BUILD_STATIC_RUNTIME_BENCHMARK: OFF
-- BUILD_TENSOREXPR_BENCHMARK: OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : True
-- Python version : 3.8.8
-- Python executable : C:/ProgramData/Miniconda3/envs/pytorch-source/python.exe
-- Pythonlibs version : 3.8.8
-- Python library : C:/ProgramData/Miniconda3/envs/pytorch-source/libs/python38.lib
-- Python includes : C:/ProgramData/Miniconda3/envs/pytorch-source/include
-- Python site-packages: Lib/site-packages
-- BUILD_SHARED_LIBS : ON
-- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF
-- BUILD_TEST : True
-- BUILD_JNI : OFF
-- BUILD_MOBILE_AUTOGRAD : OFF
-- BUILD_LITE_INTERPRETER: OFF
-- INTERN_BUILD_MOBILE :
-- USE_BLAS : 1
-- BLAS : mkl
-- USE_LAPACK : 1
-- LAPACK : mkl
-- USE_ASAN : OFF
-- USE_CPP_CODE_COVERAGE : OFF
-- USE_CUDA : ON
-- Split CUDA : OFF
-- CUDA static link : OFF
-- USE_CUDNN : ON
-- CUDA version : 11.2
-- cuDNN version : 8.1.1
-- CUDA root directory : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2
-- CUDA library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/lib/x64/cuda.lib
-- cudart library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/lib/x64/cudart_static.lib
-- cublas library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/lib/x64/cublas.lib
-- cufft library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/lib/x64/cufft.lib
-- curand library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/lib/x64/curand.lib
-- cuDNN library : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/lib/x64/cudnn.lib
-- nvrtc : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/lib/x64/nvrtc.lib
-- CUDA include path : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/include
-- NVCC executable : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2/bin/nvcc.exe
-- NVCC flags : -Xcompiler;/w;-w;-Xfatbin;-compress-all;-DONNX_NAMESPACE=onnx_torch;--use-local-env;-gencode;arch=compute_75,code=sm_75;-Xcudafe;--diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl;--Werror;cross-execution-space-call;--no-host-device-move-forward;-Xcompiler;-MD$<$<CONFIG:Debug>:d>;--expt-relaxed-constexpr;--expt-extended-lambda;-Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522;-Wno-deprecated-gpu-targets;--expt-extended-lambda;-DCUDA_HAS_FP16=1;-D__CUDA_NO_HALF_OPERATORS__;-D__CUDA_NO_HALF_CONVERSIONS__;-D__CUDA_NO_BFLOAT16_CONVERSIONS__;-D__CUDA_NO_HALF2_OPERATORS__
-- CUDA host compiler : C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.26.28801/bin/Hostx64/x64/cl.exe
-- NVCC --device-c : OFF
-- USE_TENSORRT : OFF
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS :
-- USE_FBGEMM : ON
-- USE_FAKELOWP : OFF
-- USE_KINETO : OFF
-- USE_FFMPEG : OFF
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LEVELDB : OFF
-- USE_LITE_PROTO : OFF
-- USE_LMDB : OFF
-- USE_METAL : OFF
-- USE_PYTORCH_METAL : OFF
-- USE_FFTW : OFF
-- USE_MKL : ON
-- USE_MKLDNN : ON
-- USE_MKLDNN_CBLAS : OFF
-- USE_NCCL : OFF
-- USE_NNPACK : OFF
-- USE_NUMPY : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : OFF
-- USE_OPENMP : ON
-- USE_TBB : OFF
-- USE_VULKAN : OFF
-- USE_PROF : OFF
-- USE_QNNPACK : OFF
-- USE_PYTORCH_QNNPACK : OFF
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : ON
-- USE_MPI : OFF
-- USE_GLOO : ON
-- USE_TENSORPIPE : OFF
-- USE_DEPLOY : OFF
-- Public Dependencies : Threads::Threads;caffe2::mkl;caffe2::mkldnn
-- Private Dependencies : pthreadpool;cpuinfo;XNNPACK;fbgemm;fp16;gloo;aten_op_header_gen;foxi_loader;fmt::fmt-header-only
-- Configuring done
-- Generating done