Install PyTorch for CUDA compute capability 3.0 on Windows?

Hi Forum!
I decided to build PyTorch for a GTX 660 graphics card (don’t ask why, I don’t know).
I got the idea from the article.
What I used:
- NVIDIA driver: 474.30-desktop-win10-win11-64bit-international-dch-whql
- CUDA 10.2: cuda_10.2.89_441.22_win10.exe plus 2 updates (cuda_10.2.1_win10 and cuda_10.2.2_win10)
- cuDNN 8.7.0: cudnn-windows-x86_64-8.7.0.84_cuda10-archive
- MSVC BuildTools 14.29.30133
- MAGMA: magma_2.5.4_cuda102_release
- MKL: mkl_2020.2.254
- ONNX library replaced with onnx-1.8.1
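With those components in place, the pre-build environment amounts to roughly the following (cmd.exe; the paths are illustrative placeholders, not necessarily my exact ones, and `TORCH_CUDA_ARCH_LIST=3.0` is what pins the build to the GTX 660’s compute capability instead of relying on auto-detection):

```shell
:: Illustrative pre-build environment (cmd.exe); adjust paths to your installs.
:: TORCH_CUDA_ARCH_LIST targets compute capability 3.0 explicitly (see the
:: TORCH_CUDA_ARCH_LIST warning later in the CMake log).
set CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2
set CMAKE_INCLUDE_PATH=D:\Pytorch_requirements\mkl\include
set TORCH_CUDA_ARCH_LIST=3.0
set USE_KINETO=0
python setup.py install --cmake
```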
I ran into a lot of problems, and I managed to solve most of them, but when I got to building c10 I hit errors that I have no idea how to solve.

Here are the PyTorch build logs:

(D:\condaEnv) D:\Code\pytorch>python setup.py install --cmake
Building wheel torch-1.12.0a0+git664058f
-- Building version 1.12.0a0+git664058f
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INCLUDE_PATH=D:\Pytorch_requirements\mkl\include -DCMAKE_INSTALL_PREFIX=D:\Code\pytorch\torch -DCMAKE_PREFIX_PATH=D:\condaEnv\Lib\site-packages -DMSVC_Z7_OVERRIDE=OFF -DNUMPY_INCLUDE_DIR=D:\condaEnv\Lib\site-packages\numpy\core\include -DPYTHON_EXECUTABLE=D:\condaEnv\python.exe -DPYTHON_INCLUDE_DIR=D:\condaEnv\Include -DPYTHON_LIBRARY=D:\condaEnv/libs/python311.lib -DTORCH_BUILD_VERSION=1.12.0a0+git664058f -DUSE_KINETO=0 -DUSE_NUMPY=True D:\Code\pytorch
-- The CXX compiler identification is MSVC 19.29.30148.0
-- The C compiler identification is MSVC 19.29.30148.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Not forcing any particular BLAS to be found
CMake Warning (dev) at D:/condaEnv/Library/share/cmake-3.24/Modules/CMakeDependentOption.cmake:89 (message):
  Policy CMP0127 is not set: cmake_dependent_option() supports full Condition
  Syntax.  Run "cmake --help-policy CMP0127" for policy details.  Use the
  cmake_policy command to set the policy and suppress this warning.
Call Stack (most recent call first):
  CMakeLists.txt:259 (cmake_dependent_option)
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at D:/condaEnv/Library/share/cmake-3.24/Modules/CMakeDependentOption.cmake:89 (message):
  Policy CMP0127 is not set: cmake_dependent_option() supports full Condition
  Syntax.  Run "cmake --help-policy CMP0127" for policy details.  Use the
  cmake_policy command to set the policy and suppress this warning.
Call Stack (most recent call first):
  CMakeLists.txt:290 (cmake_dependent_option)
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning at CMakeLists.txt:367 (message):
  TensorPipe cannot be used on Windows.  Set it to OFF
...
...
cmake --build . --target install --config Release -- -j 1
[2/759] Building CUDA object caffe2\CMakeFiles\torch_cuda.dir\__\aten\src\ATen\native\cuda\DepthwiseConv2d.cu.obj
FAILED: caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/DepthwiseConv2d.cu.obj
C:\PROGRA~1\NVIDIA~2\CUDA\v10.2\bin\nvcc.exe -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXPERIMENTAL_CUDNN_V8_API -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_EXPORTS -ID:\Code\pytorch\build\aten\src -ID:\Code\pytorch\aten\src -ID:\Code\pytorch\build -ID:\Code\pytorch -ID:\Code\pytorch\cmake\..\third_party\benchmark\include -ID:\Code\pytorch\cmake\..\third_party\cudnn_frontend\include -ID:\Code\pytorch\third_party\onnx -ID:\Code\pytorch\build\third_party\onnx -ID:\Code\pytorch\third_party\foxi -ID:\Code\pytorch\build\third_party\foxi -ID:\Code\pytorch\build\include -ID:\Code\pytorch\torch\csrc\distributed -ID:\Code\pytorch\aten\src\THC -ID:\Code\pytorch\aten\src\ATen\cuda -ID:\Code\pytorch\build\caffe2\aten\src -ID:\Code\pytorch\aten\..\third_party\catch\single_include -ID:\Code\pytorch\aten\src\ATen\.. -ID:\Code\pytorch\c10\cuda\..\.. -ID:\Code\pytorch\c10\.. 
-ID:\Code\pytorch\torch\csrc\api -ID:\Code\pytorch\torch\csrc\api\include -isystem=D:\Code\pytorch\build\third_party\gloo -isystem=D:\Code\pytorch\cmake\..\third_party\gloo -isystem=D:\Code\pytorch\cmake\..\third_party\googletest\googlemock\include -isystem=D:\Code\pytorch\cmake\..\third_party\googletest\googletest\include -isystem=D:\Code\pytorch\third_party\protobuf\src -isystem=D:\Pytorch_requirements\mkl\include -isystem=D:\Code\pytorch\third_party\XNNPACK\include -isystem=D:\Code\pytorch\cmake\..\third_party\eigen -isystem=D:\Code\pytorch\cmake\..\third_party\cub -isystem="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\include" -isystem=D:\Code\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -isystem=D:\Code\pytorch\third_party\ideep\include -isystem="C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -isystem=D:\Pytorch_requirements\magma\include -Xcompiler /w -w -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_30,code=sm_30 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl --Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda  -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -Xcompiler="-MD -O2 -Ob2" -DNDEBUG -Xcompiler /MD -DCAFFE2_USE_GLOO 
-DTH_HAVE_THREAD -Xcompiler= -DTORCH_CUDA_BUILD_MAIN_LIB -std=c++14 -MD -MT caffe2\CMakeFiles\torch_cuda.dir\__\aten\src\ATen\native\cuda\DepthwiseConv2d.cu.obj -MF caffe2\CMakeFiles\torch_cuda.dir\__\aten\src\ATen\native\cuda\DepthwiseConv2d.cu.obj.d -x cu -c D:\Code\pytorch\aten\src\ATen\native\cuda\DepthwiseConv2d.cu -o caffe2\CMakeFiles\torch_cuda.dir\__\aten\src\ATen\native\cuda\DepthwiseConv2d.cu.obj -Xcompiler=-Fdcaffe2\CMakeFiles\torch_cuda.dir\,-FS
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::TensorType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::TensorType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::NoneType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::NoneType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::BoolType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::BoolType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::IntType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::IntType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::FloatType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::FloatType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::SymIntType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::SymIntType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::ComplexType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::ComplexType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::NumberType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::NumberType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::StringType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::StringType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::ListType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::ListType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::TupleType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::TupleType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::DictType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::DictType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::ClassType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::ClassType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::OptionalType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::OptionalType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::AnyListType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::AnyListType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::AnyTupleType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::AnyTupleType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::DeviceObjType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::DeviceObjType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::StreamObjType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::StreamObjType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::CapsuleType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::CapsuleType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::GeneratorType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::GeneratorType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::StorageType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::StorageType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::VarType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::VarType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::AnyClassType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::AnyClassType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::QSchemeType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::QSchemeType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::QuantizerType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::QuantizerType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::AnyEnumType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::AnyEnumType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::RRefType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::RRefType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::FutureType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::FutureType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: member "c10::DynamicTypeTrait<c10::AnyType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(235): note: the value of member "c10::DynamicTypeTrait<c10::AnyType>::isBaseType"
(235): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(236): error: member "c10::DynamicTypeTrait<c10::ScalarTypeType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(236): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(236): note: the value of member "c10::DynamicTypeTrait<c10::ScalarTypeType>::isBaseType"
(236): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(236): error: member "c10::DynamicTypeTrait<c10::LayoutType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(236): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(236): note: the value of member "c10::DynamicTypeTrait<c10::LayoutType>::isBaseType"
(236): here cannot be used as a constant

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(236): error: member "c10::DynamicTypeTrait<c10::MemoryFormatType>::isBaseType" may not be initialized

D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(236): error: expression must have a constant value
D:/Code/pytorch/aten/src\ATen/core/dynamic_type.h(236): note: the value of member "c10::DynamicTypeTrait<c10::MemoryFormatType>::isBaseType"
(236): here cannot be used as a constant

64 errors detected in the compilation of "C:/Users/PromiX/AppData/Local/Temp/tmpxft_00001d88_00000000-7_DepthwiseConv2d.cpp1.ii".
nvcc warning : The -std=c++14 flag is not supported with the configured host compiler. Flag will be ignored.
DepthwiseConv2d.cu
ninja: build stopped: subcommand failed




(D:\condaEnv) D:\Code\pytorch\build\CMakeFiles\ShowIncludes>call "C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/HostX64/x64/../../../../../../../VC/Auxiliary/Build/vcvars64.bat" 
**********************************************************************
** Visual Studio 2019 Developer Command Prompt v16.11.26
** Copyright (c) 2021 Microsoft Corporation
**********************************************************************
[vcvarsall.bat] Environment initialized for: 'x64'

FULL LOG
I would be glad if somebody could tell me what is causing this error.
Thank you in advance.

Try building an older PyTorch release, since the current main branch bumped the C++ standard to C++17, which CUDA < 11 does not support. I’m unsure whether this is related to your current error, but if not, the build would likely fail at a later step anyway.

Well, I changed the PyTorch version to 1.9.1, again replacing the ONNX library with onnx-1.8.1 and fixing errors caused by renamed functions. I also skipped building pytorch\caffe2\quantization\server\, since I couldn’t find any way to fix the error in conv_dnnlowp_op.cc (most likely related to the fbgemm version, but I couldn’t find a suitable old one).
And again I stumbled over an error at the last stage of the build.
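(If there is a cleaner way than skipping that directory by hand, the usual setup.py build toggles would look like this, though I have not verified that they alone avoid the conv_dnnlowp_op.cc error:)

```shell
:: Hypothetical alternative to manually skipping the quantization code:
:: standard setup.py environment toggles (cmd.exe); unverified for this error.
set USE_FBGEMM=0
set BUILD_CAFFE2_OPS=0
python setup.py install --cmake
```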

(D:\condaEnv) D:\pytorch>python setup.py install --cmake
Building wheel torch-1.9.0a0+gitdfbd030
-- Building version 1.9.0a0+gitdfbd030
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INCLUDE_PATH=D:\Pytorch_requirements\mkl\include -DCMAKE_INSTALL_PREFIX=D:\pytorch\torch -DCMAKE_PREFIX_PATH=D:\condaEnv\Lib\site-packages -DMSVC_Z7_OVERRIDE=OFF -DNUMPY_INCLUDE_DIR=D:\condaEnv\Lib\site-packages\numpy\core\include -DPYTHON_EXECUTABLE=D:\condaEnv\python.exe -DPYTHON_INCLUDE_DIR=D:\condaEnv\Include -DPYTHON_LIBRARY=D:\condaEnv/libs/python311.lib -DTORCH_BUILD_VERSION=1.9.0a0+gitdfbd030 -DUSE_KINETO=0 -DUSE_NUMPY=True D:\pytorch
-- The CXX compiler identification is MSVC 19.29.30148.0
-- The C compiler identification is MSVC 19.29.30148.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Not forcing any particular BLAS to be found
CMake Warning at CMakeLists.txt:318 (message):
  TensorPipe cannot be used on Windows.  Set it to OFF


-- Performing Test COMPILER_WORKS
-- Performing Test COMPILER_WORKS - Success
-- Performing Test SUPPORT_GLIBCXX_USE_C99
-- Performing Test SUPPORT_GLIBCXX_USE_C99 - Success
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED - Success
-- std::exception_ptr is supported.
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Failed
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS - Success
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Success
-- Current compiler supports avx512f extension. Will build fbgemm.
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed
-- Performing Test COMPILER_SUPPORTS_RDYNAMIC
-- Performing Test COMPILER_SUPPORTS_RDYNAMIC - Failed
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
-- 
-- 3.13.0.0
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE  
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:D:/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- Trying to find preferred BLAS backend of choice: MKL
-- MKL_THREADING = OMP
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Looking for cblas_sgemm
-- Looking for cblas_sgemm - found
-- MKL libraries: D:/Pytorch_requirements/mkl/lib/mkl_intel_lp64.lib;D:/Pytorch_requirements/mkl/lib/mkl_intel_thread.lib;D:/Pytorch_requirements/mkl/lib/mkl_core.lib;D:/Pytorch_requirements/mkl/lib/libiomp5md.lib
-- MKL include directory: D:/Pytorch_requirements/mkl/include
-- MKL OpenMP type: Intel
-- MKL OpenMP library: D:/Pytorch_requirements/mkl/lib/libiomp5md.lib
-- The ASM compiler identification is MSVC
-- Found assembler: C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe
-- Found Python: D:/condaEnv/python.exe (found version "3.11.3") found components: Interpreter 
-- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.41.0.windows.1") 
-- git version: v1.6.1 normalized to 1.6.1
-- Version: 1.6.1
-- Looking for shm_open in rt
-- Looking for shm_open in rt - not found
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX -- success
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK -- success
-- Found PythonInterp: D:/condaEnv/python.exe (found version "3.11.3") 
-- Performing Test COMPILER_SUPPORTS_AVX512
-- Performing Test COMPILER_SUPPORTS_AVX512 - Success
CMake Warning (dev) at D:/condaEnv/Library/share/cmake-3.24/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
  The package name passed to `find_package_handle_standard_args` (OpenMP_C)
  does not match the name of the calling package (OpenMP).  This can lead to
  problems in calling code that expects `find_package` result variables
  (e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
  cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
  third_party/fbgemm/CMakeLists.txt:129 (find_package)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Found OpenMP_C: -openmp:experimental -ID:/Pytorch_requirements/mkl/include
CMake Warning (dev) at D:/condaEnv/Library/share/cmake-3.24/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
  The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
  does not match the name of the calling package (OpenMP).  This can lead to
  problems in calling code that expects `find_package` result variables
  (e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
  cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
  third_party/fbgemm/CMakeLists.txt:129 (find_package)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Found OpenMP_CXX: -openmp:experimental -ID:/Pytorch_requirements/mkl/include
-- Found OpenMP: TRUE
CMake Warning at third_party/fbgemm/CMakeLists.txt:131 (message):
  OpenMP found! OpenMP_C_INCLUDE_DIRS =


CMake Warning at third_party/fbgemm/CMakeLists.txt:224 (message):
  ==========


CMake Warning at third_party/fbgemm/CMakeLists.txt:225 (message):
  CMAKE_BUILD_TYPE = Release


CMake Warning at third_party/fbgemm/CMakeLists.txt:226 (message):
  CMAKE_CXX_FLAGS_DEBUG is /MDd /Zi /Ob0 /Od /RTC1 /w /bigobj


CMake Warning at third_party/fbgemm/CMakeLists.txt:227 (message):
  CMAKE_CXX_FLAGS_RELEASE is /MD /O2 /Ob2 /DNDEBUG /w /bigobj


CMake Warning at third_party/fbgemm/CMakeLists.txt:228 (message):
  ==========


** AsmJit Summary **
   ASMJIT_DIR=D:/pytorch/third_party/fbgemm/third_party/asmjit
   ASMJIT_TEST=FALSE
   ASMJIT_TARGET_TYPE=SHARED
   ASMJIT_DEPS=
   ASMJIT_LIBS=asmjit
   ASMJIT_CFLAGS=
   ASMJIT_PRIVATE_CFLAGS=-MP;-GF;-Zc:__cplusplus;-Zc:inline;-Zc:strictStrings;-Zc:threadSafeInit-;-W4
   ASMJIT_PRIVATE_CFLAGS_DBG=-GS
   ASMJIT_PRIVATE_CFLAGS_REL=-GS-;-O2;-Oi
-- Using third party subdirectory Eigen.
-- Found PythonInterp: D:/condaEnv/python.exe (found suitable version "3.11.3", minimum required is "3.0") 
-- Found PythonLibs: D:/condaEnv/libs/python311.lib (found suitable version "3.11.3", minimum required is "3.0") 
-- Could NOT find pybind11 (missing: pybind11_DIR)
-- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR) 
-- Using third_party/pybind11.
-- pybind11 include dirs: D:/pytorch/cmake/../third_party/pybind11/include
-- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS) 
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) 
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND) 
CMake Warning at cmake/Dependencies.cmake:1050 (message):
  Not compiling with MPI.  Suppress this warning with -DUSE_MPI=OFF
Call Stack (most recent call first):
  CMakeLists.txt:621 (include)


CMake Warning (dev) at D:/condaEnv/Library/share/cmake-3.24/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
  The package name passed to `find_package_handle_standard_args` (OpenMP_C)
  does not match the name of the calling package (OpenMP).  This can lead to
  problems in calling code that expects `find_package` result variables
  (e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
  cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
  cmake/Dependencies.cmake:1105 (find_package)
  CMakeLists.txt:621 (include)
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at D:/condaEnv/Library/share/cmake-3.24/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
  The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
  does not match the name of the calling package (OpenMP).  This can lead to
  problems in calling code that expects `find_package` result variables
  (e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
  cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
  cmake/Dependencies.cmake:1105 (find_package)
  CMakeLists.txt:621 (include)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Adding OpenMP CXX_FLAGS: -openmp:experimental -ID:/Pytorch_requirements/mkl/include
-- Will link against OpenMP libraries: D:/Pytorch_requirements/mkl/lib/libiomp5md.lib
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2 (found version "10.2") 
-- Caffe2: CUDA detected: 10.2
-- Caffe2: CUDA nvcc is: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/bin/nvcc.exe
-- Caffe2: CUDA toolkit directory: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2
-- Caffe2: Header version is: 10.2
-- Found CUDNN: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/lib/x64/cudnn.lib  
-- Found cuDNN: v8.0.2  (include: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/include, library: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/lib/x64/cudnn.lib)
-- C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/lib/x64/nvrtc.lib shorthash is 76e0edf4
CMake Warning at cmake/public/utils.cmake:365 (message):
  In the future we will require one to explicitly pass TORCH_CUDA_ARCH_LIST
  to cmake instead of implicitly setting it as an env variable.  This will
  become a FATAL_ERROR in future version of pytorch.
Call Stack (most recent call first):
  cmake/public/cuda.cmake:511 (torch_cuda_get_nvcc_gencode_flag)
  cmake/Dependencies.cmake:1155 (include)
  CMakeLists.txt:621 (include)


-- Added CUDA NVCC flags for: -gencode;arch=compute_30,code=sm_30
-- Could NOT find CUB (missing: CUB_INCLUDE_DIR) 
CMake Warning (dev) at third_party/gloo/CMakeLists.txt:21 (option):
  Policy CMP0077 is not set: option() honors normal variables.  Run "cmake
  --help-policy CMP0077" for policy details.  Use the cmake_policy command to
  set the policy and suppress this warning.

  For compatibility with older versions of CMake, option is clearing the
  normal variable 'BUILD_BENCHMARK'.
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at third_party/gloo/CMakeLists.txt:35 (option):
  Policy CMP0077 is not set: option() honors normal variables.  Run "cmake
  --help-policy CMP0077" for policy details.  Use the cmake_policy command to
  set the policy and suppress this warning.

  For compatibility with older versions of CMake, option is clearing the
  normal variable 'USE_NCCL'.
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at third_party/gloo/CMakeLists.txt:36 (option):
  Policy CMP0077 is not set: option() honors normal variables.  Run "cmake
  --help-policy CMP0077" for policy details.  Use the cmake_policy command to
  set the policy and suppress this warning.

  For compatibility with older versions of CMake, option is clearing the
  normal variable 'USE_RCCL'.
This warning is for project developers.  Use -Wno-dev to suppress it.

-- MSVC detected
-- Set USE_REDIS OFF
-- Set USE_IBVERBS OFF
-- Set USE_NCCL OFF
-- Set USE_RCCL OFF
-- Set USE_LIBUV ON
-- Only USE_LIBUV is supported on Windows
-- Enabling sccache for CXX
-- Enabling sccache for C
-- Gloo build as SHARED library
CMake Warning (dev) at cmake/Modules_CUDA_fix/upstream/FindCUDA.cmake:547 (if):
  Policy CMP0054 is not set: Only interpret if() arguments as variables or
  keywords when unquoted.  Run "cmake --help-policy CMP0054" for policy
  details.  Use the cmake_policy command to set the policy and suppress this
  warning.

  Quoted variables like "MSVC" will no longer be dereferenced when the policy
  is set to NEW.  Since the policy is not set the OLD behavior will be used.
Call Stack (most recent call first):
  cmake/Modules_CUDA_fix/FindCUDA.cmake:11 (include)
  third_party/gloo/cmake/Cuda.cmake:122 (find_package)
  third_party/gloo/cmake/Dependencies.cmake:115 (include)
  third_party/gloo/CMakeLists.txt:111 (include)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2 (found suitable version "10.2", minimum required is "7.0") 
-- CUDA detected: 10.2
CMake Warning at cmake/Dependencies.cmake:1406 (message):
  Metal is only used in ios builds.
Call Stack (most recent call first):
  CMakeLists.txt:621 (include)


-- Found PythonInterp: D:/condaEnv/python.exe (found version "3.11.3") 
-- Found PythonLibs: D:/condaEnv/libs/python311.lib (found version "3.11.3") 
Generated: D:/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
Generated: D:/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto
Generated: D:/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto
-- 
-- ******** Summary ********
--   CMake version         : 3.24.1
--   CMake command         : D:/condaEnv/Library/bin/cmake.exe
--   System                : Windows
--   C++ compiler          : C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe
--   C++ compiler version  : 19.29.30148.0
--   CXX flags             : /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -ID:/Pytorch_requirements/mkl/include
--   Build type            : Release
--   Compile definitions   : WIN32_LEAN_AND_MEAN;TH_BLAS_MKL;_OPENMP_NOFORCE_MANIFEST;ONNX_ML=1;ONNXIFI_ENABLE_EXT=1
--   CMAKE_PREFIX_PATH     : D:\condaEnv\Lib\site-packages;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2
--   CMAKE_INSTALL_PREFIX  : D:/pytorch/torch
--   CMAKE_MODULE_PATH     : D:/pytorch/cmake/Modules;D:/pytorch/cmake/public/../Modules_CUDA_fix
--
--   ONNX version          : 1.8.1
--   ONNX NAMESPACE        : onnx_torch
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
--   ONNXIFI_ENABLE_EXT    : OFF
--
--   Protobuf compiler     :
--   Protobuf includes     :
--   Protobuf libraries    :
--   BUILD_ONNX_PYTHON     : OFF
-- 
-- ******** Summary ********
--   CMake version         : 3.24.1
--   CMake command         : D:/condaEnv/Library/bin/cmake.exe
--   System                : Windows
--   C++ compiler          : C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe
--   C++ compiler version  : 19.29.30148.0
--   CXX flags             : /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -ID:/Pytorch_requirements/mkl/include
--   Build type            : Release
--   Compile definitions   : WIN32_LEAN_AND_MEAN;TH_BLAS_MKL;_OPENMP_NOFORCE_MANIFEST;ONNX_ML=1;ONNXIFI_ENABLE_EXT=1
--   CMAKE_PREFIX_PATH     : D:\condaEnv\Lib\site-packages;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2
--   CMAKE_INSTALL_PREFIX  : D:/pytorch/torch
--   CMAKE_MODULE_PATH     : D:/pytorch/cmake/Modules;D:/pytorch/cmake/public/../Modules_CUDA_fix
--
--   ONNX version          : 1.4.1
--   ONNX NAMESPACE        : onnx_torch
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
--
--   Protobuf compiler     :
--   Protobuf includes     :
--   Protobuf libraries    :
--   BUILD_ONNX_PYTHON     : OFF
-- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
-- Adding -DNDEBUG to compile flags
-- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2
-- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - True
-- Compiling with MAGMA support
-- MAGMA INCLUDE DIRECTORIES: D:/Pytorch_requirements/magma/include
-- MAGMA LIBRARIES: D:/Pytorch_requirements/magma/lib/magma.lib
-- MAGMA V2 check: 1
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Looking for cpuid.h
-- Looking for cpuid.h - not found
-- Performing Test NO_GCC_EBX_FPIC_BUG
-- Performing Test NO_GCC_EBX_FPIC_BUG - Failed
-- Performing Test C_HAS_AVX_1
-- Performing Test C_HAS_AVX_1 - Success
-- Performing Test C_HAS_AVX2_1
-- Performing Test C_HAS_AVX2_1 - Success
-- Performing Test CXX_HAS_AVX_1
-- Performing Test CXX_HAS_AVX_1 - Success
-- Performing Test CXX_HAS_AVX2_1
-- Performing Test CXX_HAS_AVX2_1 - Success
-- AVX compiler support found
-- AVX2 compiler support found
-- Performing Test BLAS_F2C_DOUBLE_WORKS
-- Performing Test BLAS_F2C_DOUBLE_WORKS - Failed
-- Performing Test BLAS_F2C_FLOAT_WORKS
-- Performing Test BLAS_F2C_FLOAT_WORKS - Success
-- Performing Test BLAS_USE_CBLAS_DOT
-- Performing Test BLAS_USE_CBLAS_DOT - Success
-- Found a library with BLAS API (mkl). Full path: (D:/Pytorch_requirements/mkl/lib/mkl_intel_lp64.lib;D:/Pytorch_requirements/mkl/lib/mkl_intel_thread.lib;D:/Pytorch_requirements/mkl/lib/mkl_core.lib;D:/Pytorch_requirements/mkl/lib/libiomp5md.lib)
-- Found a library with LAPACK API (mkl).
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
-- MKLDNN source files not found!
CMake Warning at cmake/Dependencies.cmake:1755 (message):
  MKLDNN could not be found.
Call Stack (most recent call first):
  CMakeLists.txt:621 (include)


-- Performing Test C_HAS_THREAD
-- Performing Test C_HAS_THREAD - Success
-- Module support is disabled.
-- Version: 9.1.0
-- Build type: Release
-- CXX_STANDARD: 14
-- Required features: cxx_variadic_templates
-- Looking for backtrace
-- Looking for backtrace - not found
-- Could NOT find Backtrace (missing: Backtrace_LIBRARY Backtrace_INCLUDE_DIR) 
-- don't use NUMA
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed
-- Using ATen parallel backend: OMP
AT_INSTALL_INCLUDE_DIR include/ATen/core
core header install: D:/pytorch/build/aten/src/ATen/core/TensorBody.h
-- NCCL operators skipped due to no CUDA support
-- Excluding FakeLowP operators
-- Excluding ideep operators as we are not using ideep
-- Excluding image processing operators due to no opencv
-- Excluding video processing operators due to no opencv
-- MPI operators skipped due to no MPI support
-- Include Observer library
CMake Warning (dev) at torch/CMakeLists.txt:348:
  Syntax Warning in cmake code at column 107

  Argument not separated from preceding token by whitespace.
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at torch/CMakeLists.txt:348:
  Syntax Warning in cmake code at column 115

  Argument not separated from preceding token by whitespace.
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning at cmake/public/utils.cmake:365 (message):
  In the future we will require one to explicitly pass TORCH_CUDA_ARCH_LIST
  to cmake instead of implicitly setting it as an env variable.  This will
  become a FATAL_ERROR in future version of pytorch.
Call Stack (most recent call first):
  torch/CMakeLists.txt:315 (torch_cuda_get_nvcc_gencode_flag)


CMake Warning (dev) at D:/condaEnv/Library/share/cmake-3.24/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
  The package name passed to `find_package_handle_standard_args` (OpenMP_C)
  does not match the name of the calling package (OpenMP).  This can lead to
  problems in calling code that expects `find_package` result variables
  (e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
  cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
  caffe2/CMakeLists.txt:1155 (find_package)
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at D:/condaEnv/Library/share/cmake-3.24/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
  The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
  does not match the name of the calling package (OpenMP).  This can lead to
  problems in calling code that expects `find_package` result variables
  (e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
  cmake/Modules/FindOpenMP.cmake:576 (find_package_handle_standard_args)
  caffe2/CMakeLists.txt:1155 (find_package)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- pytorch is compiling with OpenMP.
OpenMP CXX_FLAGS: -openmp:experimental -ID:/Pytorch_requirements/mkl/include.
OpenMP libraries: D:/Pytorch_requirements/mkl/lib/libiomp5md.lib.
-- Caffe2 is compiling with OpenMP.
OpenMP CXX_FLAGS: -openmp:experimental -ID:/Pytorch_requirements/mkl/include.
OpenMP libraries: D:/Pytorch_requirements/mkl/lib/libiomp5md.lib.
-- Using Lib/site-packages as python relative installation path
CMake Warning at CMakeLists.txt:941 (message):
  Generated cmake files are only fully tested if one builds with system glog,
  gflags, and protobuf.  Other settings may generate files that are not well
  tested.


-- 
-- ******** Summary ********
-- General:
--   CMake version         : 3.24.1
--   CMake command         : D:/condaEnv/Library/bin/cmake.exe
--   System                : Windows
--   C++ compiler          : C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe
--   C++ compiler id       : MSVC
--   C++ compiler version  : 19.29.30148.0
--   Using ccache if found : OFF
--   CXX flags             : /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -ID:/Pytorch_requirements/mkl/include -DNDEBUG -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE
--   Build type            : Release
--   Compile definitions   : WIN32_LEAN_AND_MEAN;TH_BLAS_MKL;_OPENMP_NOFORCE_MANIFEST;ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;MAGMA_V2;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
--   CMAKE_PREFIX_PATH     : D:\condaEnv\Lib\site-packages;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2
--   CMAKE_INSTALL_PREFIX  : D:/pytorch/torch
--   USE_GOLD_LINKER       : OFF
--
--   TORCH_VERSION         : 1.9.0
--   CAFFE2_VERSION        : 1.9.0
--   BUILD_CAFFE2          : ON
--   BUILD_CAFFE2_OPS      : ON
--   BUILD_CAFFE2_MOBILE   : OFF
--   BUILD_STATIC_RUNTIME_BENCHMARK: OFF
--   BUILD_TENSOREXPR_BENCHMARK: OFF
--   BUILD_BINARY          : OFF
--   BUILD_CUSTOM_PROTOBUF : ON
--     Link local protobuf : ON
--   BUILD_DOCS            : OFF
--   BUILD_PYTHON          : True
--     Python version      : 3.11.3
--     Python executable   : D:/condaEnv/python.exe
--     Pythonlibs version  : 3.11.3
--     Python library      : D:/condaEnv/libs/python311.lib
--     Python includes     : D:/condaEnv/include
--     Python site-packages: Lib/site-packages
--   BUILD_SHARED_LIBS     : ON
--   CAFFE2_USE_MSVC_STATIC_RUNTIME     : OFF
--   BUILD_TEST            : True
--   BUILD_JNI             : OFF
--   BUILD_MOBILE_AUTOGRAD : OFF
--   BUILD_LITE_INTERPRETER: OFF
--   INTERN_BUILD_MOBILE   :
--   USE_BLAS              : 1
--     BLAS                : mkl
--   USE_LAPACK            : 1
--     LAPACK              : mkl
--   USE_ASAN              : OFF
--   USE_CPP_CODE_COVERAGE : OFF
--   USE_CUDA              : ON
--     Split CUDA          : OFF
--     CUDA static link    : OFF
--     USE_CUDNN           : ON
--     CUDA version        : 10.2
--     cuDNN version       : 8.0.2
--     CUDA root directory : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2
--     CUDA library        : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/lib/x64/cuda.lib
--     cudart library      : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/lib/x64/cudart_static.lib
--     cublas library      : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/lib/x64/cublas.lib
--     cufft library       : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/lib/x64/cufft.lib
--     curand library      : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/lib/x64/curand.lib
--     cuDNN library       : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/lib/x64/cudnn.lib
--     nvrtc               : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/lib/x64/nvrtc.lib
--     CUDA include path   : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/include
--     NVCC executable     : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/bin/nvcc.exe
--     NVCC flags          : -Xcompiler;/w;-w;-Xfatbin;-compress-all;-DONNX_NAMESPACE=onnx_torch;--use-local-env;-gencode;arch=compute_30,code=sm_30;-Xcudafe;--diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl;--Werror;cross-execution-space-call;--no-host-device-move-forward;-Xcompiler;-MD$<$<CONFIG:Debug>:d>;--expt-relaxed-constexpr;--expt-extended-lambda;-Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522;-Wno-deprecated-gpu-targets;--expt-extended-lambda;-DCUDA_HAS_FP16=1;-D__CUDA_NO_HALF_OPERATORS__;-D__CUDA_NO_HALF_CONVERSIONS__;-D__CUDA_NO_BFLOAT16_CONVERSIONS__;-D__CUDA_NO_HALF2_OPERATORS__
--     CUDA host compiler  : C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe
--     NVCC --device-c     : OFF
--     USE_TENSORRT        : OFF
--   USE_ROCM              : OFF
--   USE_EIGEN_FOR_BLAS    :
--   USE_FBGEMM            : ON
--     USE_FAKELOWP          : OFF
--   USE_KINETO            : 0
--   USE_FFMPEG            : OFF
--   USE_GFLAGS            : OFF
--   USE_GLOG              : OFF
--   USE_LEVELDB           : OFF
--   USE_LITE_PROTO        : OFF
--   USE_LMDB              : OFF
--   USE_METAL             : OFF
--   USE_PYTORCH_METAL     : OFF
--   USE_FFTW              : OFF
--   USE_MKL               : ON
--   USE_MKLDNN            : ON
--   USE_NCCL              : OFF
--   USE_NNPACK            : OFF
--   USE_NUMPY             : ON
--   USE_OBSERVERS         : ON
--   USE_OPENCL            : OFF
--   USE_OPENCV            : OFF
--   USE_OPENMP            : ON
--   USE_TBB               : OFF
--   USE_VULKAN            : OFF
--   USE_PROF              : OFF
--   USE_QNNPACK           : OFF
--   USE_PYTORCH_QNNPACK   : OFF
--   USE_REDIS             : OFF
--   USE_ROCKSDB           : OFF
--   USE_ZMQ               : OFF
--   USE_DISTRIBUTED       : ON
--     USE_MPI             : OFF
--     USE_GLOO            : ON
--     USE_TENSORPIPE      : OFF
--   USE_DEPLOY           : OFF
--   Public Dependencies  : Threads::Threads;caffe2::mkl
--   Private Dependencies : pthreadpool;cpuinfo;XNNPACK;fbgemm;fp16;gloo;aten_op_header_gen;foxi_loader;fmt::fmt-header-only
-- Configuring done
-- Generating done
-- Build files have been written to: D:/pytorch/build
cmake --build . --target install --config Release -- -j 1
...
[5271/6099] Building NVCC (Device) object caffe2/CMakeFiles/torch_cuda.dir/operators/torch_cuda_generated_summarize_op.cu.obj
FAILED: caffe2/CMakeFiles/torch_cuda.dir/operators/torch_cuda_generated_summarize_op.cu.obj D:/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/operators/torch_cuda_generated_summarize_op.cu.obj
cmd.exe /C "cd /D D:\pytorch\build\caffe2\CMakeFiles\torch_cuda.dir\operators && D:\condaEnv\Library\bin\cmake.exe -E make_directory D:/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/operators/. && D:\condaEnv\Library\bin\cmake.exe -D verbose:BOOL=OFF -D build_configuration:STRING=Release -D generated_file:STRING=D:/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/operators/./torch_cuda_generated_summarize_op.cu.obj -D generated_cubin_file:STRING=D:/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/operators/./torch_cuda_generated_summarize_op.cu.obj.cubin.txt -P D:/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/operators/torch_cuda_generated_summarize_op.cu.obj.Release.cmake"
summarize_op.cu
summarize_op.cu
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/include\thrust/detail/allocator/allocator_traits.inl(163): error C2993: 'T': is not a valid type for non-type template parameter '__formal'
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/include\thrust/detail/allocator/allocator_traits.inl(163): note: see reference to class template instantiation 'thrust::detail::allocator_traits_detail::has_member_destroy_impl_has_member<T,Result(Arg1,Arg2)>' being compiled
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/include\thrust/detail/allocator/allocator_traits.inl(163): error C2065: 't': undeclared identifier
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/include\thrust/detail/allocator/allocator_traits.inl(163): error C2923: 'std::_Select<__formal>::_Apply': 't' is not a valid template type argument for parameter '<unnamed-symbol>'
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/include\thrust/detail/allocator/allocator_traits.inl(163): note: see declaration of 't'
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2/include\thrust/detail/allocator/allocator_traits.inl(163): error C2062: type 'unknown-type' unexpected
CMake Error at torch_cuda_generated_summarize_op.cu.obj.Release.cmake:281 (message):
  Error generating file
  D:/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/operators/./torch_cuda_generated_summarize_op.cu.obj


ninja: build stopped: subcommand failed.

Full log
I found the same problem on GitHub, and I don't think there's a single solution to it.

Like @PromiX, I followed the steps on this DataGraphi blog (the best PyTorch build-from-source guide on the internet) and tried to build PyTorch v1.9.1, v1.10.0, v1.11.0, and v1.13.1 against CUDA 10.2 on my old NVIDIA GTX gaming laptop.

The result, as predicted by @ptrblck: it failed miserably.

SOLUTION

1. Accept the defeat.
2. Give up on building PyTorch from source.
3. Download a pre-built PyTorch binary.
4. Run PyTorch on the CPU.
5. Slow but steady; this is life.
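For steps 3 and 4, here is a minimal sanity check you might run after installing a CPU-only wheel (for example via `pip install torch --index-url https://download.pytorch.org/whl/cpu`). On such a build, or on a GPU the pre-built binaries no longer support (like sm_30), `torch.cuda.is_available()` is expected to be `False`, so everything is pinned to the CPU explicitly:

```python
import torch

# On a CPU-only wheel (or an unsupported GPU like sm_30),
# this is expected to print False.
print(torch.__version__)
print(torch.cuda.is_available())

# Pin all tensors to the CPU explicitly.
device = torch.device("cpu")
x = torch.randn(3, 3, device=device)
y = x @ x.t()
print(y.shape)  # torch.Size([3, 3])
```

Not fast, but it runs, and it never hits the thrust/MSVC template errors from the log above because nothing goes through nvcc.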

On the bright side, you can run any AI software without worrying about insufficient VRAM.

For anyone who wants to build PyTorch from source, it’s not too late to turn around and go home.