Install from source with CUDA compute capability 5.2 and OSX 10.12

Hello PyTorch forum,

I previously installed PyTorch 1.0 from source on my Mac (OSX 10.12) with CUDA 9.0 and cuDNN 7.0; it runs fine with external GPU support, connected to an NVIDIA GTX Titan X (compute capability 5.2).

I cloned the source again and tried to upgrade to PyTorch 1.2 with
python3 setup.py install (I do not use conda)

It runs until it breaks with

[ 39%] Building NVCC (Device) object caffe2/CMakeFiles/torch.dir/__/aten/src/THC/torch_generated_THCStorage.cu.o
nvcc fatal : Unsupported gpu architecture 'compute_70'

File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 347, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '4']' returned non-zero exit status 2.

Does anyone know what could be wrong?
Could it be because my GPU’s compute capability is below 7.0?

Thanks

Are you accidentally using an older CUDA version?
It looks like your nvcc doesn’t support compute capability 7.0, which should be supported starting from CUDA 9.0.
Could you check the CUDA version via nvcc --version?

PS: you could try to set the architecture specifically for your GPU via TORCH_CUDA_ARCH_LIST=6.1.

Thank you for your reply.

For the CUDA version here is what I have from nvcc:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_13:16:23_CDT_2017
Cuda compilation tools, release 9.0, V9.0.175

When I try running:
export TORCH_CUDA_ARCH_LIST=6.1
python3 setup.py install

Then I get the same errors. I attach a screenshot of the full output in case it helps …

That’s strange. Could you run python setup.py clean and rerun both lines to reinstall it?

Thank you for following up.

I ran
python3 setup.py clean

Then I tried the compilation again from scratch with
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ TORCH_CUDA_ARCH_LIST=6.1 python3 setup.py install
(or also just TORCH_CUDA_ARCH_LIST=6.1 python3 setup.py install)

It still breaks at 39%, but now the report mentions fatal error: 'string.h' file not found instead of the previous nvcc fatal error.

I wonder if I should clone again or if something else is missing …
I attach the new report, thanks !

It seems to be an Xcode issue, as reported here.
Could you run xcode-select --install and check if it helps?

Thank you. This one ran without issues:
xcode-select --install

Then I tried again from scratch the:
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ TORCH_CUDA_ARCH_LIST=6.1 python3 setup.py install

That got me further, but as for some others in the thread you pointed to, it did not solve everything.
Now it breaks at 75%:

make[1]: *** [caffe2/CMakeFiles/torch.dir/all] Error 2
make: *** [all] Error 2
Traceback (most recent call last):
File “setup.py”, line 759, in
build_deps()
File “setup.py”, line 321, in build_deps
cmake=cmake)

I cannot see by myself where the problem comes from (but it seems PyTorch related) … do you have any more guesses, please?

Does your compiler support the full C++11 standard?
The const char* what_arg argument of out_of_range was added in C++11, as seen here.

I’ve never used macOS, but I assume you are compiling with clang? If so, which version are you using?

Sorry for the delayed reply …

Regarding the compiler I have:

clang --version
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin16.7.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

I tried updating with brew install llvm; it installs, but only as “keg-only”:

I am currently running again:
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ TORCH_CUDA_ARCH_LIST=6.1 python3 setup.py install

If it doesn’t work, I will run python3 setup.py clean and retry the compilation with:
export LDFLAGS="-L/usr/local/opt/llvm/lib"
export CPPFLAGS="-I/usr/local/opt/llvm/include"

I will keep you updated on whether that works or still raises errors. Thanks!

Update after trying brew install llvm:

If I open python3, I still see the following:
Python 3.7.3 (default, Jun 6 2019, 12:03:32)
[Clang 8.0.0 (clang-800.0.42.1)] on darwin

I tried both of the following, each from scratch after python3 setup.py clean:

MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ TORCH_CUDA_ARCH_LIST=6.1 python3 setup.py install

LDFLAGS="-L/usr/local/opt/llvm/lib" CPPFLAGS="-I/usr/local/opt/llvm/include" TORCH_CUDA_ARCH_LIST=6.1 python3 setup.py install

In every case the compilation breaks with the same error at 75%, which I cannot solve so far …

According to Clang - C++ Programming Language Status
" Clang 3.3 and later implement all of the ISO C++ 2011 standard."

And I currently have:
Xcode 8.2.1

  • Xcode : Build version 8C1002
  • clang : Apple LLVM version 8.0.0 (clang-800.0.42.1)

However, I am not sure how to map my Apple clang version onto the upstream “3.3 and later” numbering to check C++11 support …

Hi @ptrblck

I tried a couple more things after updating llvm:

  • re-installing Xcode
  • pip3 install mkl
  • pip3 install mkl-devel

However, the compilation still breaks at 75% as before …
In case it helps, I attach the initial checks of the compilation …

Building wheel torch-1.3.0a0+4fb5a7c
-- Building version 1.3.0a0+4fb5a7c
cmake -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/Users/adrienbitton/Desktop/pytorch/torch -DCMAKE_PREFIX_PATH=/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages -DNUMPY_INCLUDE_DIR=/usr/local/lib/python3.7/site-packages/numpy/core/include -DPYTHON_EXECUTABLE=/usr/local/opt/python/bin/python3.7 -DPYTHON_INCLUDE_DIR=/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/include/python3.7m -DPYTHON_LIBRARY=/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/Python.framework/Versions/3.7/Python -DTORCH_BUILD_VERSION=1.3.0a0+4fb5a7c -DUSE_CUDA=True -DUSE_NUMPY=True /Users/adrienbitton/Desktop/pytorch
-- The CXX compiler identification is AppleClang 8.0.0.8000042
-- The C compiler identification is AppleClang 8.0.0.8000042
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Not forcing any particular BLAS to be found
-- CLANG_VERSION_STRING:         8.0
-- Performing Test COMPILER_WORKS
-- Performing Test COMPILER_WORKS - Success
-- Performing Test SUPPORT_GLIBCXX_USE_C99
-- Performing Test SUPPORT_GLIBCXX_USE_C99 - Success
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED - Success
-- std::exception_ptr is supported.
-- NUMA is disabled
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Failed
-- Turning off deprecation warning due to glog.
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS - Success
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Failed
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Success
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Success
-- Performing Test COMPILER_SUPPORTS_RDYNAMIC
-- Performing Test COMPILER_SUPPORTS_RDYNAMIC - Success
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - found
-- Found Threads: TRUE
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/Users/adrienbitton/Desktop/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- Trying to find preferred BLAS backend of choice: MKL
-- MKL_THREADING = OMP
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Looking for cblas_sgemm
-- Looking for cblas_sgemm - found
-- MKL libraries: /usr/local/lib/libmkl_intel_lp64.dylib;/usr/local/lib/libmkl_intel_thread.dylib;/usr/local/lib/libmkl_core.dylib;/usr/local/lib/libiomp5.dylib;/usr/lib/libpthread.dylib;/usr/lib/libm.dylib
-- MKL include directory: /usr/local/include
-- MKL OpenMP type: Intel
-- MKL OpenMP library: /usr/local/lib/libiomp5.dylib
-- The ASM compiler identification is AppleClang
-- Found assembler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang
-- Brace yourself, we are building NNPACK
-- Performing Test NNPACK_ARCH_IS_X86_32
-- Performing Test NNPACK_ARCH_IS_X86_32 - Failed
-- Found PythonInterp: /usr/local/opt/python/bin/python3.7 (found version "3.7.3")
-- NNPACK backend is x86-64
-- Failed to find LLVM FileCheck
-- Found Git: /usr/local/bin/git (found version "2.21.0")
-- git Version: v1.4.0-505be96a
-- Version: 1.4.0
-- Performing Test HAVE_CXX_FLAG_STD_CXX11
-- Performing Test HAVE_CXX_FLAG_STD_CXX11 - Success
-- Performing Test HAVE_CXX_FLAG_WALL
-- Performing Test HAVE_CXX_FLAG_WALL - Success
-- Performing Test HAVE_CXX_FLAG_WEXTRA
-- Performing Test HAVE_CXX_FLAG_WEXTRA - Success
-- Performing Test HAVE_CXX_FLAG_WSHADOW
-- Performing Test HAVE_CXX_FLAG_WSHADOW - Success
-- Performing Test HAVE_CXX_FLAG_WERROR
-- Performing Test HAVE_CXX_FLAG_WERROR - Success
-- Performing Test HAVE_CXX_FLAG_PEDANTIC
-- Performing Test HAVE_CXX_FLAG_PEDANTIC - Success
-- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS
-- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS - Success
-- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32
-- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32 - Success
-- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL
-- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL - Success
-- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING
-- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING - Success
-- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS
-- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS - Success
-- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING
-- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING - Success
-- Performing Test HAVE_CXX_FLAG_WD654
-- Performing Test HAVE_CXX_FLAG_WD654 - Failed
-- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY
-- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY - Success
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES -- failed to compile
-- Performing Test HAVE_CXX_FLAG_COVERAGE
-- Performing Test HAVE_CXX_FLAG_COVERAGE - Success
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX -- success
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX -- success
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK -- success
CMake Warning at cmake/Dependencies.cmake:452 (message):
  A compiler with AVX512 support is required for FBGEMM.  Not compiling with
  FBGEMM.  Turn this warning off by USE_FBGEMM=OFF.
Call Stack (most recent call first):
  CMakeLists.txt:362 (include)


-- Using third party subdirectory Eigen.
Python 3.7.3
-- Found PythonInterp: /usr/local/opt/python/bin/python3.7 (found suitable version "3.7.3", minimum required is "2.7")
-- Found PythonLibs: /usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/Python.framework/Versions/3.7/Python (found suitable version "3.7.3", minimum required is "2.7")
-- Could NOT find pybind11 (missing: pybind11_DIR)
-- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR)
-- Using third_party/pybind11.
-- Adding OpenMP CXX_FLAGS: -Xpreprocessor -fopenmp -I/usr/local/include
-- Will link against OpenMP libraries: /usr/local/lib/libiomp5.dylib
-- Found CUDA: /usr/local/cuda (found version "9.0")
-- Caffe2: CUDA detected: 9.0
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 9.0
-- Found CUDNN: /usr/local/cuda/lib/libcudnn.dylib
-- Found cuDNN: v7.0.4  (include: /usr/local/cuda/include, library: /usr/local/cuda/lib/libcudnn.dylib)
CMake Warning at cmake/public/utils.cmake:172 (message):
  In the future we will require one to explicitly pass TORCH_CUDA_ARCH_LIST
  to cmake instead of implicitly setting it as an env variable.  This will
  become a FATAL_ERROR in future version of pytorch.
Call Stack (most recent call first):
  cmake/public/cuda.cmake:369 (torch_cuda_get_nvcc_gencode_flag)
  cmake/Dependencies.cmake:828 (include)
  CMakeLists.txt:362 (include)


-- Added CUDA NVCC flags for: -gencode;arch=compute_61,code=sm_61
-- Could NOT find CUB (missing: CUB_INCLUDE_DIR)
CMake Warning at cmake/Dependencies.cmake:1032 (message):
  Metal is only used in ios builds.
Call Stack (most recent call first):
  CMakeLists.txt:362 (include)


--
-- ******** Summary ********
--   CMake version         : 3.14.5
--   CMake command         : /usr/local/Cellar/cmake/3.14.5/bin/cmake
--   System                : Darwin
--   C++ compiler          : /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++
--   C++ compiler version  : 8.0.0.8000042
--   CXX flags             :  -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations -Xpreprocessor -fopenmp -I/usr/local/include -Wnon-virtual-dtor
--   Build type            : Release
--   Compile definitions   : TH_BLAS_MKL;ONNX_ML=1
--   CMAKE_PREFIX_PATH     : /usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages;/usr/local/cuda
--   CMAKE_INSTALL_PREFIX  : /Users/adrienbitton/Desktop/pytorch/torch
--   CMAKE_MODULE_PATH     : /Users/adrienbitton/Desktop/pytorch/cmake/Modules;/Users/adrienbitton/Desktop/pytorch/cmake/public/../Modules_CUDA_fix
--
--   ONNX version          : 1.5.0
--   ONNX NAMESPACE        : onnx_torch
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
--   ONNXIFI_ENABLE_EXT    : OFF
--
--   Protobuf compiler     :
--   Protobuf includes     :
--   Protobuf libraries    :
--   BUILD_ONNX_PYTHON     : OFF
--
-- ******** Summary ********
--   CMake version         : 3.14.5
--   CMake command         : /usr/local/Cellar/cmake/3.14.5/bin/cmake
--   System                : Darwin
--   C++ compiler          : /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++
--   C++ compiler version  : 8.0.0.8000042
--   CXX flags             :  -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations -Xpreprocessor -fopenmp -I/usr/local/include -Wnon-virtual-dtor
--   Build type            : Release
--   Compile definitions   : TH_BLAS_MKL;ONNX_ML=1
--   CMAKE_PREFIX_PATH     : /usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages;/usr/local/cuda
--   CMAKE_INSTALL_PREFIX  : /Users/adrienbitton/Desktop/pytorch/torch
--   CMAKE_MODULE_PATH     : /Users/adrienbitton/Desktop/pytorch/cmake/Modules;/Users/adrienbitton/Desktop/pytorch/cmake/public/../Modules_CUDA_fix
--
--   ONNX version          : 1.4.1
--   ONNX NAMESPACE        : onnx_torch
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
--
--   Protobuf compiler     :
--   Protobuf includes     :
--   Protobuf libraries    :
--   BUILD_ONNX_PYTHON     : OFF
-- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
-- Removing -DNDEBUG from compile flags
-- MAGMA not found. Compiling without MAGMA support
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Looking for cpuid.h
-- Looking for cpuid.h - found
-- Performing Test HAVE_GCC_GET_CPUID
-- Performing Test HAVE_GCC_GET_CPUID - Success
-- Performing Test NO_GCC_EBX_FPIC_BUG
-- Performing Test NO_GCC_EBX_FPIC_BUG - Success
-- Performing Test C_HAS_AVX_1
-- Performing Test C_HAS_AVX_1 - Failed
-- Performing Test C_HAS_AVX_2
-- Performing Test C_HAS_AVX_2 - Success
-- Performing Test C_HAS_AVX2_1
-- Performing Test C_HAS_AVX2_1 - Failed
-- Performing Test C_HAS_AVX2_2
-- Performing Test C_HAS_AVX2_2 - Success
-- Performing Test CXX_HAS_AVX_1
-- Performing Test CXX_HAS_AVX_1 - Failed
-- Performing Test CXX_HAS_AVX_2
-- Performing Test CXX_HAS_AVX_2 - Success
-- Performing Test CXX_HAS_AVX2_1
-- Performing Test CXX_HAS_AVX2_1 - Failed
-- Performing Test CXX_HAS_AVX2_2
-- Performing Test CXX_HAS_AVX2_2 - Success
-- AVX compiler support found
-- AVX2 compiler support found
-- Performing Test BLAS_F2C_DOUBLE_WORKS
-- Performing Test BLAS_F2C_DOUBLE_WORKS - Failed
-- Performing Test BLAS_F2C_FLOAT_WORKS
-- Performing Test BLAS_F2C_FLOAT_WORKS - Success
-- Performing Test BLAS_USE_CBLAS_DOT
-- Performing Test BLAS_USE_CBLAS_DOT - Success
-- Found a library with BLAS API (mkl).
-- Found a library with LAPACK API (mkl).
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
-- MKLDNN_THREADING = OMP:COMP
CMake Warning (dev) at third_party/ideep/mkl-dnn/cmake/options.cmake:33 (option):
  Policy CMP0077 is not set: option() honors normal variables.  Run "cmake
  --help-policy CMP0077" for policy details.  Use the cmake_policy command to
  set the policy and suppress this warning.

  For compatibility with older versions of CMake, option is clearing the
  normal variable 'MKLDNN_ENABLE_CONCURRENT_EXEC'.
Call Stack (most recent call first):
  third_party/ideep/mkl-dnn/cmake/utils.cmake:24 (include)
  third_party/ideep/mkl-dnn/CMakeLists.txt:74 (include)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Found OpenMP_C: -Xpreprocessor -fopenmp -I/usr/local/include (found version "4.0")
-- Found OpenMP_CXX: -Xpreprocessor -fopenmp -I/usr/local/include (found version "4.0")
-- Found OpenMP: TRUE (found version "4.0")
-- OpenMP lib: provided by compiler
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
-- VTune profiling environment is unset
-- Found MKL-DNN: TRUE
-- Looking for mmap
-- Looking for mmap - found
-- Looking for shm_open
-- Looking for shm_open - found
-- Looking for shm_unlink
-- Looking for shm_unlink - found
-- Looking for malloc_usable_size
-- Looking for malloc_usable_size - not found
-- Performing Test C_HAS_THREAD
-- Performing Test C_HAS_THREAD - Success
-- don't use NUMA
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed
-- Check size of long double
-- Check size of long double - done
-- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE
-- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE - Success
-- Performing Test COMPILER_SUPPORTS_FLOAT128
-- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed
-- Performing Test COMPILER_SUPPORTS_SSE2
-- Performing Test COMPILER_SUPPORTS_SSE2 - Success
-- Performing Test COMPILER_SUPPORTS_SSE4
-- Performing Test COMPILER_SUPPORTS_SSE4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX
-- Performing Test COMPILER_SUPPORTS_AVX - Success
-- Performing Test COMPILER_SUPPORTS_FMA4
-- Performing Test COMPILER_SUPPORTS_FMA4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX2
-- Performing Test COMPILER_SUPPORTS_AVX2 - Success
-- Performing Test COMPILER_SUPPORTS_SVE
-- Performing Test COMPILER_SUPPORTS_SVE - Failed
-- Performing Test COMPILER_SUPPORTS_AVX512F
-- Performing Test COMPILER_SUPPORTS_AVX512F - Failed
-- Performing Test COMPILER_SUPPORTS_OPENMP
-- Performing Test COMPILER_SUPPORTS_OPENMP - Failed
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Success
-- Configuring build for SLEEF-v3.2
   Target system: Darwin-16.7.0
   Target processor: x86_64
   Host system: Darwin-16.7.0
   Host processor: x86_64
   Detected C compiler: AppleClang @ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang
-- Using option `-Wall -Wno-unused -Wno-attributes -Wno-unused-result -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef
-- Building shared libs : OFF
-- MPFR : LIB_MPFR-NOTFOUND
-- GMP : LIBGMP-NOTFOUND
-- RUNNING_ON_TRAVIS : 0
-- COMPILER_SUPPORTS_OPENMP :
-- NCCL operators skipped due to no CUDA support
-- Including IDEEP operators
-- Excluding image processing operators due to no opencv
-- Excluding video processing operators due to no opencv
-- MPI operators skipped due to no MPI support

Did you initialize all submodules via git submodule update --init?

I think I did at first (when following the README again), but I am not sure …
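A way to verify (a sketch, run from the root of the pytorch checkout): `git submodule status` prefixes uninitialized submodules with `-`, so if nothing matches that prefix, everything is initialized.

```shell
# List submodules that are NOT initialized; `git submodule status` marks
# them with a leading '-'. No matches means all are initialized.
if git submodule status | grep -q '^-'; then
  echo "some submodules are NOT initialized"
else
  echo "all submodules initialized"
fi
```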

I re-checked dependencies and re-cloned the source.

I will try again from scratch:
git submodule sync
git submodule update --init --recursive
LDFLAGS="-L/usr/local/opt/llvm/lib" CPPFLAGS="-I/usr/local/opt/llvm/include" TORCH_CUDA_ARCH_LIST=6.1 python3 setup.py install

Now that I have re-installed the dependencies and re-cloned the PyTorch source,
trying to compile from scratch leads to other errors …

It no longer shows the build percentages; instead it goes through steps like
Building CXX object caffe2/CMakeFiles
and stops at [1970/3136]

I don’t know why this new step is happening or why it is failing … any hint or known workaround? The only things I found are setting BUILD_SHARED_LIBS=OFF or USE_NINJA=OFF; I am trying that now …

[1909/3136] Building CXX object caffe2/CMakeFiles/torch.dir/sgd/learning_rate_op.cc.o
In file included from ../caffe2/sgd/learning_rate_op.cc:1:
In file included from ../caffe2/sgd/learning_rate_op.h:6:
In file included from ../caffe2/core/context.h:9:
In file included from ../caffe2/core/allocator.h:3:
In file included from ../c10/core/CPUAllocator.h:7:
../c10/util/Logging.h:191:29: warning: comparison of integers of different signs: 'const unsigned long' and 'const int' [-Wsign-compare]
BINARY_COMP_HELPER(Greater, >)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~
../c10/util/Logging.h:184:11: note: expanded from macro 'BINARY_COMP_HELPER'
    if (x op y) {                                            \
        ~ ^  ~
../caffe2/sgd/learning_rate_op.h:143:7: note: in instantiation of function template specialization 'c10::enforce_detail::Greater<unsigned long, int>' requested here
      CAFFE_ENFORCE_GT(
      ^
../c10/util/Logging.h:242:27: note: expanded from macro 'CAFFE_ENFORCE_GT'
  CAFFE_ENFORCE_THAT_IMPL(Greater((x), (y)), #x " > " #y, __VA_ARGS__)
                          ^
../caffe2/sgd/learning_rate_op.h:23:20: note: in instantiation of member function 'caffe2::LearningRateOp<float, caffe2::CPUContext>::createLearningRateFunctor' requested here
    functor_.reset(createLearningRateFunctor(policy));
                   ^
../c10/util/Registry.h:184:30: note: in instantiation of member function 'caffe2::LearningRateOp<float, caffe2::CPUContext>::LearningRateOp' requested here
    return ObjectPtrType(new DerivedType(args...));
                             ^
../caffe2/sgd/learning_rate_op.cc:4:1: note: in instantiation of function template specialization 'c10::Registerer<std::__1::basic_string<char>, std::__1::unique_ptr<caffe2::OperatorBase, std::__1::default_delete<caffe2::OperatorBase> >, const caffe2::OperatorDef &, caffe2::Workspace *>::DefaultCreator<caffe2::LearningRateOp<float, caffe2::CPUContext> >' requested here
REGISTER_CPU_OPERATOR(LearningRate, LearningRateOp<float, CPUContext>);
^
../caffe2/core/operator.h:1396:3: note: expanded from macro 'REGISTER_CPU_OPERATOR'
  C10_REGISTER_CLASS(CPUOperatorRegistry, name, __VA_ARGS__)
  ^
../c10/util/Registry.h:279:3: note: expanded from macro 'C10_REGISTER_CLASS'
  C10_REGISTER_TYPED_CLASS(RegistryName, #key, __VA_ARGS__)
  ^
../c10/util/Registry.h:238:33: note: expanded from macro 'C10_REGISTER_TYPED_CLASS'
      Registerer##RegistryName::DefaultCreator<__VA_ARGS__>,                \
                                ^
In file included from ../caffe2/sgd/learning_rate_op.cc:1:
In file included from ../caffe2/sgd/learning_rate_op.h:8:
../caffe2/sgd/learning_rate_functors.h:279:11: warning: using integer absolute value function 'abs' when argument is of floating point type [-Wabsolute-value]
    T x = abs(static_cast<T>(iter) / stepsize_ - 2 * cycle + 1);
          ^
../caffe2/sgd/learning_rate_functors.h:268:3: note: in instantiation of member function 'caffe2::CyclicalLearningRate<float>::operator()' requested here
  CyclicalLearningRate(
  ^
../caffe2/sgd/learning_rate_op.h:179:18: note: in instantiation of member function 'caffe2::CyclicalLearningRate<float>::CyclicalLearningRate' requested here
      return new CyclicalLearningRate<T>(base_lr_, max_lr, stepsize, decay);
                 ^
../caffe2/sgd/learning_rate_op.h:23:20: note: in instantiation of member function 'caffe2::LearningRateOp<float, caffe2::CPUContext>::createLearningRateFunctor' requested here
    functor_.reset(createLearningRateFunctor(policy));
                   ^
../c10/util/Registry.h:184:30: note: in instantiation of member function 'caffe2::LearningRateOp<float, caffe2::CPUContext>::LearningRateOp' requested here
    return ObjectPtrType(new DerivedType(args...));
                             ^
../caffe2/sgd/learning_rate_op.cc:4:1: note: in instantiation of function template specialization 'c10::Registerer<std::__1::basic_string<char>, std::__1::unique_ptr<caffe2::OperatorBase, std::__1::default_delete<caffe2::OperatorBase> >, const caffe2::OperatorDef &, caffe2::Workspace *>::DefaultCreator<caffe2::LearningRateOp<float, caffe2::CPUContext> >' requested here
REGISTER_CPU_OPERATOR(LearningRate, LearningRateOp<float, CPUContext>);
^
../caffe2/core/operator.h:1396:3: note: expanded from macro 'REGISTER_CPU_OPERATOR'
  C10_REGISTER_CLASS(CPUOperatorRegistry, name, __VA_ARGS__)
  ^
../c10/util/Registry.h:279:3: note: expanded from macro 'C10_REGISTER_CLASS'
  C10_REGISTER_TYPED_CLASS(RegistryName, #key, __VA_ARGS__)
  ^
../c10/util/Registry.h:238:33: note: expanded from macro 'C10_REGISTER_TYPED_CLASS'
      Registerer##RegistryName::DefaultCreator<__VA_ARGS__>,                \
                                ^
../caffe2/sgd/learning_rate_functors.h:279:11: note: use function 'std::abs' instead
    T x = abs(static_cast<T>(iter) / stepsize_ - 2 * cycle + 1);
          ^~~
          std::abs
../caffe2/sgd/learning_rate_functors.h:281:12: warning: using integer absolute value function 'abs' when argument is of floating point type [-Wabsolute-value]
        (T(abs(max_lr_)) / T(abs(base_lr_)) - 1) * std::max(T(0.0), (1 - x)) *
           ^
../caffe2/sgd/learning_rate_functors.h:281:12: note: use function 'std::abs' instead
        (T(abs(max_lr_)) / T(abs(base_lr_)) - 1) * std::max(T(0.0), (1 - x)) *
           ^~~
           std::abs
../caffe2/sgd/learning_rate_functors.h:281:30: warning: using integer absolute value function 'abs' when argument is of floating point type [-Wabsolute-value]
        (T(abs(max_lr_)) / T(abs(base_lr_)) - 1) * std::max(T(0.0), (1 - x)) *
                             ^
../caffe2/sgd/learning_rate_functors.h:281:30: note: use function 'std::abs' instead
        (T(abs(max_lr_)) / T(abs(base_lr_)) - 1) * std::max(T(0.0), (1 - x)) *
                             ^~~
                             std::abs
4 warnings generated.
[1914/3136] Building CXX object caffe2/CMakeFiles/torch.dir/share/contrib/nnpack/conv_op.cc.o
../caffe2/share/contrib/nnpack/conv_op.cc:183:16: warning: unused variable 'output_channels' [-Wunused-variable]
  const size_t output_channels = Y->dim32(1);
               ^
../caffe2/share/contrib/nnpack/conv_op.cc:181:16: warning: unused variable 'batch_size' [-Wunused-variable]
  const size_t batch_size = X.dim32(0);
               ^
../caffe2/share/contrib/nnpack/conv_op.cc:155:13: warning: unused variable 'N' [-Wunused-variable]
  const int N = X.dim32(0), C = X.dim32(1), H = X.dim32(2), W = X.dim32(3);
            ^
../caffe2/share/contrib/nnpack/conv_op.cc:182:16: warning: unused variable 'input_channels' [-Wunused-variable]
  const size_t input_channels = X.dim32(1);
               ^
../caffe2/share/contrib/nnpack/conv_op.cc:175:27: warning: comparison of integers of different signs: 'size_type' (aka 'unsigned long') and 'const int' [-Wsign-compare]
    if (dummyBias_.size() != M) {
        ~~~~~~~~~~~~~~~~~ ^  ~
In file included from ../caffe2/share/contrib/nnpack/conv_op.cc:7:
In file included from ../caffe2/core/context.h:9:
In file included from ../caffe2/core/allocator.h:3:
In file included from ../c10/core/CPUAllocator.h:7:
../c10/util/Logging.h:189:28: warning: comparison of integers of different signs: 'const unsigned long' and 'const int' [-Wsign-compare]
BINARY_COMP_HELPER(Equals, ==)
~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
../c10/util/Logging.h:184:11: note: expanded from macro 'BINARY_COMP_HELPER'
    if (x op y) {                                            \
        ~ ^  ~
../caffe2/share/contrib/nnpack/conv_op.cc:274:11: note: in instantiation of function template specialization 'c10::enforce_detail::Equals<unsigned long, int>' requested here
          CAFFE_ENFORCE_EQ(transformedFilters_.size(), group_);
          ^
../c10/util/Logging.h:232:27: note: expanded from macro 'CAFFE_ENFORCE_EQ'
  CAFFE_ENFORCE_THAT_IMPL(Equals((x), (y)), #x " == " #y, __VA_ARGS__)
                          ^
6 warnings generated.
[1915/3136] Building CXX object caffe2/CMakeFiles/torch.dir/transforms/common_subexpression_elimination.cc.o
../caffe2/transforms/common_subexpression_elimination.cc:104:23: warning: comparison of integers of different signs: 'int' and 'size_type' (aka 'unsigned long') [-Wsign-compare]
    for (int i = 0; i < subgraph.size(); i++) {
                    ~ ^ ~~~~~~~~~~~~~~~
1 warning generated.
[1918/3136] Building CXX object caffe2/CMakeFiles/torch.dir/transforms/pattern_net_transform.cc.o
../caffe2/transforms/pattern_net_transform.cc:117:30: warning: comparison of integers of different signs: 'value_type' (aka 'int') and 'size_type' (aka 'unsigned long') [-Wsign-compare]
    if (inverse_ops_[parent] < subgraph.size() &&
        ~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~
../caffe2/transforms/pattern_net_transform.cc:125:29: warning: comparison of integers of different signs: 'value_type' (aka 'int') and 'size_type' (aka 'unsigned long') [-Wsign-compare]
    if (inverse_ops_[child] < subgraph.size() &&
        ~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~
../caffe2/transforms/pattern_net_transform.cc:153:21: warning: comparison of integers of different signs: 'int' and 'size_type' (aka 'unsigned long') [-Wsign-compare]
  for (int i = 0; i < match.size(); i++) {
                  ~ ^ ~~~~~~~~~~~~
../caffe2/transforms/pattern_net_transform.cc:182:21: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
  for (int i = 0; i < r_.size(); i++) {
                  ~ ^ ~~~~~~~~~
4 warnings generated.
[1920/3136] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/autograd/generated/Functions.cpp.o
../torch/csrc/autograd/generated/Functions.cpp:401:8: warning: unused function 'cumprod_backward' [-Wunused-function]
Tensor cumprod_backward(const Tensor &grad, const Tensor &input, int64_t dim, optional<ScalarType> dtype) {
       ^
1 warning generated.
[1941/3136] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/autograd/profiler.cpp.o
../torch/csrc/autograd/profiler.cpp:72:26: warning: comparison of integers of different signs: 'int' and 'size_type' (aka 'unsigned long') [-Wsign-compare]
        for(int i = 0; i < shapes.size(); i++) {
                       ~ ^ ~~~~~~~~~~~~~
../torch/csrc/autograd/profiler.cpp:75:34: warning: comparison of integers of different signs: 'int' and 'size_type' (aka 'unsigned long') [-Wsign-compare]
            for(int dim = 0; dim < shapes[i].size(); dim++) {
                             ~~~ ^ ~~~~~~~~~~~~~~~~
../torch/csrc/autograd/profiler.cpp:77:22: warning: comparison of integers of different signs: 'int' and 'unsigned long' [-Wsign-compare]
              if(dim < shapes[i].size() - 1)
                 ~~~ ^ ~~~~~~~~~~~~~~~~~~~~
../torch/csrc/autograd/profiler.cpp:84:16: warning: comparison of integers of different signs: 'int' and 'unsigned long' [-Wsign-compare]
          if(i < shapes.size() - 1)
             ~ ^ ~~~~~~~~~~~~~~~~~
4 warnings generated.
[1958/3136] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/interpreter.cpp.o
../torch/csrc/jit/interpreter.cpp:292:23: warning: moving a temporary object prevents copy elision [-Wpessimizing-move]
    can_emit_inline = std::move(CanEmitInline(graph).can_emit_inline_);
                      ^
../torch/csrc/jit/interpreter.cpp:292:23: note: remove std::move call here
    can_emit_inline = std::move(CanEmitInline(graph).can_emit_inline_);
                      ^~~~~~~~~~                                     ~
1 warning generated.
[1967/3136] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/alias_analysis.cpp.o
FAILED: caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/alias_analysis.cpp.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++  -DAT_PARALLEL_OPENMP=1 -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DIDEEP_USE_MKL -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_CUDA -D_FILE_OFFSET_BITS=64 -D_THP_CORE -Dtorch_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../caffe2/../torch/csrc/api -I../caffe2/../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -I../caffe2/../torch/../aten/src -Icaffe2/aten/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../caffe2/../torch/csrc -I../caffe2/../torch/../third_party/miniz-2.0.8 -I../aten/src/TH -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../third_party/miniz-2.0.8 -I../caffe2/core/nomnigraph/include -Icaffe2/aten/src/THC -I../aten/src/THC -I../aten/src/THCUNN -I../aten/src/ATen/cuda -I../c10/.. -Ithird_party/ideep/mkl-dnn/include -I../third_party/ideep/mkl-dnn/src/../include -I../third_party/QNNPACK/include -I../third_party/pthreadpool/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/QNNPACK/deps/clog/include -I../third_party/NNPACK/include -I../third_party/cpuinfo/include -I../third_party/FP16/include -I../c10/cuda/../.. 
-isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /usr/local/include -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../cmake/../third_party/eigen -isystem /usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/include/python3.7m -isystem /usr/local/lib/python3.7/site-packages/numpy/core/include -isystem ../cmake/../third_party/pybind11/include -isystem /opt/rocm/hip/include -isystem /include -isystem ../cmake/../third_party/cub -isystem /usr/local/cuda/include -isystem ../third_party/ideep/mkl-dnn/include -isystem ../third_party/ideep/include -isystem include -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations -Xpreprocessor -fopenmp -I/usr/local/include -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -fno-math-errno -fno-trapping-math -Wno-unused-private-field -Wno-missing-braces -Wno-c++14-extensions -Wno-constexpr-not-const -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3  -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk -mmacosx-version-min=10.9 -fPIC   -DCUDA_HAS_FP16=1 -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -Wall -Wextra 
-Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-missing-braces -fvisibility=hidden -DCAFFE2_BUILD_MAIN_LIB -O2 -std=gnu++11 -MD -MT caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/alias_analysis.cpp.o -MF caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/alias_analysis.cpp.o.d -o caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/alias_analysis.cpp.o -c ../torch/csrc/jit/passes/alias_analysis.cpp
In file included from ../torch/csrc/jit/passes/alias_analysis.cpp:1:
In file included from ../torch/csrc/jit/passes/alias_analysis.h:3:
../c10/util/flat_hash_map.h:1367:24: error: no member named 'out_of_range' in namespace 'std'
            throw std::out_of_range("Argument passed to at() was not in the map.");
                  ~~~~~^
../c10/util/flat_hash_map.h:1374:24: error: no member named 'out_of_range' in namespace 'std'
            throw std::out_of_range("Argument passed to at() was not in the map.");
                  ~~~~~^
2 errors generated.
[1970/3136] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/batch_mm.cpp.o
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "setup.py", line 756, in <module>
    build_deps()
  File "setup.py", line 321, in build_deps
    cmake=cmake)
  File "/Users/adrienbitton/Desktop/pytorch/tools/build_pytorch_libs.py", line 63, in build_caffe2
    cmake.build(my_env)
  File "/Users/adrienbitton/Desktop/pytorch/tools/setup_helpers/cmake.py", line 331, in build
    self.run(build_args, my_env)
  File "/Users/adrienbitton/Desktop/pytorch/tools/setup_helpers/cmake.py", line 142, in run
    check_call(command, cwd=self.build_dir, env=env)
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 347, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '4']' returned non-zero exit status 1.

Setting BUILD_SHARED_LIBS=OFF makes the build break before reaching 0% (similar to the previous run, which stopped at [1970/3136], but with a smaller total build size).

Setting USE_NINJA=OFF makes the build break at 78% (previously 75%) with the following:

[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/autograd/saved_variable.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/autograd/variable.cpp.o
4 warnings generated.
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/autodiff.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/attributes.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/argument_spec.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/pass_manager.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/pickler.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/graph_executor.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/import_source.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/import.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/pickle.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/import_export_helpers.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/instruction.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/interpreter.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/constants.cpp.o
[ 78%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/node_hashing.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/ir.cpp.o
/Users/adrienbitton/Desktop/pytorch/torch/csrc/jit/interpreter.cpp:292:23: warning: moving a temporary object prevents copy elision [-Wpessimizing-move]
    can_emit_inline = std::move(CanEmitInline(graph).can_emit_inline_);
                      ^
/Users/adrienbitton/Desktop/pytorch/torch/csrc/jit/interpreter.cpp:292:23: note: remove std::move call here
    can_emit_inline = std::move(CanEmitInline(graph).can_emit_inline_);
                      ^~~~~~~~~~                                     ~
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/irparser.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/jit_log.cpp.o
1 warning generated.
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/operator.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/register_c10_ops.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/subgraph_matcher.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/symbolic_script.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/profiling_record.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/profiling_graph_executor_impl.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/alias_analysis.cpp.o
In file included from /Users/adrienbitton/Desktop/pytorch/torch/csrc/jit/passes/alias_analysis.cpp:1:
In file included from /Users/adrienbitton/Desktop/pytorch/torch/csrc/jit/passes/alias_analysis.h:3:
/Users/adrienbitton/Desktop/pytorch/c10/util/flat_hash_map.h:1367:24: error: no member named 'out_of_range' in namespace 'std'
            throw std::out_of_range("Argument passed to at() was not in the map.");
                  ~~~~~^
/Users/adrienbitton/Desktop/pytorch/c10/util/flat_hash_map.h:1374:24: error: no member named 'out_of_range' in namespace 'std'
            throw std::out_of_range("Argument passed to at() was not in the map.");
                  ~~~~~^
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/batch_mm.cpp.o
[ 79%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/bailout_graph.cpp.o
2 errors generated.
make[2]: *** [caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/alias_analysis.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [caffe2/CMakeFiles/torch.dir/all] Error 2
make: *** [all] Error 2
Traceback (most recent call last):
  File "setup.py", line 756, in <module>
    build_deps()
  File "setup.py", line 321, in build_deps
    cmake=cmake)
  File "/Users/adrienbitton/Desktop/pytorch/tools/build_pytorch_libs.py", line 63, in build_caffe2
    cmake.build(my_env)
  File "/Users/adrienbitton/Desktop/pytorch/tools/setup_helpers/cmake.py", line 331, in build
    self.run(build_args, my_env)
  File "/Users/adrienbitton/Desktop/pytorch/tools/setup_helpers/cmake.py", line 142, in run
    check_call(command, cwd=self.build_dir, env=env)
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 347, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '4']' returned non-zero exit status 2.